Ability to use FQDN to access workload clusters, rather than IP address

Doing my part to try and help gitops take over the world.

By targeting applications to install on a kubernetes cluster by fqdn rather than ip address, it becomes possible to delete a cluster, stand up a new cluster under the same name, and watch as all the applications reinstall automatically to the new cluster via gitops using something like argocd (sketch below).
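argocd tracks clusters via declarative cluster secrets, so this mostly comes down to what goes into the ‘server’ field; a minimal sketch, with hypothetical names and domain:

apiVersion: v1
kind: Secret
metadata:
  name: workload-cluster-1
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: workload-cluster-1
  # fqdn instead of ip: a replacement cluster that re-uses this dns name is
  # reachable at the same address, so argocd reinstalls everything targeted at it
  server: https://kubeapi.workload-cluster-1.example.com:6443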

Feature request submitted (cluster-api provider for vsphere):
https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/issues/2098

If implemented, this would enable all consumers of vmware vcenter / tanzu to better implement gitops.

Tempting to drop everything and be the one to implement the code and submit a pull request, just to get the credit.  Would feel good to know you’ve helped so many people.

Creating webapps w/ flutter & deploying to kubernetes

In the past I’ve written guis in many languages, but found my passion more in backend server apis.  Along those lines, REST, swagger, OIDC, and websockets have been a dream come true.

Then one day I discovered flutter.  Created by google, it is a cross-platform framework that compiles natively for both Android and iPhone.

Flutter changed everything for me: it made me love creating guis.  It felt like the first real solution to creating a gui, with everything prior being trail-blazer projects that helped figure out what was needed to one day lead to the creation of flutter.

Turns out it can be used to create windows & linux apps as well, with some exceptions.  On a mobile device you get a free database with your application; if you deploy on windows / linux you have to solve the database on your own.

Today I noticed that flutter can also be used to create a webapp.  I’m thinking of creating a webapp, placing it inside a container, and deploying it to kubernetes along with a database in the same namespace, making for a situation similar to how mobile apps get a database (sketch below).  So much potential, this is exciting.
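A rough sketch of the shape this could take; the image names and the choice of postgres are my assumptions, with the flutter web build copied into the webapp image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  namespace: my-flutter-app
spec:
  replicas: 1
  selector:
    matchLabels: { app: webapp }
  template:
    metadata:
      labels: { app: webapp }
    spec:
      containers:
        - name: webapp
          # hypothetical image: nginx serving the output of `flutter build web`
          image: registry.example.com/my-flutter-app:latest
          ports: [{ containerPort: 80 }]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  namespace: my-flutter-app
spec:
  replicas: 1
  selector:
    matchLabels: { app: db }
  template:
    metadata:
      labels: { app: db }
    spec:
      containers:
        # the database that stands in for the ‘free’ on-device database
        - name: postgres
          image: postgres:16
          env:
            - name: POSTGRES_PASSWORD
              valueFrom: { secretKeyRef: { name: db-credentials, key: password } }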

kubernetes daytwo controllers

Thinking again about daytwo operations (controllers watching for cluster events):

daytwo-argocd-register-controller

  • watches for cluster.yaml (tanzu, clusterapi, etc…) and registers clusters with argocd automatically once they are in ‘ready’ state
  • syncs ‘addons’ labels from cluster.yaml to argocd cluster secrets to auto-install addons, including pinniped-concierge and pinniped-www (example secret below)
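A sketch of what the synced argocd cluster secret could look like; the addons label scheme here is my own assumption, not an existing convention:

apiVersion: v1
kind: Secret
metadata:
  name: workload-cluster-1
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
    # labels copied from cluster.yaml by daytwo-argocd-register-controller;
    # applicationset generators can select on these to auto-install addons
    addons/pinniped-concierge: "true"
    addons/pinniped-www: "true"
type: Opaque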

daytwo-pinniped-register-controller

  • generates a pinniped kubeconfig if new cluster
  • adds to git repo if new cluster, removes from git repo if cluster decommissioned
  • regenerates the configmap used by pinniped-www (sketch below)
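The regenerated configmap could look something like this, with hypothetical cluster names and each file being the output of ‘pinniped get kubeconfig’ for that cluster:

apiVersion: v1
kind: ConfigMap
metadata:
  name: pinniped-kubeconfigs
  namespace: pinniped-www
data:
  # one key per cluster, re-generated from the git repo whenever it changes
  workload-cluster-1.yaml: |
    apiVersion: v1
    kind: Config
    # ...output of `pinniped get kubeconfig` for workload-cluster-1...
  workload-cluster-2.yaml: |
    # ...and so on, one per cluster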

daytwo-external-dns-register-controller

  • watches for the service associated with a cluster to appear & annotates it with an fqdn; goal: add a dns entry for each cluster kube-api (example annotation below)
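For example, with the external-dns addon watching services, annotating the cluster’s kube-api service is enough to publish the record; names and domain are hypothetical, the annotation itself is external-dns’s standard one:

apiVersion: v1
kind: Service
metadata:
  name: workload-cluster-1-kubeapi
  annotations:
    # external-dns sees this and creates the dns record for the cluster kube-api
    external-dns.alpha.kubernetes.io/hostname: kubeapi.workload-cluster-1.example.com
spec:
  type: LoadBalancer
  ports:
    - port: 6443
      targetPort: 6443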

Or perhaps, if desired, the event could trigger automation somewhere else:

daytwo-cluster-event-controller

  • callback to jenkins
  • callback to awx
  • callback to vmware-aria
  • etc …

argocd & pinniped

Kubernetes implements OIDC via arguments on the API Server such as --oidc-issuer-url, etc. (see the excerpt below). This works great; however, kubelogin, a common utility for accessing kubernetes this way, isn’t so great: a web browser closed the wrong way can cause the authentication to fail. So what’s a better solution?
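For reference, the relevant api server arguments; issuer and client id are placeholders:

# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
    - command:
        - kube-apiserver
        - --oidc-issuer-url=https://idp.example.com/realms/kubernetes
        - --oidc-client-id=kubernetes
        - --oidc-username-claim=email
        - --oidc-groups-claim=groups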

Pinniped allows OIDC login to be configured on a running kubernetes cluster. This is great in that you don’t have to restart the cluster to get OIDC to work, and sometimes you just don’t have access to configure OIDC via the API Server anyway (example below).
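With the pinniped concierge installed, trusting an upstream issuer is just another resource, no api server restart required; a minimal sketch with placeholder issuer and audience:

apiVersion: authentication.concierge.pinniped.dev/v1alpha1
kind: JWTAuthenticator
metadata:
  name: supervisor-jwt-authenticator
spec:
  # the pinniped supervisor (or any oidc issuer) this cluster should trust
  issuer: https://pinniped-supervisor.example.com/issuer
  audience: workload-cluster-1
  claims:
    username: email
    groups: groups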

Pinniped can be configured and deployed in the usual way that you deploy addons to your clusters. (see argocd – getting started, and argocd – getting started – addons)

Whether using the native OIDC solution or pinniped, a kubeconfig must be created which can be used with kubectl to access the clusters. Pinniped has a utility to create the kubeconfig, but how do you get it to the end user? There have been various methods to accomplish this:

  • the user submits a ticket, through automation you send them the pinniped kubeconfig via email
  • all pinniped kubeconfig files are made available via a git repo all clients have access to
  • (here’s our idea) make the pinniped kubeconfig available via a website, so we don’t need to grant git repo access in order to deliver the kubeconfig

My strategy has been to use argocd, applicationsets, and a single container running on all clusters as a means to distribute the pinniped kubeconfig to all interested parties.

* Note: pinniped kubeconfigs contain no secrets and do not need to be treated as secrets (other than as relates to a potential denial-of-service attack), because the user still needs to log in, and access is granted via groups.

Here’s the implementation:

  • argocd applicationsets allow the cluster name to be used as a variable, which means we can define the url we want in each cluster to be: https://pinniped.<cluster>.<domain> (sketch after this list)
  • the applicationset deploys a container running php which uses a script to determine the cluster name; the php site displays some information to help the user get started with pinniped, plus a link to the pinniped kubeconfig
  • a configmap containing all pinniped kubeconfig files is mounted into the container as a volume; this works because the url generated in the previous step includes the name of the cluster and so points to the correct pinniped kubeconfig
  • because this solution is deployed via argocd, when a new cluster is deployed and a pinniped kubeconfig is created & added to the git repo, a script re-generates the configmap from the repo and all pinniped websites update automatically; we just have to add a step to update the pinniped configmap as part of the new-cluster deployment process
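A sketch of the applicationset; the repo url, chart path, and domain are placeholders, but the {{name}} variable from the cluster generator is what makes the per-cluster url work:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: pinniped-www
  namespace: argocd
spec:
  generators:
    - clusters: {}                 # one application per registered cluster
  template:
    metadata:
      name: 'pinniped-www-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://git.example.com/addons.git
        targetRevision: HEAD
        path: pinniped-www
        helm:
          parameters:
            # becomes https://pinniped.<cluster>.<domain>
            - name: ingress.host
              value: 'pinniped.{{name}}.example.com'
      destination:
        server: '{{server}}'
        namespace: pinniped-www
      syncPolicy:
        automated: {}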

(video: todo)

What is the best way to handle an expiring certificate authority?

So far, the best solution I’ve found:

  • manually store the ca-bundle in a vault, such as azure key vault or hashicorp vault
  • configure the external-secrets addon to use the vault
  • mount the ca-bundle as a volume in each pod that needs it, using the secret provided by external-secrets

Now when the ca expires you only have to update the ca-bundle in a single location in the vault; argocd will take care of restarting the pods when the secret updates (sketch below).
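A sketch with the external-secrets operator; the store name and vault key are placeholders:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: ca-bundle
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault                # ClusterSecretStore pointing at the vault
    kind: ClusterSecretStore
  target:
    name: ca-bundle            # the secret each pod mounts as a volume
  data:
    - secretKey: ca.crt
      remoteRef:
        key: ca-bundle         # the single location to update when the ca rolls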

If anyone has a better idea feel free to share.

k8s daytwo operation ideas

controllers:

  • watch for cluster.yaml updates & reflect the ‘addons’ annotations from cluster.yaml onto the argocd cluster secrets
  • watch argocd repo certs, check for expiration, and re-add them automatically
  • (tanzu) watch for the service associated with a cluster to appear & annotate it with an fqdn automatically (in order to add a dns entry for each cluster kube-api)
  • watch for certificate authority expiration and update the ‘ca-bundle’ stored in the vault

Idea: repo-manager

Why

In a world of containers, developers need multiple linux repos mirrored on-prem for use when building or modifying images.

What

Similar to how cert-manager works with providers to extend functionality, repo-manager can be extended to mirror additional linux distros.

How

repo-manager provides an operator-type controller which watches for the CRD type ‘mirror.aarr.xyz’ & manages, for each repo mirror (sketch after this list):

  • the deployment of a pod for mirroring
  • a pvc for each pod
  • increasing the pvc size as needed
  • ingress configuration to reach each repo using a subpath http(s)://mirror.<fqdn>/<path>
  • status visible via kubectl by displaying the relevant CRDs
  • mapping of a ca-bundle
  • repo-specific settings
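Since repo-manager doesn’t exist yet, everything below is a guess at what a mirror resource could look like:

apiVersion: mirror.aarr.xyz/v1alpha1
kind: Mirror
metadata:
  name: ubuntu
spec:
  upstream: http://archive.ubuntu.com/ubuntu   # the repo to mirror
  storage:
    size: 500Gi              # initial pvc size, grown as needed
  ingress:
    path: /ubuntu            # served at http(s)://mirror.<fqdn>/ubuntu
  caBundleSecret: ca-bundle  # the mapped ca-bundle
  settings: {}               # repo-specific settings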

Also

  • an overall web interface
  • settings which apply to all repo mirrors

gge ingresses, a view into the world of kubernetes

(gge) This is so fun, taking advantage of kubernetes cronjobs …

The generic game engine will host tournaments.

I was planning on daily and weekly tournaments that everyone can participate in, but also to allow folks to create their own.  I figured I might be able to handle scheduling the tournaments using something in kubernetes, and I was not disappointed!

The workflow goes like this:

  • create a tournament instance
  • create a cronjob with the tournament start time
  • when the cronjob fires, it creates a pod that runs curl and accesses an internal-only tournament api to start the tournament
  • tournaments run only once by default, but setting a value makes them repeat on the cronjob schedule (cronjob sketch below)
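A sketch of the generated cronjob; the schedule, names, and api path are made up:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: tournament-weekly-4242
spec:
  schedule: "0 18 * * 6"             # the tournament start time
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: start-tournament
              image: curlimages/curl
              args:
                - -X
                - POST
                # internal-only tournament api
                - http://gge-tournament.gge.svc.cluster.local/tournament/weekly-4242/start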

Just have to make sure to delete the cronjob along with the tournament.  Normally the tournament controller receives a DELETE event and goes ahead and deletes the cronjob, but if someone deletes the tournament using kubectl while the controller isn’t running to catch it, there is a chance of an orphaned cronjob (oh no).  Will have to run a cleanup process now and then to look for orphaned cronjobs; wonder if there might be a convenient way to do that???
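One convenient option might be to skip the cleanup process entirely: if the cronjob carries an ownerReference pointing at the tournament resource (assuming the two live in the same namespace, or the tournament is cluster-scoped), kubernetes garbage collection deletes the cronjob whenever the tournament is deleted, even while the controller is down. A sketch, with a hypothetical api group/version for the tournament:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: tournament-weekly-4242
  ownerReferences:
    - apiVersion: gge.aarr.xyz/v1alpha1           # hypothetical group/version
      kind: Tournament
      name: weekly-4242
      uid: 00000000-0000-0000-0000-000000000000   # uid of the live tournament object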

Generic Boardgame Game Engine is coming along, it’s playable …

Putting this together was so much fun, and surprisingly quick to implement.
Still got a few more features to add, let’s just call this the teaser trailer …

$ k get all
NAME                                                               READY   STATUS    RESTARTS   AGE
pod/gge-controller-tournament-single-elimination-db6b6f474-fxg2c   1/1     Running   0          3d2h
pod/gge-game-77767987b5-z6kt8                                      1/1     Running   0          2d9h
pod/gge-tournament-6557567558-k4hfl                                1/1     Running   0          2d9h
pod/gge-gateway-c49fb8549-6wwc6                                    1/1     Running   0          2d9h
pod/gge-game-template-79c4bf84fc-54rng                             1/1     Running   0          2d6h
pod/gge-tournament-template-854f5c4475-txvmb                       1/1     Running   0          2d6h
pod/gge-controller-game-tic-tac-toe-576cd5d665-b4fzc               1/1     Running   0          2d3h
pod/gge-auth-5b4b8bb885-lp7j7                                      1/1     Running   0          2d2h

NAME                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/gge-auth                  ClusterIP   10.97.159.55     <none>        80/TCP    4d1h
service/gge-gateway               ClusterIP   10.96.99.201     <none>        80/TCP    4d1h
service/gge-game                  ClusterIP   10.100.145.200   <none>        80/TCP    4d1h
service/gge-tournament            ClusterIP   10.109.47.239    <none>        80/TCP    2d12h
service/gge-tournament-template   ClusterIP   10.102.242.111   <none>        80/TCP    2d12h
service/gge-game-template         ClusterIP   10.102.177.107   <none>        80/TCP    2d11h

NAME                                                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/gge-controller-tournament-single-elimination   1/1     1            1           3d5h
deployment.apps/gge-game                                       1/1     1            1           4d1h
deployment.apps/gge-tournament                                 1/1     1            1           2d12h
deployment.apps/gge-gateway                                    1/1     1            1           4d1h
deployment.apps/gge-game-template                              1/1     1            1           2d11h
deployment.apps/gge-tournament-template                        1/1     1            1           2d12h
deployment.apps/gge-controller-game-tic-tac-toe                1/1     1            1           3d4h
deployment.apps/gge-auth                                       1/1     1            1           4d1h

NAME                                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/gge-controller-tournament-single-elimination-db6b6f474   1         1         1       3d5h
replicaset.apps/gge-game-77767987b5                                      1         1         1       4d1h
replicaset.apps/gge-tournament-6557567558                                1         1         1       2d12h
replicaset.apps/gge-gateway-c49fb8549                                    1         1         1       4d1h
replicaset.apps/gge-game-template-79c4bf84fc                             1         1         1       2d11h
replicaset.apps/gge-tournament-template-854f5c4475                       1         1         1       2d12h
replicaset.apps/gge-controller-game-tic-tac-toe-576cd5d665               1         1         1       3d4h
replicaset.apps/gge-auth-5b4b8bb885                                      1         1         1       4d1h

NAME                                         ENABLED   SHORT                LONG                 PATH
tournament.gge.aarr.xyz/single-elimination   true      single-elimination   Single Elimination   /tournament/single-elimination
tournament.gge.aarr.xyz/double-elimination   true      double-elimination   Double Elimination   /tournament/double-elimination
tournament.gge.aarr.xyz/round-robin          true      round-robin          Round-Robin          /tournament/round-robin

NAME                             ENABLED   SHORT          LONG           PATH
game.gge.aarr.xyz/connect-four   true      connect-four   Connect Four   /game/connect-four
game.gge.aarr.xyz/2048           true      2048           2048           /game/2048
game.gge.aarr.xyz/reversi        true      reversi        Reversi        /game/reversi
game.gge.aarr.xyz/tic-tac-toe    true      tic-tac-toe    Tic-Tac-Toe    /game/tic-tac-toe
game.gge.aarr.xyz/tripletown     true      tripletown     Triple Town    /game/tripletown