They’ll learn kubernetes on the fly

If you think that simply making kubernetes available and giving your developers access will result in apps being deployed to kubernetes, you may need to rethink that assumption.

When folks are thinking about introducing kubernetes into the environment, instead of starting with “should we go with a cloud provider or on-prem?”, think about who will actually be deploying things into the kubernetes environment. Will developers build containers which will be deployed into the environment? If so, will they deploy those containers into kubernetes themselves, or will another team do so on their behalf?

It is best if developers have some awareness of kubernetes if that’s the environment being used; similarly, if you have always deployed to windows but are now going to deploy to linux, it would be good for the developers to have some awareness of linux.

Kubernetes tends to be a dream environment for developers to deploy to. They’ll love it: they will be able to perform deployments themselves and build automation around them, and infrastructure as code and gitops become a practical reality. But they need time to get up to speed with kubernetes before they start to see how great it is. In the meantime they may hesitate, choosing instead to use tools they are already familiar with.

A good class will help to onboard folks to kubernetes, but a bad class might make employees want to run away. (a good intro class is the CKAD class on udemy by Mumshad Mannambeth, with a new email it should be < $30)

Kubernetes is not difficult, in the same way that writing code is not difficult, but it is a skill set. You wouldn’t hire someone without programming experience and just expect them to pick it up on the fly, would you?

Be careful that you don’t end up in a situation where your kubernetes administrators are the only folks who know how to deploy to kubernetes.

(idea) ssh menu and ssh menu webapi

Single linux system:
When you are using linux and need to ssh, instead of typing ssh you type ‘s’, which displays a menu of past ssh connections you can connect to, sorted by name or by most recently used, with the ability to adjust the sort order, edit entries, delete entries, etc.
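
A quick sketch of what the single-system version could look like, assuming connection history is kept one entry per line (“name user@host”) in a plain text file; ~/.sshmenu is a made-up path:

#!/bin/bash
# 's' - minimal sshmenu sketch; history lives in ~/.sshmenu (hypothetical path),
# one "name user@host" entry per line, most recently used first
HISTORY_FILE="$HOME/.sshmenu"
touch "$HISTORY_FILE"

# read the history into a menu
mapfile -t entries < "$HISTORY_FILE"

PS3="Connect to: "
select entry in "${entries[@]}" quit
do
  [ "$entry" == "quit" ] && exit 0
  [ -z "$entry" ] && continue
  # move the chosen entry to the top so the menu stays sorted by most recently used
  { echo "$entry"; grep -vxF "$entry" "$HISTORY_FILE"; } > "$HISTORY_FILE.tmp"
  mv "$HISTORY_FILE.tmp" "$HISTORY_FILE"
  # second column is the user@host to connect to
  exec ssh "$(echo "$entry" | awk '{print $2}')"
done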

Multiple linux systems:
I’m not the first to have this idea; there are a few sshmenu programs out there. But what if we were to create a webapi (hosted via kubernetes, of course) with OIDC enabled? Then on any linux system, when you type ‘s’, it would ask for the sshmenu server (or use a default server from a config file), pop up a web browser for you to log in, obtain an apikey, and use that apikey from that point on to display your history of ssh connections on all your linux systems (keeping the sshmenu in sync across all of them). There could be a mobile app and a web browser app as well.
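
Very roughly, the client side could look like the following; the server url, the /api/connections endpoint, and the apikey file are all made up to illustrate the flow, not an existing API:

#!/bin/bash
# 's' (networked) - sketch of an sshmenu webapi client; everything here is hypothetical
SERVER="${SSHMENU_SERVER:-https://sshmenu.example.com}"
APIKEY_FILE="$HOME/.config/sshmenu/apikey"

if [ ! -f "$APIKEY_FILE" ]
then
  # first run: open a browser for the OIDC login, then paste the issued apikey
  xdg-open "$SERVER/login" 2>/dev/null || echo "Log in at: $SERVER/login"
  read -r -p "Paste apikey: " key
  mkdir -p "$(dirname "$APIKEY_FILE")"
  echo "$key" > "$APIKEY_FILE"
fi

# pull the shared connection history (one user@host per line) and present it as a menu
mapfile -t entries < <(curl -fsS -H "Authorization: Bearer $(cat "$APIKEY_FILE")" \
  "$SERVER/api/connections")

PS3="Connect to: "
select entry in "${entries[@]}"
do
  [ -n "$entry" ] && exec ssh "$entry"
done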

Enterprise ready:
Using groups claims it would of course be possible to have admin users and regular users, but a user could also be added to a group and then see all of that group’s shared ssh connections, so a team could share out all their common ssh connections.

(bash) script to fully clean up a kubernetes app

When experimenting with something like ceph (installing it, making changes, uninstalling and reinstalling …) you will find that more advanced apps tend to implement finalizers, making a full uninstall rather challenging. More complex apps tend to ship an uninstaller script for just this reason. When such a script is lacking, here is a generic script which can take care of much, or all, of the cleanup work:

** Note, you are entering a danger zone. **

#!/bin/bash
# Clean up a kubernetes app by force-deleting every custom resource belonging to
# CRDs whose name matches the search string, clearing finalizers along the way.
# Note: namespaced resources are only cleaned up in the current namespace.

if [ -z "$1" ]
then
  echo "Syntax:"
  echo ""
  echo "$0 <searchstr>"

  exit 1
fi

# find all CRDs whose name contains the search string
CRDS=$(kubectl get crd --no-headers | grep "$1" | awk '{print $1}')

for crd in $CRDS
do
  echo ""
  echo "$crd"

  # find instances of this CRD whose name contains the search string
  RESOURCES=$(kubectl get "$crd" --no-headers | grep "$1" | awk '{print $1}')

  for next in $RESOURCES
  do
    echo "$next"
    # clear finalizers so the delete does not hang, then delete the resource
    kubectl patch "$crd" "$next" -p '{"metadata":{"finalizers":null}}' --type=merge
    kubectl delete "$crd" "$next"
  done
done
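
For example, to clean up whatever a ceph experiment left behind (assuming its CRDs and resources all contain “ceph” in their names, and the script was saved as cleanup.sh):

chmod +x cleanup.sh
./cleanup.sh ceph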

(gitops) argocd phoenix configuration: clusterapi with vcluster provider

Standardized git repo layouts help to keep deployments consistent and clean:

k-argocd
- /appofapps/clusters/application.yaml
- /apps
  - /argocd-seed/update.sh
  - /argocd/applicationset.yaml
  - /clusterapi/applicationset.yaml
  - /daytwo/applicationset.yaml
- /projects
  - /addons.yaml
  - /developer.yaml
  - /devsecops.yaml

k-argocd-addons
- /apps
  - /adcs-issuer-system/applicationset.yaml
  - /adcs-issuer-system/base/Chart.yaml
  - /cert-manager/applicationset.yaml
  - /external-dns/applicationset.yaml
  - /external-dns-root/applicationset.yaml
  - /fluent-bit/applicationset.yaml
  - /kasten/applicationset.yaml
  - /nginx-ingress/applicationset.yaml
  - /metrics-server/applicationset.yaml
  - /pinniped-concierge/applicationset.yaml
  - /prometheus/applicationset.yaml

k-argocd-clusters
- /clusters
  - /vc-non.yaml
  - /vc-prod.yaml

k-vc-non
- /appofapps
  - /namespaces/application.yaml
- /apps
  - /example/applicationset.yaml
  - /example/base/Chart.yaml
- /namespaces
  - /example/namespace.yaml
  - /example/resourcequota.yaml
  - /example/servicemesh.yaml

k-vc-prod
- /appofapps
  - /namespaces/application.yaml
- /apps
  - /example/applicationset.yaml
  - /example/base/Chart.yaml
- /namespaces
  - /example/namespace.yaml
  - /example/resourcequota.yaml
  - /example/servicemesh.yaml

daytwo automates several steps needed when first deploying clusters:

  • registers the cluster with argocd, and adds an annotation allowing applications to target it by cluster name
  • copies labels from the cluster yaml to the argocd secret, useful for deploying addons
  • generates a pinniped kubeconfig, allowing initial access without needing the admin kubeconfig
  • registers the cluster as a kasten secondary cluster (if kasten is being used)

Scripts / pipelines are needed to:

  • provision / decommission a cluster
    • adjust cluster resources
  • add / remove a namespace (a rough sketch of this step follows the list)
    • adjust namespace resource quota
    • grant developers access to namespaces
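
As a rough sketch of the “add a namespace” step, following the k-vc-non layout above (the git remote is hypothetical, and the example folder is simply copied and renamed):

#!/bin/bash
# add a namespace to the k-vc-non repo; argocd's appofapps/namespaces application
# picks up the new folder on its next sync (git remote is hypothetical)
set -euo pipefail

NS="$1"
REPO="git@git.example.com:k-vc-non.git"

git clone "$REPO" k-vc-non
cd k-vc-non

# start from the example namespace folder and rename everything inside
cp -r namespaces/example "namespaces/$NS"
sed -i "s/example/$NS/g" "namespaces/$NS"/*.yaml

git add "namespaces/$NS"
git commit -m "add namespace $NS"
git push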

(idea) restarting a pod via webapi

At the end of a pipeline it can be nice to restart a pod.  In production this happens automatically via gitops, but in dev it is handy not to have to wait for git to sync.  There are multiple ways to do this; common approaches are:

  1. just put the cluster kubeconfig in a secret in the pipeline and use that to restart the pod (a little too powerful)
  2. create a service account, acquire its token, and place it in the pipeline (safest, but takes some work for each app)

What about a solution similar to reloader?  Reloader watches for changes in configmaps and secrets which have a certain annotation, and if they change reloader restarts the workloads that reference them.  We could just use reloader and make a change to a configmap in order to trigger the reload.
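
With that approach the pipeline only needs rights to patch one configmap; something like the following, where the configmap and namespace names are made up and the deployment is assumed to carry reloader’s auto-reload annotation (reloader.stakater.com/auto: "true"):

# bump a value in a configmap the deployment references; reloader notices the
# change and restarts the deployment
kubectl patch configmap example-config -n dev --type merge \
  -p "{\"data\":{\"lastTriggered\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"}}"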

However, what about creating a controller which listens for a webapi call asking for a deployment to be restarted?  Then a pipeline could call the appropriate url to get things restarted.  By deploying it via argocd with an applicationset, and using a url convention based on the cluster name, all development clusters could be enabled to use this method in their pipelines; consumers would only need to annotate their deployments/statefulsets/etc …
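
The pipeline side would then be a single call; the restart.<cluster>.<domain> hostname convention and the endpoint path here are hypothetical:

# ask the (hypothetical) restart controller on cluster vc-non to restart a deployment
curl -fsS -X POST \
  -H "Authorization: Bearer $RESTART_TOKEN" \
  "https://restart.vc-non.example.com/api/namespaces/dev/deployments/example/restart"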

daytwo is almost ready for beta testing (argocd-daytwo)

Ability to use FQDN to access workload clusters, rather than ip address

Doing my part to try and help gitops take over the world.

By targeting applications to install on a kubernetes cluster by fqdn rather than by ip address, it becomes possible to delete a cluster, stand up a new one, and watch as all the applications reinstall automatically to the new cluster via gitops using something like argocd.
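
As an illustration, an argocd cluster secret can reference the API server by name instead of by address; the cluster name and domain below are placeholders, and the credentials normally carried in the config field are omitted:

# register a cluster with argocd by fqdn rather than ip
kubectl apply -n argocd -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: cluster-vc-non
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: vc-non
  server: https://vc-non.example.com:6443
  config: |
    {
      "tlsClientConfig": {
        "insecure": false
      }
    }
EOF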

Feature request submitted (vcenter clusterapi provider):
https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/issues/2098

If implemented, this would enable all consumers of vmware vcenter / tanzu to better implement gitops.

It’s tempting to drop everything and be the one to implement the code and submit a pull request, just to get the credit.  It would feel good to know you’ve helped so many people.

kubernetes daytwo controllers

Thinking again about daytwo operations (controllers watching for cluster events):

daytwo-argocd-register-controller

  • watches for cluster.yaml (tanzu, clusterapi, etc…) and registers clusters with argocd automatically once they are in ‘ready’ state
  • syncs ‘addons’ labels from cluster.yaml to argocd cluster secrets to auto install addons, including pinniped-concierge and pinniped-www

daytwo-pinniped-register-controller

  • generates a pinniped kubeconfig if new cluster (see the command sketch after this list)
  • adds to git repo if new cluster, removes from git repo if cluster decommissioned
  • regenerates configmap used by pinniped-www
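
The kubeconfig generation step above is essentially the pinniped CLI’s own generator; run by hand it might look like this (file names are placeholders):

# generate a pinniped-based kubeconfig for a new cluster, starting from its admin kubeconfig
pinniped get kubeconfig --kubeconfig vc-non-admin.yaml > vc-non-pinniped.yaml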

daytwo-external-dns-register-controller

  • watches for the service associated with a cluster to appear and annotates it with an fqdn; goal: add a dns entry for each cluster’s kubeapi (example below)
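
What the controller would effectively do, shown as a manual command; the service name, namespace, and hostname are placeholders, while the annotation itself is the standard external-dns one:

# annotate the cluster's kubeapi service so external-dns publishes a dns record for it
kubectl annotate service vc-non-kubeapi -n clusters \
  external-dns.alpha.kubernetes.io/hostname=vc-non.example.com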

Or perhaps, if desired, the event could trigger automation somewhere else:

daytwo-cluster-event-controller

  • callback to jenkins
  • callback to awx
  • callback to vmware-aria
  • etc …

argocd & pinniped

Kubernetes implements OIDC via arguments on the API Server such as --oidc-issuer-url, etc. This works great; however, a common utility for accessing kubernetes this way, kubelogin, isn’t so great in that a web browser window closed the wrong way can cause the authentication to fail. So what’s a better solution?

Pinniped allows OIDC and login to be configured on a running kubernetes cluster. This is great in that you don’t have to restart the cluster to get OIDC working, and sometimes you simply don’t have access to configure OIDC via the API Server.

Pinniped can be configured and deployed in the usual way that you deploy addons to your clusters. (see argocd – getting started, and argocd – getting started – addons)

Whether using the native OIDC solution or using pinniped, a kubeconfig must be created which can be used with kubectl to access the clusters. Pinniped has a utility to create the kubeconfig, but how to get it to the end user? There have been various methods to accomplish this:

  • the user submits a ticket, through automation you send them the pinniped kubeconfig via email
  • all pinniped kubeconfig files are made available via a git repo all clients have access to
  • (here’s our idea) make the pinniped kubeconfig available via a website, so we don’t need to grant git repo access in order to deliver the kubeconfig

My strategy has been to use argocd, applicationsets, and a single container running on all clusters as a means to distribute the pinniped kubeconfig to all interested parties.

* Note: pinniped kubeconfigs contain no secrets, so they do not need to be treated as secrets (other than in relation to a potential denial of service attack), because the user will still need to log in and access is granted via groups.

Here’s the implementation:

  • argocd applicationsets allow the cluster name to be used as a variable, which means we can define the url we want in each cluster to be: https://pinniped.<cluster>.<domain>
  • the applicationset deploys a container running php which uses a script to determine the cluster name. The php site displays some information to help the user get started with pinniped and a link to the pinniped kubeconfig
  • a configmap containing all of the pinniped kubeconfig files is mounted into the container via a volume; this works because the url generated in the previous step includes the name of the cluster and so points to the correct pinniped kubeconfig
  • because this solution is deployed via argocd, when a new cluster is deployed and its pinniped kubeconfig is created and added to a git repo, a script re-generates the configmap from the git repo (sketched below) and all of the pinniped websites are updated automatically; we just have to add a step to update the pinniped configmap as part of the new cluster deployment process
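
A minimal sketch of that regeneration step, assuming the generated kubeconfigs sit in a kubeconfigs/ folder of a checked-out repo and that the configmap and namespace names are placeholders (the resulting manifest could just as well be committed back to git for argocd to sync):

# rebuild the configmap that the pinniped websites mount, from all kubeconfig
# files in the checked-out repo folder
kubectl create configmap pinniped-kubeconfigs \
  --from-file=kubeconfigs/ \
  --namespace pinniped-www \
  --dry-run=client -o yaml | kubectl apply -f -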

(video: todo)