And then we have argocd, with its ability to restore an entire cluster, or even multiple clusters simultaneously, simply due to its inherently declarative nature.

more kubernetes magic


Helm chart checklist – workflow from dev to prod (always evolving)

Helm chart setup workflow


  1. Initially get things working with a default helm install into dev.
  2. Does it have LDAP or OIDC integration, or some other reason to need to verify an on-prem CA chain? If yes, figure out how to get the CA chain installed.
  3. Set up OIDC if possible; set up LDAP if needed and OIDC is not available.
  4. Is it possible to set values in the helm chart to get the CA chain installed as part of the helm install? If yes, modify the yaml; if no, fork the helm chart and add the steps, then see whether the project owner is OK with a pull request. (Better to integrate with the helm chart than to manage a fork.)
  5. Configure something on the server you want to persist.
  6. Helm uninstall and reinstall: did you lose the setting? If so, figure out the steps needed to get the data to persist.
  7. Try increasing the storage size of PVCs that might need to grow in production. Is this possible without taking down the application? Figure out what this use case requires, since it will inevitably come up in the future. Document it and be ready; it may be wise to implement a pipeline for this purpose.
  8. Cordon and drain the involved node(s), then uncordon them. Was data lost when things came back up? Figure out how to ensure data persists.
  9. Does the server have a method to export its configuration or otherwise back itself up? Configure automated backups.
  10. Is it possible to configure the server as HA? Can it be configured for HA later, or must it be configured that way from initial setup? Can a single instance be migrated to HA? Decide whether HA needs to be set up or a single instance is good enough. If HA is desired, figure out how to set it up and go through this list again.
  11. Are there options to configure metrics for the application? These often exist in helm charts. (Lower priority when initially working to get something up.)
  12. If there is an option to use a log aggregator, set that up, or possibly set up a sidecar for logging. (Lower priority when initially working to get something up.)
  13. The server is now ready to release into test.
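Steps 5 through 8 above can be sketched as a quick persistence drill. This is a hedged sketch, not chart-specific: the release name `my-app`, repo `my-repo`, node name, and PVC name are all placeholders for whatever you are testing:

```shell
# Uninstall and reinstall the release, then check whether your setting survived
helm uninstall my-app -n my-app
helm install my-app my-repo/my-app -n my-app -f values.yaml

# Drain a node the app's pods run on, then bring it back, and check the data again
kubectl cordon k-dev-n01
kubectl drain k-dev-n01 --ignore-daemonsets --delete-emptydir-data
kubectl uncordon k-dev-n01

# Expand a PVC in place (only works if the StorageClass has allowVolumeExpansion: true)
kubectl patch pvc my-app-data -n my-app \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```

If the PVC patch is rejected, check the StorageClass for `allowVolumeExpansion`; that is exactly the kind of thing worth discovering in dev rather than production.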


  1. Configure permissions for accounts accessed via OIDC / LDAP. Note that a program which supports LDAP but not OIDC is not as evolved; check for a plugin/extension if OIDC does not appear to be available. OIDC enables almost any identity provider and SSO, and is always preferred.
  2. Do the minimum requested CPU and memory match what’s actually needed?
  3. Someone needs to perform some manual testing or work on automated testing.
  4. If no one ever tries restoring from a backup, there is a good chance the process does not work; try it out before there is a fire.
  5. No system may be released into production without an automated method of registering its ip in dns (e.g. external-dns) and an automated method of updating its ssl certificates (e.g. cert-manager). Verify these work.
  6. Be sure to test rolling up to the next release of the helm chart as well as rolling back (and that all tests still pass).
  7. If all testing passes, then it is ready for production.
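For step 5, the usual pattern is an Ingress annotated so that external-dns and cert-manager pick it up automatically. A minimal sketch; the hostname, issuer name, service name, and tls secret name are placeholders for your environment:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # external-dns registers this hostname in your dns zone
    external-dns.alpha.kubernetes.io/hostname: my-app.example.com
    # cert-manager issues and renews the tls certificate into secretName below
    cert-manager.io/cluster-issuer: onprem-ca-issuer
spec:
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
  tls:
    - hosts:
        - my-app.example.com
      secretName: my-app-tls
```

Verifying means watching the A record appear and the certificate secret get populated, then browsing to the host over https.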


  1. An update strategy needs to be established and followed just prior to release into production. Schedule: monthly, quarterly, every 6 months, or upon release of a new version. Version: always run the latest, or the version just prior to the latest major release (with all of its updates). Some programs such as WordPress can and will update plugins automatically… is this OK?
  2. Generally, automation is desired to roll something out into production. When an update is ready, automation should be used to update first in dev and perform automated testing, then roll out into prod with the OK of someone (or automatically, if all tests passed and it is decided that is good enough).
  3. Also, a pipeline for rolling back to a previous version is a good idea, in case a deployment to production fails.
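Helm keeps a revision history per release, so the rollback pipeline can be very small. A sketch, with the release name, namespace, repo, and chart version as placeholders:

```shell
# See what revisions exist for the release
helm history my-app -n my-app

# Upgrade to the new chart version (in dev first, then prod)
helm upgrade my-app my-repo/my-app -n my-app --version 1.2.3

# If the deployment fails, roll back; with no revision argument
# helm rolls back to the previous revision
helm rollback my-app -n my-app
```

Wrapping these three commands in a pipeline job gives you the "undo button" before you need it.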

(pull request) Contributing to open source: a helm chart for taiga, and the ability to import an on-prem certificate authority certificate chain

OIDC is always preferred if possible.  At this point in history not all projects have OIDC support, though some can be extended via an extension or plugin to accomplish the goal.  I’ve got enough experience to help projects get over this hurdle and get OIDC working.  If I could be paid just to help out open source projects, I might go for it.

Here’s a pull request for a taiga helm chart I’ve been using.  I’ve been using taiga for years via docker and am happy to be able to help out in this way now that I’m using kubernetes and helm charts.  In this case I borrowed a technique from a nextcloud helm chart, and it works perfectly for this taiga helm chart:
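The general shape of the technique (this is a sketch of the idea, not the actual PR, and the value names here are illustrative rather than the taiga chart's real ones): mount a configmap holding the CA chain into the container's trust path via the chart's extra-volume values.

```yaml
# values.yaml fragment - names are illustrative
extraVolumes:
  - name: onprem-ca
    configMap:
      name: onprem-ca-chain   # created beforehand from your CA pem
extraVolumeMounts:
  - name: onprem-ca
    mountPath: /etc/ssl/certs/onprem-ca.pem
    subPath: ca-chain.pem
    readOnly: true
```

The configmap would be created up front with something like `kubectl create configmap onprem-ca-chain --from-file=ca-chain.pem`, after which the application trusts on-prem LDAP/OIDC endpoints without any image changes.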

Like rebuilding a Shinto shrine

Traditionally Shinto shrines are rebuilt exactly the same next to the old shrine every so many years.  The old shrine is removed and when the time comes it will be rebuilt again.

Something similar can apply to home environments.  Recently I nuked everything and rebuilt from the ground up.  This is something I’ve always done every 6 months or a year, for security reasons and to ensure I am always getting the best performance from my infrastructure.

Such reinstalling is a natural fit for kubernetes.  There are several methods for spinning up a cluster, and after that, because kubernetes services are just yaml files, it is easy to spin up the services you had running before and watch them register themselves in the new dns and generate certificates with the new active directory certificate authority.  Amazing.  Kubernetes is truly a work of art.

What is Kubernetes really?

As I take the deep dive into kubernetes, what I’m finding is that, though it is definitely a container management system, it can also be seen as a controller-based yaml processing engine.  Let me explain.

Kubernetes understands what a deployment is and what a service is; these are defined as yaml and loaded.  Each is backed by a controller which understands that type of object as defined in yaml.

What is interesting about this is that we can implement our own controllers.  For example, I could implement a controller that understands how to manage a tic-tac-toe game.  That controller could also implement an AI that knows how to play the game.  In the same way you can edit a deployment, you could edit the game, and the kubernetes infrastructure would respond to the change.  Or, a move could be another type recognized by the game controller, so you could create a move associated with a game in the same way you can create a service associated with a deployment.

You can imagine doing a ‘k get games’ and seeing the games being played listed out.  As well as ‘k describe game a123’ to get the details and status of the game.
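A sketch of what registering such a game type could look like as a CustomResourceDefinition; everything here (group, kind, fields) is hypothetical:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: games.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: games
    singular: game
    kind: Game
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                players:
                  type: array
                  items:
                    type: string
            status:
              type: object
              properties:
                board:
                  type: string
                nextTurn:
                  type: string
```

Once applied, `kubectl get games` works immediately; the custom controller is what would watch these objects and play the moves.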

Seems I’m not the only one who has started thinking along these lines.  A quick Google search reveals Agones.

This is fascinating and gives me a lot of ideas on how I might reimplement my list processing server & generic game server, within the kubernetes framework.

New helm chart: wireguard-centos-8-stream

My first helm chart, a fun milestone.  I used it to install the new docker container I uploaded this morning.

Nice feeling to give back to the open source community.

Now to automate:
* watch for wireguard updates & release an updated docker image
* watch for a centos-8-stream update & release an updated docker image
* watch for a helm chart update & update what is necessary for those changes to be seen

But first, time to investigate and implement longhorn.

New container: docker-wireguard-centos-8-stream

A wireguard container built for centos-8-stream which takes advantage of the scripts from the linuxserver docker-wireguard project.


LinuxServer docker-wireguard project:

To use, simply replace the docker-wireguard image with:

Note: Initial startup may take quite a while, 4+ minutes, if the wireguard module is being recompiled. Be sure to use a volume for the modules folder to avoid having to recompile.
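A hedged example of running the container with persistence for the modules; the image name is a placeholder for the published image, and the mount paths are assumptions based on typical wireguard containers rather than this container's documented interface:

```shell
# <your-registry>/docker-wireguard-centos-8-stream is a placeholder image name
docker run -d --name wireguard \
  --cap-add=NET_ADMIN --cap-add=SYS_MODULE \
  -v /lib/modules:/lib/modules \
  -v wireguard-config:/config \
  -p 51820:51820/udp \
  <your-registry>/docker-wireguard-centos-8-stream
```

The named volume keeps configuration across container recreation, and mounting the host modules folder is what lets the compiled wireguard module survive instead of being rebuilt on every start.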

Kubernetes, a hacker’s paradise (the good kind of hacker)

The world of Kubernetes is exactly why I got into computers back in the day: always something new to learn, the fun of problem solving, and being rewarded with new capabilities.


Just initially getting up to speed with a udemy class, setting up my first clusters was so fun and rewarding; then learning about metallb, nginx-ingress, metrics, cert-manager, adding an nfs provisioner, and later a cifs share; setting up ip ranges for the clusters and reinstalling everything, then automating the process.

Along the way you are moving over all your existing services and discover helm exists, and that the services you want are already out there and easily installed… only to discover: hey, this project may be somewhat bleeding edge and I can contribute code, already giving back to the community… how cool!

Each time a vm is shut down and its resources are recovered: victory!


Such a bummer when you have a service that doesn’t want to run in kube.  So far only one, wireguard not wanting to run on top of a centos 8 stream based cluster… but I can drive a solution for us.

It is a bit all consuming, I’d like to get back to developing projects, and moving them into my new clusters.  It’s almost time though, almost time, feeling so empowered with all the possibilities, exciting!!!

Update: I got wireguard working, and am working to get it shared out with the open source community. Is there anything kubernetes can’t do?

A separate IP range for each local kubernetes cluster


With kubernetes comes load balancing, and with load balancing comes a need for a range of ip addresses dedicated to load balancing. Besides load balancing, separate ip ranges can be nice just for keeping things organized with multiple clusters. This article will let you create this type of multi-cluster setup, each cluster with its own ip range:


Create private virtual switch

Using your preferred virtualization technology, create a private virtual switch. Here we create one using powershell on a Hyper-V server:

New-VMSwitch -SwitchName "k-dev" -SwitchType Private

Create vms to use with new cluster

k-dev-m     haproxy server            192.168.100.10
k-dev-m01   control plane master 01   192.168.100.11
k-dev-m02   control plane master 02   192.168.100.12
k-dev-n01   worker node 01            192.168.100.21
k-dev-n02   worker node 02            192.168.100.22

Configure gateway system

  • Add a second nic attached to the lan network, using a static ip on your lan.
  • Enable ip forwarding (note the redirection must run with elevated privileges, so use tee rather than `sudo echo … >>`):
    • echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
    • sudo sysctl -p
  • Disable default gateways by modifying /etc/sysconfig/network-scripts/ifcfg-eth# and commenting out the GATEWAY values (CentOS; if not CentOS, use the distribution-specific method)
  • Add a default route which uses the local lan default gateway (use the distribution-specific method to make this permanent, and substitute your own lan gateway address)
    • route add default gw <lan-gateway-ip> metric 25
  • Add a route which uses the local lan for traffic specific to the local lan; this is required to avoid an asymmetric routing issue. Packets headed to the private lan from the local lan may not use the same path without it (if you are seeing a 60 second timeout on all tcp connections, this is why):
    • route add -net <lan-subnet> netmask <lan-netmask> gw <lan-gateway-ip> metric 25
  • Note: the route commands above use a metric of 25, which needs to be lower than all other route metrics so these routes are used first; if your existing metrics are lower, adjust accordingly.
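On CentOS, the distribution-specific way to make routes like these permanent is a route- file for the lan-facing nic. A sketch, assuming eth1 is the lan-facing nic and with the addresses as placeholders for your environment:

```shell
# /etc/sysconfig/network-scripts/route-eth1
# default route via the lan gateway
default via <lan-gateway-ip> dev eth1 metric 25
# lan-specific route, to avoid the asymmetric routing issue described above
<lan-subnet>/<prefix> via <lan-gateway-ip> dev eth1 metric 25
```

After editing, restarting the network service (or the interface) applies the routes, and they survive reboots.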

Configure VMs on private network

  • ip addr: Use as specified above (or similar)
  • netmask: The netmask matching the range used above
  • dns: Use the ip of your normal lan dns. In this setup all vms on the private lan can communicate with all systems on your local lan and vice versa via the gateway.
  • gateway: The ip, on the private lan, of the VM with two nics that you configured as the gateway above.

Configure your lan router with a static route

This part is tricky in that I cannot tell you how to configure your router. In most cases even the most basic router that you might be using to connect your local lan to the Internet will have an option to configure a static route.

You’ll most likely have two options:

  • Configure a single static route to the whole subnet, such as:
    • 192.168.100.0 netmask 255.255.255.0 via <gateway-lan-ip>
  • Configure a single static route for each ip address, such as:
    • 192.168.100.10 netmask 255.255.255.255 via <gateway-lan-ip>
    • 192.168.100.11 netmask 255.255.255.255 via <gateway-lan-ip>
    • 192.168.100.12 netmask 255.255.255.255 via <gateway-lan-ip>
    • 192.168.100.21 netmask 255.255.255.255 via <gateway-lan-ip>
    • 192.168.100.22 netmask 255.255.255.255 via <gateway-lan-ip>

Configure DNS

Add an A record for each host in DNS with its static ip used on the private ip range.

Adjust subnet used by router

Depending on your router (a Netgear R6220 does not require this, a RAX20 does, etc.), you may need to adjust the subnet of your router. The ip of a router is often 192.168.1.1 with subnet mask 255.255.255.0; since you are now using the 192.168.100.x range for your private lan, it may be necessary to adjust the subnet mask to 255.255.0.0 (or similar) to accommodate the 100 range.

You might need this step if your private network can reach the lan and the lan can reach your private lan, but your private lan cannot reach through the router out to the internet.

You are done!

Now you should be able to reach all systems on the private lan from systems on your local lan, and the other way around, as well as having Internet access. You should be able to join your private lan systems to the domain running on your local lan, if you have one, and log in without a password using kerberos, if you have that set up. Enjoy!


The article you just read is a bit rare on the Internet. I suspect this is because most folks who would like to add a private lan in this way, fully connected to an existing ip range, lack the experience with route to do so, and so you may be pushing your limits. If things are not working for you, try not to be too hard on yourself and:

  • disable the firewall & selinux on the gateway system until you get things working
  • use traceroute or tracepath from both the client system on your local lan and from the vm on the private lan, and ensure they are walking the same path
  • if you used dhcp to load your vms, ensure there is no remaining dhcp lease; if there is, it will keep altering your static ip record in dns until it expires
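The checks above as commands; the vm and client addresses are placeholders for hosts in your setup:

```shell
# temporarily disable firewall and selinux on the gateway (re-enable once working)
sudo systemctl stop firewalld
sudo setenforce 0

# compare paths; run the first from a machine on the local lan,
# the second from a vm on the private lan - they should walk the same hops
tracepath 192.168.100.11
tracepath <lan-client-ip>

# inspect current routes and metrics on the gateway
ip route show
```

If the two tracepath runs take different paths, revisit the lan-specific route and its metric on the gateway.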

Good Luck!