more kubernetes magic
OIDC is always preferred if possible. Not every project has OIDC support yet, though some can be extended via a plugin to get there. I’ve got enough experience to help projects get over this hurdle and get OIDC working. If I could be paid just to help out open source projects I might go for it.
Here’s a pull request for a taiga helm chart I’ve been using. I’ve been using taiga for years via docker and am happy to be able to help out in this way now that I’m using kubernetes and helm charts. In this case I borrowed a technique from a nextcloud helm chart and it works perfectly for this taiga helm chart: https://github.com/nemonik/taiga-helm/pull/6
Traditionally Shinto shrines are rebuilt exactly the same next to the old shrine every so many years. The old shrine is removed and when the time comes it will be rebuilt again.
Something similar can apply to home environments. Recently I nuked everything and rebuilt from the ground up, something I’ve always done every six months or a year, both for security reasons and to ensure I am always getting the fastest performance from my infrastructure.
Such reinstalling is a natural fit for kubernetes. There are several methods for spinning up a cluster, and since everything in kubernetes is defined in yaml files it is easy to spin up the services you had running before, watch them self-register in the new dns, and self-generate certificates with the new active directory certificate authority. Amazing. Kubernetes is truly a work of art.
As I take the deep dive into kubernetes, what I’m finding is that, though definitely a container management system, it can also be seen as a controller-driven yaml processing engine. Let me explain.
Kubernetes understands what a deployment is and what a service is; these are defined as yaml and loaded. Behind each is a controller which understands that type of object defined in yaml.
What is interesting about this is that we can implement our own controllers. For example, I could implement a controller that understands how to manage a tic-tac-toe game. That controller could also implement an ai that knows how to play the game. In the same way you can edit a deployment you could edit the game and the kubernetes infrastructure could respond to the change. Or, a move could be another type recognized by the game controller, so you could create a move associated with a game in the same way you can create a service associated with a deployment.
You can imagine doing a ‘k get games’ and seeing the games in progress listed out, or a ‘k describe game a123’ to get the details and status of a particular game.
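As a sketch of what that might look like, here is a hypothetical CustomResourceDefinition for the game idea. The group, kind, and fields are all invented for illustration, not from any real project:

```yaml
# Hypothetical CRD -- group/kind names and fields are made up for this example.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: games.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Game
    plural: games
    singular: game
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                players:
                  type: array
                  items:
                    type: string
            status:
              type: object
              properties:
                board:
                  type: string
                nextTurn:
                  type: string
```

Once a CRD like this is applied, ‘k get games’ works exactly as imagined; a custom controller watching these objects would fill in the status, and could even make the ai’s moves.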
Seems I’m not the only one who has started thinking down this line. A quick Google search reveals Agones, a project for running and scaling game servers on kubernetes.
This is fascinating and gives me a lot of ideas on how I might reimplement my list processing server & generic game server, within the kubernetes framework.
My first helm chart, a fun milestone. I used it to install the new docker container I uploaded to quay.io this morning.
Nice feeling to give back to the open source community.
Now to automate:
* watch for wireguard updates & release an updated docker image
* watch for a centos-8-stream update & release an updated docker image
* watch for a helm chart update & update what is necessary for those changes to be seen
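As a sketch of how the first two items could be automated, a scheduled GitHub Actions workflow could rebuild and push the image on a cron. Everything here (the schedule, secret names, and build context) is an assumption for illustration:

```yaml
# Hypothetical workflow sketch -- schedule and secret names are placeholders.
name: rebuild-wireguard-image
on:
  schedule:
    - cron: "0 6 * * *"   # check daily; upstream release detection could gate the build
  workflow_dispatch: {}
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to quay.io
        run: echo "${{ secrets.QUAY_PASSWORD }}" | docker login quay.io -u "${{ secrets.QUAY_USER }}" --password-stdin
      - name: Build image
        run: docker build -t quay.io/lknight/docker-wireguard-centos-8-stream:latest .
      - name: Push image
        run: docker push quay.io/lknight/docker-wireguard-centos-8-stream:latest
```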
But first, time to investigate and implement longhorn.
A wireguard container built for centos-8-stream which takes advantage of the scripts from the linuxserver docker-wireguard project.
LinuxServer docker-wireguard project: https://github.com/linuxserver/docker-wireguard
To use it, simply replace the docker-wireguard image with: quay.io/lknight/docker-wireguard-centos-8-stream:latest
Note: Initial startup may take quite a while (4+ minutes) if the wireguard module is being recompiled. Be sure to use a volume for the modules folder to avoid having to recompile.
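For example, with docker compose a host path can be mounted over the modules folder so the compiled module survives restarts. The host paths here are assumptions; check the image and the linuxserver docs for the exact container-side locations:

```yaml
# Sketch of a compose service; caps and paths mirror the usual
# linuxserver/docker-wireguard setup -- verify against the image docs.
services:
  wireguard:
    image: quay.io/lknight/docker-wireguard-centos-8-stream:latest
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    volumes:
      - /opt/wireguard/config:/config
      - /opt/wireguard/modules:/lib/modules   # persist the compiled module
    restart: unless-stopped
```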
The world of Kubernetes is exactly why I got into computers back in the day: always something new to learn, the fun of problem solving, and being rewarded with new capabilities.
I initially got up to speed with an udemy class, and setting up my first clusters was so fun and rewarding. Then came learning about metallb, nginx-ingress, metrics, cert-manager, adding an nfs provisioner, and later a cifs share; setting up ip ranges for the clusters and reinstalling everything, then automating the process.
Along the way you move over all your existing services and discover helm exists and the services you want are already out there and easily installed… only to discover hey, this project may be somewhat bleeding edge and I can contribute code, already giving back to the community… how cool!
Each time a vm is shutdown and resources are recovered, victory!
Such a bummer when you have a service that doesn’t want to run in kube. So far only one, wireguard not wanting to run on top of a centos 8 stream based cluster… but I can drive a solution for us.
It is a bit all-consuming; I’d like to get back to developing projects and moving them into my new clusters. It’s almost time though, almost time. Feeling so empowered with all the possibilities, exciting!!!
Update: I got wireguard working. Working to get that shared out with the open source community. Is there anything kubernetes can’t do?
With kubernetes comes load balancing, and with load balancing comes a need for a range of ip addresses dedicated to load balancing. Beyond load balancing, separate ip ranges are also nice just for keeping multiple clusters organized. This article will let you create this type of multi-cluster setup, each cluster with its own ip range:
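With metallb, for instance, each cluster can be handed its own slice of the range. The addresses below are placeholders, and this uses the newer IPAddressPool resources (older metallb releases used a configmap instead):

```yaml
# Hypothetical per-cluster pool -- pick a distinct slice for each cluster.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: dev-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.100.200-192.168.100.220
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: dev-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - dev-pool
```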
Using your preferred virtualization technology, create a private virtual switch; here we create one using powershell on a Hyper-V server:
New-VMSwitch -SwitchName "k-dev" -SwitchType Private
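Once the switch exists, each VM’s network adapter can be attached to it. The VM names here are placeholders for whatever yours are called:

```powershell
# Attach existing VMs to the private switch (names are placeholders).
Connect-VMNetworkAdapter -VMName "controlplane-master-01" -SwitchName "k-dev"
Connect-VMNetworkAdapter -VMName "worker-node-01" -SwitchName "k-dev"
```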
Create the following vms attached to the private switch:

* controlplane master 01
* controlplane master 02
* worker node 01
* worker node 02

Configure each vm’s network settings as follows:

* IP address: use as specified above (or similar)
* Subnet mask: if using as specified above, then 255.255.255.0
* DNS: use the ip of your normal lan dns. In this setup all vms on the private lan can communicate with all systems on your local lan and vice versa via the gateway.
* Default gateway: the ip of the VM with two nics on the private lan; if using the ips above then this will be 192.168.100.10
This part is tricky in that I cannot tell you how to configure your router. In most cases even the most basic router that you might be using to connect your local lan to the Internet will have an option to configure a static route.
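Whatever the router’s UI looks like, the static route entry usually boils down to three values. The gateway here is an assumption: it should be the lan-side ip of the dual-nic VM on your own network, not literally the address shown:

```
Destination: 192.168.100.0
Subnet mask: 255.255.255.0
Gateway:     the lan-side ip of the dual-nic VM (e.g. 192.168.0.50)
```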
You’ll most likely have two options:
Add an A record for each host in DNS with its static ip used on the private ip range.
Depending on your router (the Netgear R6220 does not require this, the RAX20 does, etc…), you may need to adjust the subnet of your router. The ip of a router is often 192.168.0.1 with subnet mask 255.255.255.0; since you are now using 192.168.100.0/24 for your private lan, it may be necessary to adjust the subnet mask to 255.255.0.0 (or similar) to accommodate the 100 range.
You might need this step if your private network can reach the lan, the lan can reach your private lan, but your private lan cannot reach through the router out to the internet.
Now you should be able to reach all systems on the private lan from systems on your local lan and vice versa, with Internet access as well. You should be able to join your private lan systems to the domain you are running on your local lan, if you have one, and log in without a password using kerberos, if you have that setup. Enjoy!
The article you just read is a bit rare on the Internet. I suspect this is because most folks who would like to add a private lan in this way, fully connected to an existing ip range, lack the experience with routing to do so, and so you may be pushing your limits. If things are not working for you, try not to be too hard on yourself and: