Using kubeadm to set up a cluster using centos 9 stream.

Is centos 9 stream a good choice? (sure)

I may end up switching to Sidero to set up and manage my onprem clusters, but for now I am continuing with centos, and moving from 8 to 9 so that I can use the wireguard module that comes with 9.  After several failures I have tracked down the few steps that differ from a centos 8 stream install.  Hopefully this will save someone a lot of days (and days and days, weeks?) of troubleshooting.

The key differences are:

1. In centos 8 stream you only needed to edit the packaged containerd config to stop it from disabling the CRI plugin.  In centos 9 stream you need to generate the whole default configuration and change it to use the systemd cgroup driver.  This script is currently working for me:

# make a copy of the default containerd configuration
containerd config default | sudo tee /etc/containerd/config.toml
# set containerd to use the systemd cgroup driver
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
# adjust pause image to what's actually installed
PAUSE_IMAGE=$(kubeadm config images list | grep pause)
sudo -E sed -i "s,sandbox_image = .*,sandbox_image = \"$PAUSE_IMAGE\",g" /etc/containerd/config.toml

# restart the containerd service
sudo systemctl enable containerd
sudo systemctl restart containerd
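
A couple of quick sanity checks I find useful here (optional, not part of the original steps): confirm the cgroup change took and that containerd actually came back up.

# should show SystemdCgroup = true
sudo grep SystemdCgroup /etc/containerd/config.toml
# should report active
sudo systemctl is-active containerd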

2. There is something odd happening when performing the ‘kubeadm init’ which I was able to get around by doing the following:

# skip a couple of addon phases when performing kubeadm init
sudo kubeadm init --control-plane-endpoint="<put_endpoint_here>:6443" --upload-certs --pod-network-cidr=<put_cni_cidr_here> \
--skip-phases=addon/kube-proxy \
--skip-phases=addon/coredns

# wait about 40 seconds, then run the following to execute the previously skipped phases
sudo kubeadm init phase addon all \
--control-plane-endpoint="<put_endpoint_here>:6443" \
--pod-network-cidr=<put_cni_cidr_here>
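
Afterwards, a quick check that the previously skipped addons actually deployed (assuming your kubeconfig is already set up for the new cluster):

# kube-proxy and coredns pods should show up here
kubectl get pods -n kube-system -l k8s-app=kube-proxy
kubectl get pods -n kube-system -l k8s-app=kube-dns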

If I get a chance I’ll put together a video for this since there doesn’t seem to be one out there in the wild yet.

tether laptop via android phone in such a way as to give the laptop access to the phone’s vpn

Android tether how to:

  1. Set up wireguard on your phone to your network; this allows you to access your hosted webapps, and your laptop will also get access
  2. On the phone install an sshd such as SimpleSSHD, which by default listens on port 2222
  3. On the laptop install adb, plug the phone in via usb, then run "adb forward tcp:2222 tcp:2222"; this forwards localhost:2222 on your laptop to the sshd running on the attached phone
  4. On the laptop connect to the ssh server using putty @ localhost on port 2222, and also set up a port forwarding tunnel: Source port = 9999, select ‘Dynamic’, nothing for destination
  5. This sets up a socks proxy; now on the laptop run chrome pointing it at the socks proxy, c:\…\chrome.exe --proxy-server="socks5://localhost:9999" (easiest to just edit the shortcut).  ** before launching chrome this way you have to close all chrome instances, otherwise it will appear not to work.  (An OpenSSH equivalent of these last few steps is sketched after this list.)
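
If you have OpenSSH on the laptop instead of putty, a rough equivalent of steps 3-5 (the username is whatever SimpleSSHD expects; treat this as a sketch):

# forward the phone's sshd to the laptop over usb
adb forward tcp:2222 tcp:2222
# open a dynamic (socks5) tunnel on local port 9999 through the phone
ssh -p 2222 -D 9999 -N <user>@localhost

Then launch chrome with --proxy-server="socks5://localhost:9999" as above.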

There you go, now on your laptop you’ll be able to reach your home webapps, since traffic proxies through the phone which is running the vpn.  Note: you’ll also have to enable developer options and USB debugging on the phone in order to use adb.

This is perhaps not as secure as just using your phone’s built-in tether options (mine seems to put tethered connections behind a NAT, which is a good idea), but in my case I wanted my laptop to use the vpn the phone was using.  Good luck!

Additional, openvpn:

Instead of configuring chrome to use the socks5 proxy, you can point openvpn at the socks5 proxy; then all of your networking will work, not just chrome.  Just add the following to your openvpn client config (you’ll also need to set up an openvpn server of course; the above setup does not require it to be open to the internet, we access it via the ssh tunnel):

proto tcp
socks-proxy localhost 9999
connect-retry-max 1

remote <openvpn_ip> <openvpn_port> tcp

Actually, maybe the socks5 proxy isn’t needed at all if using openvpn, it just needs its port forwarded, no?
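
A rough sketch of that alternative, assuming the openvpn server listens on tcp and reusing the ssh session from the tether steps (in putty this would be a ‘Local’ tunnel instead of a ‘Dynamic’ one):

# local-forward the openvpn server's tcp port through the phone
ssh -p 2222 -N -L <openvpn_port>:<openvpn_ip>:<openvpn_port> <user>@localhost

# and the openvpn client config then only needs
proto tcp
remote localhost <openvpn_port>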

k8s-at-home, another project deprecated while at its prime

The open source community is capable of incredible things, with people working their main jobs and then building things outside of work … but it is all too common for people to also want to have a life outside of work.

We’ve lost another project just due to a lack of maintainers / availability.  It’s sad when it happens.  Even I didn’t have time to help out, and now it’s gone.

If only there were a way to pay people to maintain open source projects.

Think I’ll have a drink tonight in celebration of the k8s-at-home folks, thanks for everything you did.

Making learning fun by creating a “gray area” niche goal

One time, as a way to motivate a coworker who was learning scripting, I shared with him an algorithm to generate all possible string combinations given a string of possible characters.  This could, of course, potentially be used to try all possible passwords for nefarious purposes.

Within about 10 minutes I had a manager in the office, looking at the whiteboard, telling me they weren’t stupid, that they knew what all that was on the board, and that I needed to not be teaching the employees how to hack computers.  I noticed a coworker sneaking out of the room trying to hide their laughter.  I think we know who called the manager.

It still makes me laugh to this day.  I suppose they weren’t completely wrong, but in my mind I figure if you tell someone to make every possible key for a type of car, and try them all, at some point you’ll get into that car.  Is that teaching a master class on how to break into cars?  I guess, but it sure isn’t efficient; a master class it is not.  However, for someone new to scripting, it’s a fun algorithm to write and learn with (a rough sketch follows).
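
By way of illustration, a brute-force enumeration sketch in bash (nothing efficient, and the character set and length here are just examples):

#!/usr/bin/env bash
# print every string up to maxlen built from the characters in chars
chars="abc123"
maxlen=3
gen() {
  local prefix=$1 depth=$2
  [[ -n "$prefix" ]] && echo "$prefix"   # skip the empty starting string
  (( depth == 0 )) && return 0
  local i
  for (( i=0; i<${#chars}; i++ )); do
    gen "$prefix${chars:i:1}" $(( depth - 1 ))
  done
}
gen "" "$maxlen"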

Along those same “gray area” lines I share with you a goal to help motivate one to learn kubernetes.  I’ll give you the exact steps necessary to make it happen.  The goal is this: “Let’s put together a micro-services architected solution to help you keep track of movies or tv shows you’d like to get around to watching some day”.  I used to use a physical notepad for this back in the day, then a notepad app on my phone, but modern days allow for modern solutions.  No longer do you have to type the full name of a movie or tv show; you can now type a little and search for it, then when it’s found click on it to add it to your queue.

Here’s how to do it and the applications you’ll need (this is for an onprem setup; you are on your own if you are deploying apps the world is able to see from the internet, try not to do that):

  1. If you don’t know kubernetes, sign up for the udemy class “Certified Kubernetes Administrator (CKA) with Practice Tests”, then go ahead and take the exam and get the CKA certification.
  2. Decide which extra computer you have lying around is to be your NAS, where your network storage will live, and install truenas core on it.  Then configure iscsi to work with kubernetes provisioned storage.  You’ll need to deploy democratic-csi into your cluster and install iscsi-related utilities onto your worker nodes in the next step (see the notes after this list).
  3. Spin up a cluster using kubeadm then deploy Plex Media Server into your cluster.  Plex is used to give a netflix-like experience around viewing your home media collection.  I recommend the kube-plex helm chart, deployed gitops-style using argocd.  kube-plex will deploy a pms instance that spins up extra streaming processes on the fly, but I recommend disabling that feature in the values.yaml to avoid any potential file locking issues… at least in the beginning.  Also, instead of using a network share for your configuration, set up taints and tolerations to direct the plex application to only spin up on one node, and use local storage there (see the sketch after this list).  (Plex tends to have database corruption when using network storage; go for it later if you want, but set yourself up for success initially.)
  4. Next deploy sonarr and radarr using the helm charts out at k8s-at-home; these are similar apps that work with tv and movies respectively.  k8s-at-home is a cool group in that they have a common library among all their helm charts, so for example, if you wanted to set up a vpn on any particular application as a sidecar you could, or use a single vpn pod that multiple pods route through, or add an ingress, or mount an additional volume, etc.
  5. Now you are done; you can browse to your sonarr and radarr installations and search for movies and tv shows you’d like to watch someday.  Be sure to set up cert-manager and external-dns as well to register them in your local onprem dns and configure a valid certificate.
  6. You might also want to set up a vpn such as wireguard in your cluster, as well as forwarding the needed vpn-related port through your router, so that you can browse to these apps on your phone while you are out and about; that way, if you hear of a movie you want to queue up for later viewing you can do so in the moment.
  7. Interestingly, you can also click on Connect in both sonarr and radarr to configure your plex server for some reason.
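
For step 2, the worker node prerequisites on centos look roughly like the following (a common requirement for iscsi-backed CSI drivers; adjust package names for your distro):

# on each worker node
sudo dnf install -y iscsi-initiator-utils
sudo systemctl enable --now iscsid

For the node pinning in step 3, a rough sketch of the taint/toleration approach; the node name and label key are just examples, and where these settings land in the chart’s values.yaml depends on the chart:

# dedicate one worker node to plex
kubectl label node <plex_node> app=plex
kubectl taint node <plex_node> app=plex:NoSchedule

# then, in the pod spec the chart renders, something along the lines of:
nodeSelector:
  app: plex
tolerations:
- key: app
  operator: Equal
  value: plex
  effect: NoSchedule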

mapping dns via argocd applicationset and external-dns

When using ArgoCD and an ApplicationSet to deploy external-dns to all clusters, as part of a grouping of addons common to them all, it can be useful to configure the DNS filter using variables:

        helm:
          releaseName: "external-dns"
          parameters:
          - name: external-dns.domainFilters
            value: "{ {{name}}.k.home.net }"
          - name: external-dns.txtOwnerId
            value: '{{name}}'
          - name: external-dns.rfc2136.zone
            value: '{{name}}.k.home.net'
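
For context, that helm: fragment sits inside the ApplicationSet template, with the cluster generator supplying {{name}} and {{server}}; a trimmed sketch, with the repo and chart details as placeholders (here external-dns is assumed to be a dependency of a wrapper chart, which is why the parameters are prefixed with external-dns.):

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: addon-external-dns
spec:
  generators:
  - clusters: {}                 # one Application per cluster registered in argocd
  template:
    metadata:
      name: 'external-dns-{{name}}'
    spec:
      project: default
      source:
        repoURL: <addons_repo_url>
        path: <addons_chart_path> # chart with external-dns as a dependency
        targetRevision: HEAD
        helm:
          releaseName: "external-dns"
          # ...parameters as above...
      destination:
        server: '{{server}}'
        namespace: external-dns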

This will place the cluster name as part of the dns name used by external-dns, resulting in the following type of FQDNs used by clusters:

app.dev.k.home.net
app.test.k.home.net
app.prod.k.home.net

Though, for my core cluster with components used by all clusters, I like to leave out the cluster name so all core components are at the k.<domain> level:

argocd.k.home.net
harbor.k.home.net
keycloak.k.home.net

git-based homedir folder structure (and git repos) using lessons learned

After reinstalling everything, including my main linux workbench system, it became the right time to finally get my home directory into git.  Taking all lessons learned up till this point, it seemed a good idea to clean up my git repo strategy as well.  The revised strategy:

[Git repos]

Personal:
- workbench-<user>

Team (i for infrastructure):
- i-ansible
- i-jenkins (needed ?)
- i-kubernetes (needed?)
- i-terraform
- i-tanzu

Project related: (source code)
- p-lido (use tagging dev/test/prod)
    doc
    src

Jenkins project pipelines:
- j-lifecycle-cluster-decommission
- j-lifecycle-cluster-deploy
- j-lifecycle-cluster-update
- j-lido-dev
- j-lido-test
- j-lido-prod

Cluster app deployments:
- k-core
- k-dev
- k-exp
- k-prod

[Folder structure]

i-ansible (git repo)
  doc
  bin
  plays ( ~/a )

i-jenkins (git repo) (needed ?)
  doc
  bin
  pipelines ( ~/j )

i-kubernetes (git repo) (needed ?)
  doc
  bin
  manage ( ~/k )
  templates

i-terraform (git repo)
  doc
  bin
  plans (~/p)
    k-dev

i-tanzu (git repo)
  doc
  bin
  application.yaml (-> appofapps)
  apps (~/t)
    appofapps/ (inc all clusters)
    k-dev/cluster.yaml

src
  <gitrepo>/<user> (~/mysrc) (these are each git repos)
  <gitrepo>/<team> (~/s) (these are each git repos)
    j-lifecycle-cluster-decommission
    j-lifecycle-cluster-deploy
    - deploy cluster
    - create git repo
    - create adgroups
    - register with argocd global
    j-lifecycle-cluster-update
    j-lido-dev
    j-lido-test
    j-lido-prod
    k-dev
      application.yaml (-> appofapps)
      apps
        appofapps/ (inc all apps)
    k-exp
      application.yaml (-> appofapps)
      apps
        appofapps/ (inc all apps)
    k-prod
      application.yaml (-> appofapps)
      apps
        appofapps/ (inc all apps)

workbench-<user> (git repo)
  doc
  bin