k8s-at-home, another project deprecated while at its prime

The open source community is capable of incredible things: people working their main jobs, then building things outside of work … but it is all too common for those same people to also want a life outside of work.

We’ve lost another project purely due to a lack of maintainers and availability.  It’s sad when it happens.  Even I didn’t have time to help out, and now it’s gone.

If only there were a way to pay people to maintain open source projects.

Think I’ll have a drink tonight in celebration of the k8s-at-home folks.  Thanks for everything you did.

Rearchitecting lido for kubernetes

Deployments (see the sketch after this list):

brokermanager
brokermanager-db
broker-tdameritrade
broker-tdameritrade-db
broker-tdameritrade-mq
broker-tdameritrade-www (oidc login)
broker-kraken
broker-kraken-db
broker-kraken-mq
broker-kraken-www (oidc login)
tradermanager
tradermanager-db
tradermanager-mq
tradermanager-www (gui)
trader-<algo>
trader-<algo>-mq
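
To make this concrete, here is a minimal sketch of one of these as a kubernetes Deployment manifest. Everything in it (image name, registry, env wiring) is hypothetical; it just shows one service, brokermanager, finding its companion db deployment via its service name.

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: brokermanager
    labels:
      app: brokermanager
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: brokermanager
    template:
      metadata:
        labels:
          app: brokermanager
      spec:
        containers:
          - name: brokermanager
            image: registry.example.com/lido/brokermanager:dev  # hypothetical image
            env:
              - name: DB_HOST
                value: brokermanager-db  # service name of the companion db deployment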

git-based homedir folder structure (and git repos) using lessons learned

After reinstalling everything, including my main linux workbench system, it became the right time to finally get my home directory into git.  Taking all the lessons learned up to this point, it seemed a good idea to clean up my git repo strategy as well.  The revised strategy:

[Git repos]

Personal:
- workbench-<user>

Team (i for infrastructure):
- i-ansible
- i-jenkins (needed?)
- i-kubernetes (needed?)
- i-terraform
- i-tanzu

Project related (source code):
- p-lido (use tags for dev/test/prod)
    doc
    src

Jenkins project pipelines:
- j-lifecycle-cluster-decommission
- j-lifecycle-cluster-deploy
- j-lifecycle-cluster-update
- j-lido-dev
- j-lido-test
- j-lido-prod

Cluster app deployments:
- k-core
- k-dev
- k-exp
- k-prod

[Folder structure]

i-ansible (git repo)
  doc
  bin
  plays (~/a)

i-jenkins (git repo) (needed?)
  doc
  bin
  pipelines (~/j)

i-kubernetes (git repo) (needed?)
  doc
  bin
  manage (~/k)
  templates

i-terraform (git repo)
  doc
  bin
  plans (~/p)
    k-dev

i-tanzu (git repo)
  doc
  bin
  application.yaml (-> appofapps; see the sketch after this listing)
  apps (~/t)
    appofapps/ (inc all clusters)
    k-dev/cluster.yaml
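
That application.yaml (-> appofapps) layout is the argocd app-of-apps pattern: one root Application points at a folder of Application manifests, and argocd deploys everything it finds there.  A minimal sketch of the root app, assuming argocd is installed and using a hypothetical repo url:

  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: appofapps
    namespace: argocd
  spec:
    project: default
    source:
      repoURL: https://git.example.com/team/i-tanzu.git  # hypothetical repo url
      targetRevision: HEAD
      path: apps/appofapps
    destination:
      server: https://kubernetes.default.svc
      namespace: argocd
    syncPolicy:
      automated:
        prune: true
        selfHeal: true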

src
  <gitrepo>/<user> (~/mysrc) (these are each git repos)
  <gitrepo>/<team> (~/s) (these are each git repos)
    j-lifecycle-cluster-decommission
    j-lifecycle-cluster-deploy
      - deploy cluster
      - create git repo
      - create adgroups
      - register with argocd global
    j-lifecycle-cluster-update
    j-lido-dev
    j-lido-test
    j-lido-prod
    k-dev
      application.yaml (-> appofapps)
      apps
        appofapps/ (inc all apps)
    k-exp
      application.yaml (-> appofapps)
      apps
        appofapps/ (inc all apps)
    k-prod
      application.yaml (-> appofapps)
      apps
        appofapps/ (inc all apps)

workbench-<user> (git repo)
  doc
  bin

kubernetes disaster recovery

Deploying via git using argocd or flux makes disaster recovery fairly straightforward.

Using gitops means you can delete a kubernetes cluster, spin up a new one, and have everything deployed back out in minutes.  But what about recovering the pvcs used before?

If your infrastructure implements csi, you can allocate pvcs backed by storage managed outside of the cluster.  And, it turns out, reattaching to those pvcs after a rebuild is possible, but you have to plan ahead.

Instead of writing yaml that provisions a pvc dynamically, create the pv and pvc with manually set values.  Or let the pvcs provision dynamically, then go back and edit the yaml to pin recoverable values.  The howto is right up top in the csi documentation: https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/
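
A minimal sketch of the static approach, binding a pv to a pvc by name so a rebuilt cluster reattaches to the same backing volume.  The driver name and volume handle here are hypothetical; use whatever your csi driver reports for the existing volume:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: lido-db-pv
  spec:
    capacity:
      storage: 10Gi
    accessModes:
      - ReadWriteOnce
    persistentVolumeReclaimPolicy: Retain
    storageClassName: ""
    csi:
      driver: csi.example.com  # hypothetical driver name
      volumeHandle: vol-0123456789  # id of the pre-existing volume
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: lido-db
  spec:
    accessModes:
      - ReadWriteOnce
    storageClassName: ""
    resources:
      requests:
        storage: 10Gi
    volumeName: lido-db-pv  # bind to the pv above instead of provisioning a new one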

Similarly, it is common for applications to spin up with randomly generated admin passwords and the like.  But imagine a recovery scenario: a new cluster is stood up, and the last thing you want is a new random password stood up with it.  Keep the password in a vault and have the app reference the vault.
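
One way to wire that up is the external-secrets operator, which syncs a vault entry into a regular kubernetes secret the app can reference.  A sketch, assuming a ClusterSecretStore named vault-backend already points at the vault (the store name and vault paths are hypothetical):

  apiVersion: external-secrets.io/v1beta1
  kind: ExternalSecret
  metadata:
    name: lido-admin
  spec:
    refreshInterval: 1h
    secretStoreRef:
      name: vault-backend  # hypothetical store pointing at the vault
      kind: ClusterSecretStore
    target:
      name: lido-admin  # the k8s secret the app references
    data:
      - secretKey: admin-password
        remoteRef:
          key: secret/lido  # hypothetical vault path
          property: admin-password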

These two steps do add a little work, but that's the point of taking a little more time to do things right, and in a production environment you want that.

Infrastructure-side solution: https://velero.io/

Todo: create a video of deleting a cluster and recovering all apps onto a new cluster, pvcs included (without any extra work on the recovery side).

And then we have argocd, with its ability to restore an entire cluster, or even multiple clusters simultaneously, simply due to its inherent declarative nature.
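
The multi-cluster part falls out of the same app-of-apps pattern: each child Application simply sets its destination to a different cluster registered with argocd.  A sketch, with a hypothetical repo url and cluster api endpoint:

  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: k-dev
    namespace: argocd
  spec:
    project: default
    source:
      repoURL: https://git.example.com/team/k-dev.git  # hypothetical repo url
      targetRevision: HEAD
      path: apps
    destination:
      server: https://k-dev.example.com:6443  # a remote cluster registered with argocd
      namespace: default
    syncPolicy:
      automated:
        prune: true
        selfHeal: true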

more kubernetes magic