I’ve debugged within a docker container in the past, so I expected similar integration with kubernetes, and I was not disappointed.
https://docs.microsoft.com/en-us/visualstudio/bridge/bridge-to-kubernetes-vs
He who understands it, earns it; he who doesn’t, pays it.
brokermanager
brokermanager-db
broker-tdameritrade
broker-tdameritrade-db
broker-tdameritrade-mq
broker-tdameritrade-www (oidc login)
broker-kraken
broker-kraken-db
broker-kraken-mq
broker-kraken-www (oidc login)
tradermanager
tradermanager-db
tradermanager-mq
tradermanager-www (gui)
trader-<algo>
trader-<algo>-mq
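If these components land on kubernetes the way the rest of this setup does, one way to group a broker stack is a namespace per broker with one manifest per component. A minimal kustomize sketch (the file names and namespace are hypothetical; only the component names come from the list above):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: broker-kraken          # hypothetical: one namespace per broker stack
resources:
- broker-kraken.yaml              # the broker service itself
- broker-kraken-db.yaml           # its database
- broker-kraken-mq.yaml           # its message queue
- broker-kraken-www.yaml          # the oidc login frontend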
After reinstalling everything, including my main linux workbench system, it seemed like the right time to finally get my home directory into git. Taking all the lessons learned up to this point, it also seemed like a good idea to clean up my git repo strategy. The revised strategy:
[Git repos]
Personal:
- workbench-<user>
Team (i for infrastructure):
- i-ansible
- i-jenkins (needed ?)
- i-kubernetes (needed?)
- i-terraform
- i-tanzu
Project related (source code):
- p-lido (use tagging dev/test/prod)
    doc
    src
Jenkins project pipelines:
- j-lifecycle-cluster-decommission
- j-lifecycle-cluster-deploy
- j-lifecycle-cluster-update
- j-lido-dev
- j-lido-test
- j-lido-prod
Cluster app deployments:
- k-core
- k-dev
- k-exp
- k-prod
[Folder structure]
i-ansible (git repo)
    doc
    bin
    plays (~/a)
i-jenkins (git repo) (needed ?)
    doc
    bin
    pipelines (~/j)
i-kubernetes (git repo) (needed ?)
    doc
    bin
    manage (~/k)
    templates
i-terraform (git repo)
    doc
    bin
    plans (~/p)
        k-dev
i-tanzu (git repo)
    doc
    bin
    application.yaml (-> appofapps) (sketched after this layout)
    apps (~/t)
        appofapps/ (inc all clusters)
            k-dev/cluster.yaml
src
    <gitrepo>/<user> (~/mysrc) (these are each git repos)
    <gitrepo>/<team> (~/s) (these are each git repos)
j-lifecycle-cluster-decommission
j-lifecycle-cluster-deploy
    - deploy cluster
    - create git repo
    - create adgroups
    - register with argocd global
j-lifecycle-cluster-update
j-lido-dev
j-lido-test
j-lido-prod
k-dev
    application.yaml (-> appofapps)
    apps
        appofapps/ (inc all apps)
k-exp
    application.yaml (-> appofapps)
    apps
        appofapps/ (inc all apps)
k-prod
    application.yaml (-> appofapps)
    apps
        appofapps/ (inc all apps)
workbench-<user> (git repo)
    doc
    bin
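The application.yaml (-> appofapps) entries in the layout above are argocd's app-of-apps pattern: one root Application that points at a folder of further Application manifests, which argocd then syncs in turn. A minimal sketch of such a root Application for one of the cluster repos (the repo URL and names are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: appofapps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/k-dev.git   # placeholder: the cluster's git repo
    targetRevision: HEAD
    path: apps/appofapps                         # folder containing one Application per app
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true                                # remove apps that disappear from git
      selfHeal: true                             # revert manual drift back to what is in git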
Deploying via git using argocd or flux makes disaster recovery fairly straightforward.
Using gitops means you can delete a kubernetes cluster, spin up a new one, and have everything deployed back out in minutes. But what about recovering the pvcs used before?
If your infrastructure implements csi, then you can allocate pvcs backed by storage that is managed outside the cluster. And, it turns out, reattaching to those pvcs after a rebuild is possible, but you have to plan ahead.
Instead of writing yaml that spins up a pvc automatically, create the pv and pvc with manually set values. Or spin the pvcs up automatically, then go back and modify the yaml to set recoverable values. The howto is right at the top of the csi documentation: https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/
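A minimal sketch of the pre-provisioned approach, assuming a csi driver is already installed (the driver name, volume handle, size, and names are placeholders; the point is that the volumeHandle is a known, stable id you can recreate after a rebuild):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep the backing volume if the claim is deleted
  storageClassName: ""
  csi:
    driver: csi.example.com               # placeholder: your csi driver
    volumeHandle: vol-0123456789          # placeholder: the fixed id of the existing volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ""
  volumeName: app-data                    # bind to the pre-created pv instead of provisioning a new one
  resources:
    requests:
      storage: 10Gi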
Similarly, it is common for applications to spin up with randomly generated admin passwords and such. But imagine a recovery scenario where a new cluster is stood up: you don’t want a new password stood up with it. Keep the password in a vault and have the deployment reference the vault.
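For example, instead of letting a chart generate a random admin password at install time, the workload can read it from a secret that is kept in sync from the vault (how the secret gets synced into the cluster, e.g. via the External Secrets Operator, depends on your setup; all names here are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app                                 # placeholder application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: registry.example.com/example-app:1.0   # placeholder image
        env:
        - name: ADMIN_PASSWORD                        # hypothetical variable name
          valueFrom:
            secretKeyRef:
              name: example-app-admin                 # secret synced from the vault, not generated per install
              key: password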
These two steps do add a little work, but it’s the idea of taking a little more time to do things right, and in a production environment you want that.
An infrastructure-side solution: https://velero.io/
Todo: Create a video deleting a cluster and recovering all apps with a new cluster, recovering pvcs also (without any extra work on the recovery side).
OIDC is always preferred if possible. At this time in history not all projects have OIDC support, though some can be extended via an extension or plugin to accomplish the goal. I’ve got enough experience to help projects get over this hurdle and get OIDC working. If I could be paid just to help out open source projects I might go for it.
Here’s a pull request for the taiga helm chart I’ve been using. I’ve used taiga for years via docker and am happy to be able to help out in this way now that I’m using kubernetes and helm charts. In this case I borrowed a technique from a nextcloud helm chart, and it works perfectly for this taiga helm chart: https://github.com/nemonik/taiga-helm/pull/6
Traditionally, Shinto shrines are rebuilt, exactly the same, next to the old shrine every so many years. The old shrine is removed, and when the time comes it will all be rebuilt again.
Something similar can apply to home environments. Recently I nuked everything and rebuilt from the ground up, something I’ve always done every six months or a year, for security reasons and to make sure I’m always getting the best performance out of my infrastructure.
Such reinstalling is a natural fit for kubernetes. There are several methods for spinning up a cluster, and after that, because everything in kubernetes is just yaml files, it is easy to spin up the services you had running before and watch them self-register in the new dns and generate fresh certificates against the new active directory certificate authority. Amazing. Kubernetes is truly a work of art.
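Assuming the dns registration and certificates come from external-dns and cert-manager (with an issuer backed by the active directory certificate authority), the only thing each app needs to carry in git is an ingress along these lines (host, issuer, and service names are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
  annotations:
    cert-manager.io/cluster-issuer: adcs-issuer   # placeholder: issuer backed by the AD certificate authority
spec:
  rules:
  - host: example-app.home.example.com            # external-dns publishes this host in the new dns
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-app
            port:
              number: 8080
  tls:
  - hosts:
    - example-app.home.example.com
    secretName: example-app-tls                   # cert-manager issues the certificate into this secret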