questions related to moving to kubernetes

Do you have three pipelines for each app? dev/test/prod?
– or just one, with a manual step at the end to deploy into production?
– or do you deploy into a binary repository such as Artifactory (with a later manual step into production)?
– Jenkins / Azure DevOps / TFS / other?
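To make the question concrete, here's roughly what I mean by "one pipeline with a manual step at the end" — a sketch in Azure DevOps YAML (stage names, namespaces, and build commands are placeholders of mine, not anyone's real setup; the "manual step" would be an approval check configured on the production environment):

```yaml
# azure-pipelines.yml sketch: dev deploys automatically, prod waits on an
# approval check configured on the 'production' environment in Azure DevOps
trigger:
  branches:
    include: [main]

stages:
  - stage: Build
    jobs:
      - job: build_and_test
        pool: { vmImage: ubuntu-latest }
        steps:
          - script: make test && make image   # placeholder build commands

  - stage: DeployDev
    dependsOn: Build
    jobs:
      - deployment: dev
        environment: dev          # no approval check -> deploys automatically
        strategy:
          runOnce:
            deploy:
              steps:
                - script: kubectl apply -f k8s/ --namespace dev

  - stage: DeployProd
    dependsOn: DeployDev
    jobs:
      - deployment: prod
        environment: production   # the approval check here is the manual step
        strategy:
          runOnce:
            deploy:
              steps:
                - script: kubectl apply -f k8s/ --namespace prod
```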

Do you regenerate certificates with each deployment?
– If not, how do you update certificates?
– Can you update all certs en masse if you wanted to?
(cert manager)
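For reference, the cert-manager route I'm picturing: each cert becomes a Certificate resource pointing at an issuer, and renewal happens in the controller rather than in the deployment pipeline. A minimal sketch — the names, namespace, and ClusterIssuer are hypothetical:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: keycloak-tls            # hypothetical name
  namespace: auth               # hypothetical namespace
spec:
  secretName: keycloak-tls      # cert-manager writes the keypair into this Secret
  duration: 2160h               # 90 days
  renewBefore: 360h             # renewed automatically 15 days before expiry
  dnsNames:
    - keycloak.example.internal # hypothetical hostname
  issuerRef:
    name: internal-ca           # hypothetical ClusterIssuer
    kind: ClusterIssuer
```

With that in place, "update all certs en masse" stops being a redeploy question — cert-manager renews them on its own schedule, and (if I understand its tooling right) cmctl has a renew command for forcing it.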

The guy who left was doing ansible, but ansible is usually associated with maintaining software on VMs.
– Was he actually using ansible mostly to work with docker / kubernetes? (e.g. community.kubernetes.k8s)
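What I mean: if his playbooks looked like the sketch below, he was really driving kubernetes from ansible, not configuring VMs. (Namespace and file path are hypothetical; the collection has since been renamed kubernetes.core.)

```yaml
# playbook sketch: ansible as a kubernetes client rather than a VM configurer
- hosts: localhost
  connection: local
  tasks:
    - name: Deploy keycloak manifest
      community.kubernetes.k8s:    # renamed kubernetes.core.k8s in newer collections
        state: present
        namespace: auth            # hypothetical namespace
        src: files/keycloak-deployment.yaml
```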

How are you ensuring containers can be restarted with a new image and they’ll keep working as if nothing happened?
– volumes via network shares that a networking team handles keeping backed up for you?
– databases hosted by a database team which they handle keeping backed up for you?
(nfs provisioning or something similar via a provisioning deployment providing a storageClass, sometimes database provided by db team)
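The shape I'm imagining for the nfs case — app state lives in a claim against the provisioner's storageClass, so a container restarted with a new image finds its data where it left it. Claim name and class name are hypothetical:

```yaml
# sketch: a PVC against an nfs-backed storageClass; the class name is
# whatever the provisioner deployment (e.g. nfs-subdir-external-provisioner)
# registered, and the networking team's share backups cover the data
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-data              # hypothetical
spec:
  storageClassName: nfs-client   # hypothetical class from the provisioner
  accessModes: [ReadWriteMany]
  resources:
    requests:
      storage: 1Gi
```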

How do you handle dev/test/prod in your ansible environment? With your pipeline scripts?
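The pattern I'd guess at: one inventory directory per environment, and the pipeline picks which one with `-i` (e.g. `ansible-playbook -i inventories/dev site.yml`). A sketch of one environment's vars — the layout and variable names are my invention:

```yaml
# inventories/dev/group_vars/all.yml -- hypothetical layout; test/ and prod/
# would carry the same keys with their own values
env_name: dev
kubeconfig_path: /etc/kubeconfigs/dev
image_tag: "{{ lookup('env', 'BUILD_TAG') | default('latest', true) }}"
```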

The strategy I’m using to move to all kubernetes is:

– build a vm, install docker
– install app via docker or docker-compose, get it working
– generate certificates & get app working with certificates
– consider how volumes / database can be used to keep things working when restarting a container with an updated image
– (old style: automate with ansible; new style: document steps via a README)
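Steps two through four in compose form, roughly — the service, cert paths, and volume name are placeholders, not a real app's config:

```yaml
# docker-compose.yml sketch for the "get it working with certificates" step
services:
  app:
    image: keycloak/keycloak:latest    # example app, not a recommendation
    ports:
      - "8443:8443"
    volumes:
      - ./certs:/etc/x509/https:ro     # certs generated in the previous step
      - app-data:/opt/data             # named volume: survives image updates
volumes:
  app-data:
```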

– build a vm, install minikube
– get app working via minikube
– get app working with certificates generated via previous step
– (document steps via README)
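The minikube step then reduces to a few commands along these lines (secret name, cert paths, and manifest directory are hypothetical):

```shell
# sketch of the minikube step; reuses the certs from the docker VM
minikube start
kubectl create secret tls app-tls --cert=certs/tls.crt --key=certs/tls.key
kubectl apply -f k8s/        # the app's manifests
minikube service app         # open and verify the service
```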

– get working via kubernetes & use a pipeline that releases into production after unit tests pass
(no need for middle steps, right to kubernetes & generally helm files already exist)
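The release step I have in mind, given an existing chart — release name, chart path, and values are placeholders; `--atomic` is there so a failed release rolls itself back:

```shell
# pipeline release step sketch: unit tests gate the helm release
make test && helm upgrade --install myapp charts/myapp \
  --namespace myapp --create-namespace \
  --set image.tag="$BUILD_TAG" \
  --atomic
```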

I figure that with practice, future apps would just start at kubernetes, but before getting too far into docker/minikube I thought it wise to check with someone who has an established environment.

So far I have implemented pihole, keycloak, and taiga (using a customized plugin that interacts with keycloak via oidc). Taiga will require customizing the docker file in the pipeline. Looking to avoid reinventing the wheel by building upon your strategy. Do you tend to pull the latest stable source of an app and modify their docker file with your customizations in your pipeline, or do you keep and maintain a separate, customized docker file of your own?
(helm files generally already exist)
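A third option I've been weighing, to avoid tracking upstream's whole docker file: a thin docker file of my own that layers only the customization on top of their stable image. Sketch only — the plugin path and install command are stand-ins for whatever the taiga plugin actually needs:

```dockerfile
# sketch of the "thin layer on top of upstream" option: track their stable
# image and keep only the customization, instead of forking their docker file
FROM taigaio/taiga-back:latest
# hypothetical path to the customized oidc plugin kept alongside this file
COPY taiga-contrib-oidc/ /taiga-back/taiga-contrib-oidc/
RUN pip install -e /taiga-back/taiga-contrib-oidc
```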