(idea) ssh menu and ssh menu webapi

Single linux system:
When you are using linux and need to ssh somewhere, instead of typing ssh you type ‘s’, which displays a menu of past ssh connections you can connect to, sorted by name or by most recently used, with the ability to adjust the sort, edit entries, delete entries, etc.
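
A minimal sketch of the single-system idea, assuming the menu is built from shell history (a real tool would keep its own store so sorting, editing, and deleting work properly):

# build a most-recently-used menu of hosts from bash history (illustrative only)
s() {
  local hosts host
  mapfile -t hosts < <(grep -oP '(?<=^ssh )\S+' ~/.bash_history | tac | awk '!seen[$0]++')
  select host in "${hosts[@]}"; do
    [ -n "$host" ] && ssh "$host"
    break
  done
}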

Multiple linux systems:
I’m not the first to have this idea; there are a few sshmenu programs out there. But what if we were to create a webapi (and host it via kubernetes, of course) with OIDC enabled? Then on any linux system, when you type ‘s’ it would ask for the sshmenu server (or use a default server from a config file), pop up a web browser for you to log in, obtain an apikey, and use that apikey from that point on to display your history of ssh connections on all your linux systems (keeping the sshmenu in sync on all of them). There could be a mobile app and a web browser app as well.
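
A rough sketch of the client side, assuming a hypothetical sshmenu server; the server URL, config file paths, endpoints, and apikey handling are all assumptions:

# read the sshmenu server from a config file, falling back to a default (hypothetical paths)
SSHMENU_SERVER=$(grep -s '^server=' ~/.config/sshmenu.conf | cut -d= -f2)
SSHMENU_SERVER=${SSHMENU_SERVER:-https://sshmenu.example.com}
APIKEY=$(cat ~/.config/sshmenu.apikey)

# fetch the shared connection history (hypothetical endpoint, one host per line)
mapfile -t hosts < <(curl -fsS -H "Authorization: Bearer ${APIKEY}" \
  "${SSHMENU_SERVER}/api/connections")

# present the menu and connect
select host in "${hosts[@]}"; do
  [ -n "$host" ] && ssh "$host"
  break
done

# record the connection so every other system's menu stays in sync
curl -fsS -X POST -H "Authorization: Bearer ${APIKEY}" \
  -d "{\"host\":\"${host}\"}" "${SSHMENU_SERVER}/api/connections" >/dev/null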

Enterprise ready:
Using group claims it would be possible to have admin users and regular users, of course, but a user could also be added to a group and then see all of that group's shared ssh connections, so a team could share out all their common ssh connections.

(gitops) argocd phoenix configuration: clusterapi with vcluster provider

Standardized git repo layouts help to keep deployments consistent and clean (an example seed application is sketched after the layouts):

k-argocd
- /appofapps/clusters/application.yaml
- /apps
  - /argocd-seed/update.sh
  - /argocd/applicationset.yaml
  - /clusterapi/applicationset.yaml
  - /daytwo/applicationset.yaml
- /projects
  - /addons.yaml
  - /developer.yaml
  - /devsecops.yaml

k-argocd-addons
- /apps
  - /adcs-issuer-system/applicationset.yaml
  - /adcs-issuer-system/base/Chart.yaml
  - /cert-manager/applicationset.yaml
  - /external-dns/applicationset.yaml
  - /external-dns-root/applicationset.yaml
  - /fluent-bit/applicationset.yaml
  - /kasten/applicationset.yaml
  - /nginx-ingress/applicationset.yaml
  - /metrics-server/applicationset.yaml
  - /pinniped-concierge/applicationset.yaml
  - /prometheus/applicationset.yaml

k-argocd-clusters
- /clusters
  - /vc-non.yaml
  - /vc-prod.yaml

k-vc-non
- /appofapps
  - /namespaces/application.yaml
- /apps
  - /example/applicationset.yaml
  - /example/base/Chart.yaml
- /namespaces
  - /example/namespace.yaml
  - /example/resourcequota.yaml
  - /example/servicemesh.yaml

k-vc-prod
- /appofapps
  - /namespaces/application.yaml
- /apps
  - /example/applicationset.yaml
  - /example/base/Chart.yaml
- /namespaces
  - /example/namespace.yaml
  - /example/resourcequota.yaml
  - /example/servicemesh.yaml
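
To illustrate how a layout like this gets bootstrapped, here is a hedged sketch of an app-of-apps style seed Application pointing argocd at the k-argocd repo; the repoURL, path, project, and application name are assumptions (the actual seeding here is handled by /apps/argocd-seed/update.sh):

# apply a seed Application so argocd syncs everything under the appofapps path (illustrative values)
kubectl apply -n argocd -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: appofapps-clusters
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/k-argocd.git
    path: appofapps/clusters
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated: {}
EOF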

daytwo automates several steps needed when first deploying clusters:

  • registers the cluster with argocd, and adds an annotation allowing applications to target the cluster by name
  • copies labels from the cluster yaml to the argocd cluster secret, useful for targeting addons (a sketch follows this list)
  • generates a pinniped kubeconfig, allowing initial access without needing the admin kubeconfig
  • registers the cluster as a kasten secondary cluster (if kasten is being used)
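
A hedged sketch of the label-copy step, assuming the clusterapi Cluster lives in a 'clusters' namespace, the argocd cluster secret is named after the cluster (both assumptions), and jq is available:

CLUSTER=vc-non

# read the labels off the clusterapi Cluster resource (namespace is an assumption)
LABELS=$(kubectl get cluster "$CLUSTER" -n clusters -o json | jq -c '.metadata.labels')

# copy them onto the argocd cluster secret so applicationsets can match on them
kubectl patch secret "cluster-${CLUSTER}" -n argocd --type merge \
  -p "{\"metadata\":{\"labels\":${LABELS}}}"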

Scripts / pipelines are needed to:

  • provision / decommission a cluster
    • adjust cluster resources
  • add / remove a namespace (a pipeline sketch follows this list)
    • adjust namespace resource quota
    • grant developers access to namespaces
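
A hedged sketch of the "add a namespace" pipeline step, following the k-vc-non layout above; the repo URL, namespace name, and quota values are assumptions:

# clone the cluster's config repo (hypothetical URL)
git clone git@git.example.com:k-vc-non.git
cd k-vc-non

# add the namespace and a resource quota under /namespaces, matching the layout above
mkdir -p namespaces/team-a
cat > namespaces/team-a/namespace.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
EOF
cat > namespaces/team-a/resourcequota.yaml <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
EOF

# commit; argocd's appofapps/namespaces application picks up the change and syncs it
git add namespaces/team-a
git commit -m "add team-a namespace"
git push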

(idea) restarting a pod via webapi

At the end of a pipeline it can be nice to restart a pod. In production this happens automatically via gitops, but in dev it is nice not to have to wait for git to sync. There are multiple ways to do this; common ones are:

  1. just put the cluster kubeconfig in a secret in the pipeline and use that to restart the pod (a little too powerful)
  2. create a service account, acquire its token, and place it in the pipeline (safest, but takes some work for each app)

What about a solution similar to Reloader? Reloader watches for changes in configmaps and secrets associated with a certain annotation, and when they change it restarts the workloads that use them. We could simply use Reloader and make a change to a configmap in order to trigger the restart.
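
A hedged sketch of that approach, assuming the deployment already carries a Reloader annotation such as reloader.stakater.com/auto: "true"; the configmap, key, and namespace names are placeholders:

# bump a value in a configmap the deployment mounts; reloader notices and restarts the pods
kubectl patch configmap my-app-config -n dev --type merge \
  -p "{\"data\":{\"lastDeploy\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"}}"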

However, what about creating a controller which listens for a webapi call asking for a deployment to be restarted? A pipeline could then call the appropriate url to get things restarted. By deploying it via argocd using an applicationset, and by using a url convention based on the cluster name, all development clusters could be enabled to use this method in their pipelines; consumers would only need to annotate their deployments/statefulsets/etc …
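
The pipeline side of that idea might look something like this; the url convention, endpoint, token, and payload are all hypothetical since the controller is only an idea:

# ask the (hypothetical) restart controller on the target cluster to bounce a deployment
CLUSTER_NAME=dev-cluster-01
curl -fsS -X POST "https://restarter.${CLUSTER_NAME}.example.com/api/v1/restart" \
  -H "Authorization: Bearer ${PIPELINE_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"namespace": "dev", "kind": "deployment", "name": "my-app"}'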

(bash) way to implement an idle processing loop

Bash is what it is: often a quick solution to get something done. Once things start to become too advanced, you should probably be writing in a higher-level language.

With that said, here’s a way to implement an idle processing loop in bash while a background command keeps writing to the console:

#!/bin/bash

# Start the tail in a background process and track its PID so we can take action later
# (redirect its output to this shell's stdout so it still shows on the console)
tail -f /var/log/* > /proc/$$/fd/1 &
PID_TAIL=$!

# Idle processing loop
while true; do

  # Perform whatever work is needed; exit once some condition is met
  # (the condition below is only an example placeholder)
  if [ -f /tmp/stop ]; then

    # Stop the tail process
    disown $PID_TAIL
    kill $PID_TAIL

    # Exit the idle processing loop
    break
  fi

  # Avoid maxing out the cpu
  sleep 1

done

The trick here is that ‘$$’ is the current shell’s PID, and /proc/$$/fd/1 is the file representing that shell’s stdout. By redirecting the tail command to this file and running it in the background with ‘&’, we still see the tail output on our console, yet the bash script is actually in a loop, able to do whatever it wants while the tail is running. When the script sees some condition it cares about, it can stop the tail process and exit, or just keep working until someone presses Ctrl-c.

For more information see: https://www.xmodulo.com/tcp-udp-socket-bash-shell.html

flutter webapp, securely calling a backend

Just thinking out loud,

Since a flutter webapp runs entirely in the client browser, some commonly used methods of accessing a backend which requires credentials are not safe.

  • Loading credentials via environment variables, in the way containers commonly do, isn’t safe because the .env file containing the environment variables can be browsed directly. https://github.com/java-james/flutter_dotenv/issues/74
  • Even if you are able to somehow get the credentials into the app, if they are credentials you don’t want the user to know, they can be exposed via dev tools … as everything is living in the client browser.

So how to connect to a web service backend from flutter?

You have to use an in-between backend, here are some options:

  • Implement a webapi which has methods created just for the flutter app
  • Implement a webapi with the intention of just passing the request along to the backend and adding a header with the needed token, while also checking the request to be sure it’s only the type of request we want to allow.
  • Create an ingress passthrough which adds an appropriate token header and then calls the backend; careful though, does the token give the user too much access? (a sketch follows below)

Note, this in-between webapi must be reachable from the client web browser, so it most likely must be protected; OIDC is a good option. Using the same OIDC parameters on both the flutter webapp and the in-between webapi will let an OIDC token gathered via the webapp be passed along to the webapi without an additional login.
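
A hedged sketch of the ingress passthrough option using ingress-nginx, assuming snippet annotations are enabled on the controller; the host, backend service, namespace, and token value are placeholders:

# an ingress that injects a backend token header before proxying (illustrative values only)
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-passthrough
  namespace: flutter-app
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Backend-Token "replace-with-token";
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: real-backend
            port:
              number: 443
EOF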

daytwo is almost ready for beta testing (argocd-daytwo)

Ability to use FQDN to access workload clusters, rather than ip address

Doing my part to try and help gitops take over the world.

By targeting the kubernetes cluster an application installs to by fqdn rather than by ip address, it becomes possible to delete a cluster, stand up a new one, and watch as all the applications reinstall automatically to the new cluster via gitops using something like argocd.

Feature request submitted (vcenter clusterapi provider):
https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/issues/2098

If implemented, this would enable all consumers of vmware vcenter / tanzu to better implement gitops.

It’s tempting to drop everything and be the one to implement the code and submit a pull request, just to get the credit. It would feel good to know you’ve helped so many people.

Creating webapps w/ flutter & deploying to kubernetes

In the past I’ve written guis in many languages, but found my passion more in backend server apis. Along this line, REST, swagger, OIDC, and websockets have been a dream come true.

Then one day I discovered flutter. Created by google, it is cross-platform, compiling natively for both Android and iPhone.

Flutter changed everything for me; it made me love creating guis. It felt like the first real solution for creating a gui, with everything prior being trailblazer projects that helped figure out what was needed to one day lead to the creation of flutter.

It turns out flutter can be used to create windows & linux apps as well, with some exceptions. On a mobile device you get a database with your application for free, so if you deploy on windows / linux you have to solve the database on your own.

Today I noticed that flutter can also be used to create a webapp. I’m thinking of creating a webapp, placing it inside a container, and deploying it to kubernetes along with a database in the same namespace, making for a situation similar to how mobile apps get a database. So much potential; this is exciting.

k8s daytwo operation ideas

controllers:

  • watch for cluster.yaml updates & reflect addons annotations from the cluster.yaml to the argocd cluster secrets
  • watch argocd repo certs and check for expiration, re-adding them automatically
  • tanzu: watch for the service associated with a cluster to appear & annotate it with an fqdn automatically (in order to add a dns entry for each cluster kubeapi)
  • watch for certificate authority expiration and update the ‘ca-bundle’ stored in vault (sketched below)
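
A hedged sketch of the last idea as a simple periodic check rather than a controller; the CA path, vault mount, and 30-day window are assumptions:

CA_FILE=/etc/ssl/certs/internal-ca.pem

# openssl -checkend returns non-zero if the cert expires within the given number of seconds
if ! openssl x509 -checkend $((30*24*3600)) -noout -in "$CA_FILE"; then
  echo "warning: CA expires within 30 days"
fi

# push the current CA into vault so the 'ca-bundle' secret stays up to date
vault kv put secret/ca-bundle bundle=@"$CA_FILE"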

Idea: repo-manager

Why

In a world of containers, developers need to have multiple linux repos mirrored on prem for use when building or modifying images.

What

Similar to how cert-manager works with providers to extend functionality, repo-manager can be extended to mirror additional linux distros.

How

repo-manager provides an operator-style controller which watches for the CRD type ‘mirror.aarr.xyz’ and manages, for each repo mirror (an illustrative resource is sketched after this list):

  • the deployment of a pod for mirroring
  • a pvc for each pod
  • the increasing of the pvc size as needed
  • ingress configuration to reach each repo using a subpath http(s)://mirror.<fqdn>/<path>
  • a status viewable with kubectl by displaying the relevant CRDs
  • mapping of a ca-bundle
  • repo-specific settings
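
Since repo-manager is only an idea, the resource below is purely illustrative; the apiVersion suffix, kind, and every spec field are made up to show the shape such a CRD might take:

# a hypothetical mirror resource (all field names are illustrative)
kubectl apply -f - <<'EOF'
apiVersion: mirror.aarr.xyz/v1alpha1
kind: Mirror
metadata:
  name: rocky-9
spec:
  distro: rocky
  release: "9"
  storageSize: 200Gi
  ingressPath: /rocky/9
EOF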

Also

  • an overall web interface
  • settings which apply to all repo mirrors