tether a laptop via an android phone so the laptop gets access to the phone's vpn

Android tether how to:

  1. Set up wireguard on your phone to your home network; this gives the phone access to your hosted webapps, and the laptop will get that same access through the steps below
  2. On the phone, install an sshd such as simplesshd; by default it listens on port 2222
  3. On the laptop, install adb, plug the phone in via usb, then run "adb forward tcp:2222 tcp:2222"; this forwards localhost:2222 on your laptop to the sshd running on the attached phone
  4. On the laptop, connect to the ssh server using putty @ localhost on port 2222, and also set up a port forwarding tunnel: 'Source port = 9999', select 'Dynamic', leave the destination blank
  5. This sets up a socks proxy. Now on the laptop run chrome pointed at the socks proxy: c:\…\chrome.exe --proxy-server="socks5://localhost:9999" (easiest to just edit the shortcut).  ** Before launching chrome this way you have to close all existing chrome instances, otherwise it will appear not to work

There you go: your laptop can now reach your home webapps, since its traffic is proxied through the phone, which is running the vpn.  Note: you'll also have to enable developer options / usb debugging on the phone in order to use adb.

This is perhaps not as secure as just using your phone's built-in tether options; mine seems to put tethered connections behind a NAT, which is a good idea, but in my case I wanted my laptop to use the vpn the phone was using.  Good luck!
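For reference, the laptop side boils down to something like this (a sketch using the openssh client instead of putty; the username is a placeholder, and the chrome path is whatever it is on your machine):

adb forward tcp:2222 tcp:2222
ssh -p 2222 -D 9999 <user>@localhost
c:\…\chrome.exe --proxy-server="socks5://localhost:9999"

The -D 9999 gives the same dynamic/socks5 forward as the putty tunnel in step 4.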

Additional note, openvpn:

Instead of configuring chrome to use the socks5 proxy, you can point openvpn at the socks5 proxy; then all of your networking will work, not just chrome. Just add the following to your openvpn client config (you'll also need to set up an openvpn server, of course; the above setup does not require it to be open to the internet, since we access it via the ssh tunnel):

proto tcp
socks-proxy localhost 9999
connect-retry-max 1

remote <openvpn_ip> <openvpn_port> tcp

Actually, maybe the socks5 proxy isn't needed at all when using openvpn; the openvpn port could just be forwarded over the ssh connection instead, no? Something like the sketch below.
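If so, the ssh session would use a plain local forward instead of the dynamic one, and the socks-proxy line would go away. A sketch, with 1194 as an arbitrary local port and the openvpn server address as above:

ssh -p 2222 -L 1194:<openvpn_ip>:<openvpn_port> <user>@localhost

and in the openvpn client config:

proto tcp
remote localhost 1194 tcp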

k8s-at-home, another project deprecated while at its prime

The open source community is capable of incredible things, with people working their main jobs and then building things outside of work … but it is all too common for people to also want to have a life outside of work.

We’ve lost another project just due to a lack of maintainers / availability.  It’s sad when it happens.  Even I didn’t have time to help out, and now it’s gone.

If only there were a way to pay people to maintain open source projects.

Think I'll raise a drink tonight to the k8s-at-home folks; thanks for everything you did.

Rearchitecting lido for kubernetes

Deployments

brokermanager
brokermanager-db
broker-tdameritrade
broker-tdameritrade-db
broker-tdameritrade-mq
broker-tdameritrade-www (oidc login)
broker-kraken
broker-kraken-db
broker-kraken-mq
broker-kraken-www (oidc login)
tradermanager
tradermanager-db
tradermanager-mq
tradermanager-www (gui)
trader-<algo>
trader-<algo>-mq
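Each of these lands in the cluster as a regular kubernetes Deployment (plus a Service, config, and secrets as needed). As a rough sketch of the shape, a hypothetical manifest for brokermanager (image, port, and secret names are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: brokermanager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: brokermanager
  template:
    metadata:
      labels:
        app: brokermanager
    spec:
      containers:
        - name: brokermanager
          image: <registry>/lido/brokermanager:dev   # placeholder image
          ports:
            - containerPort: 8080                    # placeholder port
          envFrom:
            - secretRef:
                name: brokermanager-db               # hypothetical secret with db connection details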

git-based homedir folder structure (and git repos) using lessons learned

After reinstalling everything, including my main linux workbench system, it seemed the right time to finally get my home directory into git.  Taking all lessons learned up to this point, it also seemed a good idea to clean up my git repo strategy.  The revised strategy:

[Git repos]

Personal:
- workbench-<user>

Team (i for infrastructure):
- i-ansible
- i-jenkins (needed ?)
- i-kubernetes (needed?)
- i-terraform
- i-tanzu

Project related: (source code)
- p-lido (use tagging dev/test/prod)
    doc
    src

Jenkins project pipelines:
- j-lifecycle-cluster-decommission
- j-lifecycle-cluster-deploy
- j-lifecycle-cluster-update
- j-lido-dev
- j-lido-test
- j-lido-prod

Cluster app deployments:
- k-core
- k-dev
- k-exp
- k-prod

[Folder structure]

i-ansible (git repo)
  doc
  bin
  plays ( ~/a )

i-jenkins (git repo) (needed ?)
  doc
  bin
  pipelines ( ~/j )

i-kubernetes (git repo) (needed ?)
  doc
  bin
  manage ( ~/k )
  templates

i-terraform (git repo)
  doc
  bin
  plans (~/p)
    k-dev

i-tanzu (git repo)
  doc
  bin
  application.yaml (-> appofapps)
  apps (~/t)
    appofapps/ (inc all clusters)
    k-dev/cluster.yaml

src
  <gitrepo>/<user> (~/mysrc) (these are each git repos)
  <gitrepo>/<team> (~/s) (these are each git repos)
    j-lifecycle-cluster-decommission
    j-lifecycle-cluster-deploy
    - deploy cluster
    - create git repo
    - create adgroups
    - register with argocd global
    j-lifecycle-cluster-update
    j-lido-dev
    j-lido-test
    j-lido-prod
    k-dev
      application.yaml (-> appofapps)
      apps
        appofapps/ (inc all apps)
    k-exp
      application.yaml (-> appofapps)
      apps
        appofapps/ (inc all apps)
    k-prod
      application.yaml (-> appofapps)
      apps
        appofapps/ (inc all apps)

workbench-<user> (git repo)
  doc
  bin
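On a fresh workstation, wiring this together is mostly cloning and symlinking the short paths noted above. A sketch, with <gitrepo>, <team>, and <user> as placeholders:

cd ~
git clone git@<gitrepo>:<team>/i-ansible.git
git clone git@<gitrepo>:<team>/i-terraform.git
git clone git@<gitrepo>:<team>/i-tanzu.git
git clone git@<gitrepo>:<user>/workbench-<user>.git
ln -s ~/i-ansible/plays   ~/a
ln -s ~/i-terraform/plans ~/p
ln -s ~/i-tanzu/apps      ~/t
ln -s ~/src/<gitrepo>/<user> ~/mysrc
ln -s ~/src/<gitrepo>/<team> ~/s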

kubernetes disaster recovery

Deploying via git using argocd or flux makes disaster recovery fairly straightforward.

Using gitops means you can delete a kubernetes cluster, spin up a new one,  and have everything deployed back out in minutes.  But what about recovering the pvcs used before?

If your infrastructure implements csi, then you can allocate pvcs backed by storage managed outside of the cluster.  And, it turns out, reattaching to those pvcs is possible, but you have to plan ahead.

Instead of writing yaml that provisions a pvc dynamically, create the pv and pvc using manually set values (static provisioning), as sketched below.  Or provision the pvcs dynamically and then go back and modify the yaml to set recoverable values.  The howto is right up top in the csi documentation: https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/
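A minimal sketch of the static version, with the csi driver name and volume id as placeholders; the pvc pins the pv by name, and the empty storageClassName keeps the dynamic provisioner from intercepting the claim:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: lido-db-pv
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  csi:
    driver: <csi-driver-name>
    volumeHandle: <existing-volume-id>   # the pre-existing volume to reattach
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lido-db
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ""
  volumeName: lido-db-pv                 # bind to the pv above
  resources:
    requests:
      storage: 10Gi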

Similarly, it is common for applications to spin up with randomly generated admin passwords and such.  However, in a recovery scenario where a new cluster is stood up, you don't want a new random password generated.  Keep the password in a vault and reference the vault instead.
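One way to do that (assuming something like the External Secrets Operator is deployed and pointed at the vault) is to sync the password into a regular kubernetes secret and have the app reference that existing secret instead of generating one. A sketch, with store and key names made up:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: brokermanager-admin
spec:
  secretStoreRef:
    name: vault-backend          # placeholder SecretStore pointing at the vault
    kind: ClusterSecretStore
  target:
    name: brokermanager-admin    # the kubernetes secret the app is told to use
  data:
    - secretKey: admin-password
      remoteRef:
        key: lido/brokermanager  # placeholder vault path
        property: admin-password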

These two steps do add a little work; it's the idea of taking a little more time to do things right, and in a production environment you want that.

Infrastructure side solution: https://velero.io/

Todo:  Create a video deleting a cluster and recovering all apps with a new cluster, recovering pvcs also (without any extra work on the recovery side).