gge ingresses, a view into the world of kubernetes

(gge) This is so fun, taking advantage of kubernetes cronjobs …

The generic game engine will host tournaments.

I was planning on daily and weekly tournaments that everyone can participate in, while also allowing folks to create their own.  Figured I might be able to handle scheduling the tournaments with something in kubernetes, and I was not disappointed!

The workflow goes like this:

  • create a tournament instance
  • create a cronjob with the tournament start time
  • when the cronjob fires, it creates a pod that runs curl against an internal-only tournament api to start the tournament
  • tournaments run only once by default, but setting a value makes them repeat on the cronjob schedule
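The middle steps can be sketched as a CronJob manifest; everything here (names, image, API path) is an assumption about the shape, not the actual manifest:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: summer-open-starter            # hypothetical tournament name
spec:
  schedule: "0 18 * * *"               # the tournament start time
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: start
            image: curlimages/curl     # any image that ships curl works
            args:                      # POST to the internal-only tournament api
            - -X
            - POST
            - http://gge-tournament/api/tournament/summer-open/start
```

A one-shot tournament would delete or suspend the cronjob after the first run; the repeat setting would simply leave it in place.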

Just have to make sure to delete the cronjob along with the tournament.  If the tournament controller is running, it receives a DELETE event and deletes the cronjob; but if someone deletes the tournament using kubectl while the controller isn’t running to catch it, there is a chance of an orphaned cronjob (oh no).  Will have to run a cleanup process now and then to look for orphaned cronjobs.  I wonder if there might be a convenient way to do that?
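One convenient option, assuming the tournament is itself a kubernetes object (a custom resource), is to add it to the cronjob’s ownerReferences; the built-in garbage collector then deletes the cronjob whenever the tournament goes away, even if the controller is down.  A rough sketch, with the API group and names as assumptions:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: summer-open-starter
  ownerReferences:
  - apiVersion: gge.example.com/v1     # assumed CRD group/version
    kind: Tournament
    name: summer-open
    uid: <uid of the live Tournament>  # must match the owning object's uid
spec:
  # schedule and jobTemplate as usual
```

With this in place the periodic cleanup becomes a safety net rather than the primary mechanism.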

Generic Boardgame Game Engine is coming along, it’s playable …

Putting this together was so much fun, and surprisingly quick to implement.
Still got a few more features to add, let’s just call this the teaser trailer …

$ k get all
NAME                                                               READY   STATUS    RESTARTS   AGE
pod/gge-controller-tournament-single-elimination-db6b6f474-fxg2c   1/1     Running   0          3d2h
pod/gge-game-77767987b5-z6kt8                                      1/1     Running   0          2d9h
pod/gge-tournament-6557567558-k4hfl                                1/1     Running   0          2d9h
pod/gge-gateway-c49fb8549-6wwc6                                    1/1     Running   0          2d9h
pod/gge-game-template-79c4bf84fc-54rng                             1/1     Running   0          2d6h
pod/gge-tournament-template-854f5c4475-txvmb                       1/1     Running   0          2d6h
pod/gge-controller-game-tic-tac-toe-576cd5d665-b4fzc               1/1     Running   0          2d3h
pod/gge-auth-5b4b8bb885-lp7j7                                      1/1     Running   0          2d2h

NAME                              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/gge-auth                  ClusterIP   …            <none>        80/TCP    4d1h
service/gge-gateway               ClusterIP   …            <none>        80/TCP    4d1h
service/gge-game                  ClusterIP   …            <none>        80/TCP    4d1h
service/gge-tournament            ClusterIP   …            <none>        80/TCP    2d12h
service/gge-tournament-template   ClusterIP   …            <none>        80/TCP    2d12h
service/gge-game-template         ClusterIP   …            <none>        80/TCP    2d11h

NAME                                                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/gge-controller-tournament-single-elimination   1/1     1            1           3d5h
deployment.apps/gge-game                                       1/1     1            1           4d1h
deployment.apps/gge-tournament                                 1/1     1            1           2d12h
deployment.apps/gge-gateway                                    1/1     1            1           4d1h
deployment.apps/gge-game-template                              1/1     1            1           2d11h
deployment.apps/gge-tournament-template                        1/1     1            1           2d12h
deployment.apps/gge-controller-game-tic-tac-toe                1/1     1            1           3d4h
deployment.apps/gge-auth                                       1/1     1            1           4d1h

NAME                                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/gge-controller-tournament-single-elimination-db6b6f474   1         1         1       3d5h
replicaset.apps/gge-game-77767987b5                                      1         1         1       4d1h
replicaset.apps/gge-tournament-6557567558                                1         1         1       2d12h
replicaset.apps/gge-gateway-c49fb8549                                    1         1         1       4d1h
replicaset.apps/gge-game-template-79c4bf84fc                             1         1         1       2d11h
replicaset.apps/gge-tournament-template-854f5c4475                       1         1         1       2d12h
replicaset.apps/gge-controller-game-tic-tac-toe-576cd5d665               1         1         1       3d4h
replicaset.apps/gge-auth-5b4b8bb885                                      1         1         1       4d1h

NAME   ENABLED   SHORT                LONG                 PATH
…      true      single-elimination   Single Elimination   /tournament/single-elmination
…      true      double-elimination   Double Elimination   /tournament/double-elmination
…      true      round-robin          Round-Robin          /tournament/round-robin

NAME   ENABLED   SHORT          LONG           PATH
…      true      connect-four   Connect Four   /game/connect-four
…      true      2048           2048           /game/2048
…      true      reversi        Reversi        /game/reversi
…      true      tic-tac-toe    Tic-Tac-Toe    /game/tic-tac-toe
…      true      tripletown     Triple Town    /game/tripletown

Redesigning gge (generic game engine), multi-player game lifecycle management, using kubernetes (for ai clients)

I’m planning a controller to process custom resource definitions that hold all the data.  With this approach a separate standalone database won’t be needed, and the solution can scale as far as desired by adding resources to the cluster itself.  Thousands of games?  Millions of games?  And if there isn’t really a limit, what’s performance going to be like?  These scaling questions need investigating.

Besides the above, web apis, REST, OIDC, and ingresses mapped to appropriate paths will provide the infrastructure.  Looking forward to experimenting more with the scaling and high-availability methods kubernetes provides.

Sure would be nice to work on open-source 100% of the time, wonder if there might be a way to make that happen …

Initial thoughts …

use cases:
- player 0 is always game controller (can do things with items)
  - 2 players are 1,2
  - 4 players are 1,2,3,4
- 2 player games
- 4 player games
- invite rejected
- invite accepted
- start game once enough players have joined
- send notice:
  - game has started
  - plays which have occurred
  - player finished their turn (if not real time strategy)
  - if play was not allowed
maybe not:
- define a unit as unique?

--- ? (a controller for each game to process just that game)

short: string
long: string
enabled: bool
path: string

[] (example instance)
short: "tic-tac-toe"
long: "Tic-Tac-Toe"
enabled: true
path: "/api/game/tic-tac-toe"
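As a sketch, the template type above could be registered as a CRD along these lines; the API group and kind names are assumptions:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: gametemplates.gge.example.com   # assumed group
spec:
  group: gge.example.com
  scope: Namespaced
  names:
    plural: gametemplates
    singular: gametemplate
    kind: GameTemplate
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              short:   {type: string}
              long:    {type: string}
              enabled: {type: boolean}
              path:    {type: string}
```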

game: string
user: string

[tic-tac-toe_<timestamp>] (example instance)
game: tic-tac-toe_<timestamp>
user: asdf <user being invited>

owner: string
description: tic-tac-toe
is_invite_only: bool
is_public_view: bool
turn_type: string
- <list>
- <list>{user: aaa, status: accepted}
- <list>{user: bbb, status: accepted}
- {id: 0, width: 3, height: 3}
- <list>{id: 1, user: aaa}
- <list>{id: 2, user: bbb}
units: (available units in crd)
- id: 0
  desc: string
  ascii: string of length 1
  imgsrc: <url>string
  - 1 (list of players allowed to use this unit)
  - add (list of actions which may be performed)
  - (optional) list of name/value attributes
- <list>

[tic-tac-toe_<timestamp>] (example instance)
owner: tloyd
is_invite_only: true
is_public_view: true
turn_type: round_robin
- 2
- {user: aaa, status: accepted}
- {user: bbb, status: accepted}
- {id: 1, user: aaa}
- {id: 2, user: bbb}
items: (units in play)
- {id: 0, unit: <id>, grid: 0, y: 1, x: 1, level: 0, player: 0}
- {id: 1, unit: <id>, grid: 0, y: 0, x: 0, level: 0, player: 1}
- id: 0
  player: 1
  - {id: 0, action: add, grid: 0, y: 1, x: 1, level: 0}
- id: 1
  player: 2
  - {id: 0, action: add, grid: 0, y: 0, x: 0, level: 0}

game: string
player: <id>
- <list>{id: 0, action: add, grid: 0, y: 1, x: 1, level: 0, unit: <id>}

[tic-tac-toe_<timestamp>] (example instance)
game: tic-tac-toe_<timestamp>
player: 1
- <list>{id: 0, action: add, grid: 0, y: 1, x: 1, level: 0, unit: <id>}

ingress: cert-manager, letsencrypt, and http -> https

For a while it seemed that the popular web browsers would automatically retry https:// when http:// didn’t work.  But after a while that no longer seemed to happen.  So, let’s do this right.

Here’s an ingress to perform the redirect from http to https:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-redirect
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 1000m
    nginx.ingress.kubernetes.io/server-snippet: |
      return 301 https://$host$request_uri;
spec:
  ingressClassName: nginx
  rules:
  - host: <hostname>
  tls:
  - hosts:
    - <hostname>

But what about an automatic certificate via letsencrypt? Do we need it? Yes, otherwise the browser displays an invalid certificate before performing the redirect. But we can’t just add the cert-manager annotations to this redirect ingress, because the callback from Let’s Encrypt won’t verify correctly with the redirect in place. Instead, we need an ingress specifically for handling the letsencrypt callback:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-redirect-letsencrypt
  annotations:
    cert-manager.io/cluster-issuer: cluster-letsencrypt-issuer
    nginx.ingress.kubernetes.io/proxy-body-size: 1000m
spec:
  ingressClassName: nginx
  rules:
  - host: <hostname>
    http:
      paths:
      - backend:
          service:
            name: exp-wordpress-xyz-travisloyd-www
            port:
              name: http
        path: /.well-known
        pathType: Prefix
  tls:
  - hosts:
    - <hostname>

Perfect, now when the certs expire they’ll be renewed automatically via letsencrypt.

(ulimit) Heads up for anyone following my videos to install kubernetes on redhat

Red Hat has an infinite default value for ulimit, which kubernetes inherits via the container runtime in use; this can result in some pods maxing out cpu and memory (haproxy and rabbitmq, for example). For containerd the following fix solved the issue:

# sed -i 's/LimitNOFILE=infinity/LimitNOFILE=65535/' /usr/lib/systemd/system/containerd.service
# systemctl daemon-reload
# systemctl restart containerd
# k delete deployment <asdf>

homelab: planning next incarnation

Thinking about redeploying my homelab from scratch, perhaps switching from xenserver back to vmware. I’d like to start out with external-secrets and have all secrets in a vault right from the beginning, and I’m also curious what a 100% open source, 100% kubernetes environment would look like. Maybe two networks: one 100% kubernetes, and a second for windows client systems. Here’s the k8s plan so far:

- manual setup of seed cluster
  - helm install argocd
  - argocd install clusterapi/crossplane/etc...
- seed-argocd deploy non-production cluster using vcluster or clusterapi/crossplane/etc...
  - deploy metallb & configure loadbalancer ip range (can we automate this w/ cluster deploy?)
  - add cluster to seed-argocd instance
- seed-argocd deploy production cluster using vcluster or clusterapi/crossplane/etc...
  - deploy metallb & configure loadbalancer ip range (can we automate this w/ cluster deploy?)
  - add cluster to seed-argocd instance
- seed-argocd deploy argocd to production cluster (k-prod)

- argocd configure storageclass
- argocd deploy hashicorp vault
  - configure as certificate authority
  - configure as keyvault
- argocd deploy external-secrets
  - configure to use keyvault
  - add secret 'ca-bundle.crt': public certificate authority certificate in DER format
  - *from now on all secrets to get values via external-secrets
- argocd deploy cert-manager
  - configure to use hashicorp vault as certificate authority
- argocd deploy pihole
  - configure dns1 & dns2
- argocd deploy external-dns
  - configure to use pihole as dns
- update with annotations to use external-dns & cert-manager:
  - argocd
  - vault
  - pihole
  - *from now on all ingress yaml to include annotations for external-dns & cert-manager
    - recommended: have annotations from the beginning, at this point they will start working
- argocd deploy keycloak
  - configure realm: create or import from backup
  - add secret 'default_oidc_client_secret': secret part of oidc client/secret
  - configure a user account (or configure federation via AD, openldap, etc...)
- deploy all other apps
  - oidc client_secret should come from external-secrets in all apps configured with oidc
    - this might require an init container for some apps
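As a sketch of the oidc piece, an ExternalSecret per app could materialize the client secret from vault; the store name, vault path, and keys below are all assumptions:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: keycloak-oidc-client            # hypothetical
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: vault-backend                 # assumed store pointing at vault
  target:
    name: default-oidc                  # the k8s Secret to create
  data:
  - secretKey: default_oidc_client_secret
    remoteRef:
      key: secret/oidc                  # assumed vault path
      property: client_secret
```

Apps that can’t read the secret from a mounted Secret directly are the ones that would need the init container mentioned above.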

- pvc storage for all clusters
- block storage can be used for vm disks (making for easy hotswap)
- upgrade to 2 10gb ports on each host system

wdc: (kubevirt in theory but think i'll stick w/ a vm)
- domain controller
- user management
- dhcp
- wds
- wsus using dev sqlserver & data stored on e drive