As I take a deep dive into Kubernetes, what I’m finding is that, though it is definitely a container management system, it can also be seen as a controller-driven YAML processing engine. Let me explain.
Kubernetes understands what a Deployment is and what a Service is; these are defined as YAML and loaded. Behind each of these object types is a controller that understands and acts on the objects defined in that YAML.
What is interesting about this is that we can implement our own controllers. For example, I could implement a controller that understands how to manage a tic-tac-toe game. That controller could also implement an AI that knows how to play the game. In the same way you can edit a Deployment, you could edit the game, and the Kubernetes infrastructure would respond to the change. Or a move could be another type recognized by the game controller, so you could create a move associated with a game in the same way you can create a Service associated with a Deployment.
You can imagine doing a ‘k get games’ and seeing the games being played listed out, or a ‘k describe game a123’ to get the details and status of a game.
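As a sketch of the idea, a custom type can be registered with a CustomResourceDefinition. Everything below (the group example.com, the Game kind) is hypothetical, not an existing project:

```yaml
# Hypothetical CRD registering a Game type; group and names are made up
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: games.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Game
    singular: game
    plural: games
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```

Once a CRD like this is applied and a controller is watching the type, ‘k get games’ and ‘k describe game a123’ work just like they do for built-in resources.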
It seems I’m not the only one who has started thinking along these lines. A quick Google search reveals Agones.
This is fascinating and gives me a lot of ideas on how I might reimplement my list processing server & generic game server within the Kubernetes framework.
My first Helm chart, a fun milestone. I used it to install the new Docker container I uploaded to quay.io this morning.
Nice feeling to give back to the open source community.
Now to automate:
* watch for WireGuard updates & release an updated Docker image
* watch for a CentOS 8 Stream update & release an updated Docker image
* watch for a Helm chart update & publish whatever is needed for those changes to be seen
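The common core of all three watchers is “compare the latest version or digest against the last one seen, and trigger a rebuild on change.” Below is a minimal bash sketch of that core; the state-file path and the skopeo command in the comment are assumptions, not a finished pipeline:

```shell
#!/usr/bin/env bash
# check_update STATE_FILE CURRENT_VERSION
# Prints "changed" (and records the new version) or "unchanged".
check_update() {
  local state_file=$1 current=$2 previous=""
  [ -f "$state_file" ] && previous=$(<"$state_file")
  if [ "$current" != "$previous" ]; then
    printf '%s\n' "$current" > "$state_file"
    echo "changed"
  else
    echo "unchanged"
  fi
}

# In a cron job, CURRENT_VERSION might come from something like:
#   skopeo inspect docker://quay.io/<user>/wireguard --format '{{.Digest}}'
# and a "changed" result would kick off the image build & push.
```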
But first, time to investigate and implement Longhorn.
The Internet Connection Sharing tab can be found in the Ethernet Properties of a network connection. It can be used to share the host’s connection with a Hyper-V virtual switch of the “Internal” network type, giving VMs on that switch Internet access via the Hyper-V server.
But what if you’d like to have more than one lab network setup in this way? PowerShell to the rescue:
If you implement infrastructure as code with something such as Ansible, you will be able to spin up a new VM and automatically set up its software.
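For instance, a minimal playbook sketch; the host group and package list here are placeholders, not from an actual setup:

```yaml
# Hypothetical minimal playbook: configure a fresh lab VM
- hosts: lab_vms
  become: true
  tasks:
    - name: Install base packages
      yum:
        name:
          - git
          - vim
        state: present
    - name: Ensure docker is running
      service:
        name: docker
        state: started
        enabled: true
```

With an inventory entry for the new VM, running ansible-playbook against this file brings the machine to the desired state with no manual steps.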
An interesting side effect of this is that you can wipe and reload a system running, say, System Center overnight, just before the trial period ends. Perhaps this is a gray area, getting around the intention of a trial period, but it is an interesting side effect nonetheless.
Using infrastructure as code requires that you back up everything, so things can be restored on demand. This means the home PC can also be reloaded at will, provided network drives or some other backup method are used, giving the fastest performance possible where a PC otherwise tends to slow down over time.
Modify script to work in an SELinux enabled environment
Modify script to work with multisite
Modify script to allow a larger upload size
Run Docker Compose
Configure WordPress for multisite
Setup DNS entries & modify sites to use full domain names
What is great about this Docker setup is that by stopping and then restarting Docker Compose, WordPress will automatically be updated if a new release has come out. With a high-availability setup, you can restart one WordPress instance at a time, allowing for no downtime during upgrades.
# install docker
sudo yum -y install docker
# add user who will be running docker to docker group
sudo usermod -aG docker travis
# enable docker to start at boot & start it now
sudo systemctl enable docker
sudo systemctl start docker
# install pip which will be used to install docker-compose
sudo yum -y install python-pip
sudo pip install docker-compose
# create directory to store docker compose configuration & change to it
mkdir -p docker/wordpress
cd docker/wordpress
# create uploads.ini file which is used to configure a larger upload size
cat > uploads.ini <<EOF
upload_max_filesize = 64M
post_max_size = 64M
memory_limit = 400M
file_uploads = On
max_execution_time = 180
EOF
# create docker-compose.yaml file
# original version: https://www.youtube.com/watch?v=pYhLEV-sRpY
cat > docker-compose.yaml
Notes on docker-compose.yaml:
1. MySQL is loaded via a container image. Data is stored in a local volume.
2. MySQL & WordPress interact via a local network, wpsite.
3. A multisite configuration requires port 80.
4. “privileged” must be used because we are running in an enforcing SELinux environment.
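The compose file itself isn’t reproduced above; below is a sketch consistent with notes 1–4. Image tags, credentials, and mount paths are my assumptions — adjust to taste:

```yaml
# Sketch of docker-compose.yaml matching notes 1-4; passwords are placeholders
version: '3'
services:
  db:
    image: mysql:5.7
    privileged: true              # note 4: enforcing SELinux
    volumes:
      - db_data:/var/lib/mysql    # note 1: data in a local volume
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: wordpress
    networks:
      - wpsite                    # note 2: shared local network
  wordpress:
    image: wordpress:latest
    depends_on:
      - db
    privileged: true              # note 4
    ports:
      - "80:80"                   # note 3: multisite requires port 80
    volumes:
      - ./html:/var/www/html
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: changeme
      WORDPRESS_DB_NAME: wordpress
    networks:
      - wpsite
networks:
  wpsite:
volumes:
  db_data:
```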
# create directory to store wordpress files
mkdir -p html
# Start up WordPress via docker-compose
docker-compose up -d
# Configure firewall
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --reload
Next a configuration file must be modified: ./html/wp-config-sample.php
Locate the string “That’s all, stop editing! …” and add the following lines before it:
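The lines themselves aren’t reproduced here; for a standard multisite setup, the line WordPress documents adding at that point is:

```php
/* Allow the Network Setup screen to appear under Tools */
define( 'WP_ALLOW_MULTISITE', true );
```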
After logging into the WordPress site for the first time, browse to Tools -> Network, choose subdomains or subfolders, and follow the directions.
Finally, you can use a full domain instead of a subdomain or subfolder: first create a site with a subdomain or subfolder; then, after configuring the domain in DNS, go back to the site and modify the URL to the domain. For example: wp-1.com/travisloyd -> www.travisloyd.xyz