My Kubernetes Journey Part 4 – Deploying the application

This part of the journey marks over two months of reading my previously linked book and learning how to set up my own cluster and storage/network components. While I eventually want to add applications I use, like Pihole, Uptime Kuma, and Nextcloud, I thought a good first "easy" deployment would be Librespeed. While I could have run this all in one container, I wanted to explore the microservice architecture idea and have multiple pods running. Seeing this all finally work was super rewarding!!

Kubernetes concepts covered

While I know I have barely scratched the surface of all the possible Kubernetes components, I tried to take the main concepts I learned and apply them here. I will go over the few that I needed. The full Kubernetes MySQL database manifest is posted here, and I reference it in the sections below.

Services

Remember that pod IPs, and even the pods themselves, are ephemeral. A container could spin up on one host and get one IP, then crash and be recreated on another host in a different pod IP range. So, I asked myself, how would my web app have a consistent backend database to send data to?

This is where Services come into play: they give a consistent IP and DNS name that other pods can communicate with. In my head at least, this operates similarly to a load balancer, with a frontend IP (that stays the same for the duration of the deployment) and a backend that maps to whichever containers are currently running. Below is part of the full manifest I worked through, linked here.

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: speedtest-db

This simply tells Kubernetes to create a service named "mysql" on port 3306, and to choose backends which have a label of app: speedtest-db. This will make more sense in the Pod Definition section.

Config Map

Config maps have many uses. They can provide environment variables and, in my case, init configuration commands. As part of the Librespeed package, a MySQL template is published, which I used to create a table within a database where speed test results are stored. The challenge was that when the MySQL container first deployed, I needed this template to be applied so the database was ready to go on startup. This was accomplished via a config map and an init.sql definition. I'll only post part of the config map here, as the full file is in the repository linked above:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-initdb-config
data:
  init.sql: |
    use kptspeedtest;
    SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO";
    SET AUTOCOMMIT = 0;
    START TRANSACTION;
    SET time_zone = "+00:00";

The only addition to Librespeed's template was to first select the database "kptspeedtest". The rest is just a copy and paste of their template.

Persistent Volume Claim

In a previous post, I went over my setup for persistent storage in Kubernetes. I needed this so that when the MySQL container is restarted/moved/deleted, the data is still there. The PVC's job is to request a Persistent Volume for a container from a Storage Class. In my example the SC is already defined, so I created a PVC for a 20Gi storage block:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: speedtest-db-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: "freenas-nfs-csi"
spec:
  storageClassName: freenas-nfs-csi
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

Pod Definition

Here is where all the components came together for the pod definition. I’ll step through this as it is a longer manifest:

apiVersion: v1
kind: Pod
metadata:
  name: speedtest-db
  labels:
    app: speedtest-db

Here, I made a pod named "speedtest-db" and applied a label of app: speedtest-db. Remember that the service definition's selector used this same label? This is how the service knows to target this pod.

spec:
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: speedtest-db-pvc
  - name: mysql-initdb
    configMap:
      name: mysql-initdb-config

Next, under spec.volumes, I first associated the PVC, referencing the PVC name from above, and then the config map by its name. Then come the container definitions:

  containers:
  - name: speedtest-db
    image: docker.io/mysql:latest
    ports:
    - containerPort: 3306
    env:
    - name: MYSQL_DATABASE
      value: "kptspeedtest"
    - name: MYSQL_USER
      value: "speedtest"
    - name: MYSQL_PASSWORD
      value: "speedtest"
    - name: MYSQL_ROOT_PASSWORD
      value: "speedtest"
    volumeMounts:
      - mountPath: /var/lib/mysql
        name: data
      - name: mysql-initdb
        mountPath: /docker-entrypoint-initdb.d

Then, I gave the definition of the image/ports/environment variables and volumeMounts. Note: for a more secure/traditional setup, you would most likely supply these environment variables from Secrets or ConfigMaps rather than hard-coding the values.

The volumeMounts mount the PV under /var/lib/mysql using the data volume name, and also provide the initdb config map created earlier to prepare the database.
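As noted above, credentials really belong in a Secret instead of plain environment variables. A minimal sketch of what that could look like, assuming a hypothetical Secret named mysql-credentials (not part of my actual manifest):

apiVersion: v1
kind: Secret
metadata:
  name: mysql-credentials
type: Opaque
stringData:
  # stringData lets you write values in plain text; Kubernetes stores them base64-encoded
  MYSQL_PASSWORD: "speedtest"
  MYSQL_ROOT_PASSWORD: "speedtest"

The container's env entries would then reference the Secret keys instead of literal values, for example:

    env:
    - name: MYSQL_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysql-credentials
          key: MYSQL_PASSWORD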

Speedtest Web Server Deployment

Again, the full manifest is linked here. This example is a Deployment, which controls the lifecycle, scaling, and scaling down of pods. Technically it's not needed for a single pod, but I was throwing in some concepts I had previously learned.
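For context, the skeleton of a Deployment looks roughly like this (a simplified sketch rather than my exact file; the pod template portion of my real manifest is shown a little further down):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: speedtest-deploy
spec:
  # how many copies of the pod the controller should keep running (1 is a guess for illustration)
  replicas: 1
  selector:
    # the Deployment manages pods whose labels match this selector
    matchLabels:
      app: speedtest-app
  template:
    metadata:
      labels:
        app: speedtest-app
    spec:
      containers:
      - name: speedtest-app
        image: git.internal.keepingpacetech.com/kweevuss/speedtest:latest
        ports:
        - containerPort: 80

The app: speedtest-app label is what both the Deployment selector and the load balancer service below match on.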

Load Balancer Service

Just like I needed a consistent IP to reach the backend MySQL, I also needed a consistent, externally accessible entry point into the pods.

apiVersion: v1
kind: Service
metadata:
  name: speedtest-lb
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: speedtest-app
  externalIPs:
  - 192.168.66.251
---

This creates a service of type LoadBalancer on port 80, using a defined external IP. This IP is then advertised by kube-router via BGP to my network to allow routing to the pod. I manually specified the IP for now, but I hope to add logic in the future that ties into my IPAM system and assigns the next available IP automatically.

I will not go over the rest of the deployment part of the file, as those are concepts I am still testing and learning about.

For the container, I simply defined the image, a label, and exposed it on port 80:

template:
    metadata:
      labels:
        app: speedtest-app
    spec:
      containers:
      - name: speedtest-app
        image: git.internal.keepingpacetech.com/kweevuss/speedtest:latest
        ports:
        - containerPort: 80

Deployment

Now it was time to finally give this a go!

Creating the speedtest-db container:

kubectl apply -f speedtest-db-storage-pod.yaml 

persistentvolumeclaim/speedtest-db-pvc created
pod/speedtest-db created
configmap/mysql-initdb-config created
service/mysql created

Several components were created: a PVC, the pod itself, a config map, and the mysql service. I verified with some show commands:

kubectl get pvc
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
speedtest-db-pvc   Bound    pvc-54245a26-9fbe-4a8f-952e-fcdd6a25488b   20Gi       RWO            freenas-nfs-csi   <unset>                 63s

kubectl get pv | grep Bound
pvc-54245a26-9fbe-4a8f-952e-fcdd6a25488b   20Gi       RWO            Retain           Bound      default/speedtest-db-pvc   freenas-nfs-csi   <unset>                          90s
kubectl get svc
NAME         TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)    AGE
mysql        ClusterIP   192.168.100.145   <none>        3306/TCP   115s

Above, I saw everything I expected to be created. The important piece is the mysql service, which in my case received a cluster IP of 192.168.100.145.

To view the container's progress, I ran kubectl get pods and watched it start. One thing I have learned is that the init config can take some time to process, which you can follow with kubectl describe and kubectl logs.

kubectl describe pod speedtest-db

Events:
  Type     Reason                  Age                    From                     Message
  ----     ------                  ----                   ----                     -------
  Warning  FailedScheduling        2m32s (x2 over 2m42s)  default-scheduler        0/4 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.
  Normal   Scheduled               2m30s                  default-scheduler        Successfully assigned default/speedtest-db to prdkptkubeworker04
  Normal   SuccessfulAttachVolume  2m29s                  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-54245a26-9fbe-4a8f-952e-fcdd6a25488b"
  Normal   Pulling                 2m17s                  kubelet                  Pulling image "docker.io/mysql:latest"
  Normal   Pulled                  2m16s                  kubelet                  Successfully pulled image "docker.io/mysql:latest" in 780ms (780ms including waiting). Image size: 601728779 bytes.
  Normal   Created                 2m15s                  kubelet                  Created container speedtest-db
  Normal   Started                 2m15s                  kubelet                  Started container speedtest-db

Through the logs, you can see the different stages of the MySQL entrypoint. It first starts a temporary server, runs the init commands passed in via the configMap, and then restarts for normal operation.

2024-11-04 00:14:47+00:00 [Note] [Entrypoint]: Creating database kptspeedtest
2024-11-04 00:14:47+00:00 [Note] [Entrypoint]: Creating user speedtest
2024-11-04 00:14:47+00:00 [Note] [Entrypoint]: Giving user speedtest access to schema kptspeedtest

2024-11-04 00:14:47+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/init.sql

Finally:

2024-11-04 00:14:51+00:00 [Note] [Entrypoint]: MySQL init process done. Ready for start up.

2024-11-04T00:14:58.424295Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '9.1.0'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server - GPL.

I needed to verify that the service was bound to this container:

kubectl describe svc mysql
Name:                     mysql
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=speedtest-db
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       192.168.100.145
IPs:                      192.168.100.145
Port:                     <unset>  3306/TCP
TargetPort:               3306/TCP
Endpoints:                10.200.1.34:3306
Session Affinity:         None
Internal Traffic Policy:  Cluster
Events:                   <none>

Seeing "Endpoints" filled in with the IP of this pod is a good sign: traffic sent to the DNS name "mysql" will be forwarded to that backend endpoint.

I created the web server container:

kubectl apply -f speedtest-deploy.yaml 
service/speedtest-lb created
deployment.apps/speedtest-deploy created

This one was a little simpler, with only two components created: the load balancer service and the deployment.

Now I could go to the load balancer's external IP and see it was working!

From my router, I saw the external IP was advertised via BGP. But since my router does not support ECMP in overlay VPRN services, only one path is active :(. Otherwise it could load balance across the load balancer service on any of the three worker nodes.

*A:KPTPE01# show router 101 bgp routes 192.168.66.251/32 
===============================================================================
 BGP Router ID:10.11.0.2        AS:64601       Local AS:64601      
===============================================================================
 Legend -
 Status codes  : u - used, s - suppressed, h - history, d - decayed, * - valid
                 l - leaked, x - stale, > - best, b - backup, p - purge
 Origin codes  : i - IGP, e - EGP, ? - incomplete

===============================================================================
BGP IPv4 Routes
===============================================================================
Flag  Network                                            LocalPref   MED
      Nexthop (Router)                                   Path-Id     Label
      As-Path                                                        
-------------------------------------------------------------------------------
u*>i  192.168.66.251/32                                  None        None
      192.168.66.171                                     None        -
      65170                                                           
*i    192.168.66.251/32                                  None        None
      192.168.66.172                                     None        -
      65170                                                           
*i    192.168.66.251/32                                  None        None
      192.168.66.173                                     None        -
      65170                                                           

Exploring the database connection and data

Then, I attached to the web server pod to try to connect to the database and watch results get loaded. First, attaching to the pod directly:

kubectl exec -it "pod_name" -- /bin/bash

My pod was named speedtest-deploy-6bcbdfc5b7-5b8l5

The Docker image build installed mysql-client, so I could try connecting to the database from inside the pod:

mysql -u speedtest -pspeedtest -h mysql


Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 

I was in! I simply connected with the login details that were passed to the database via environment variables, using the service name "mysql" as the host, just like the web server's config does.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| kptspeedtest       |
| performance_schema |
+--------------------+
3 rows in set (0.04 sec)

mysql> use kptspeedtest;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+------------------------+
| Tables_in_kptspeedtest |
+------------------------+
| speedtest_users        |
+------------------------+

mysql> select * from speedtest_users;
Empty set (0.00 sec)

At this point, I saw the database named "kptspeedtest" with the table created from Librespeed's MySQL template. Since I had not run any tests yet, there was no data.

After running a speed test, the results are displayed on screen. The idea behind the application is that you can copy the results URL and send it to someone else to view in their browser. When I ran the same query again, I saw data in the database!

mysql> select * from speedtest_users;
+----+---------------------+----------------+------------------------------------------------------------------------+-------+----------------------------------------------------------------------------------+----------------+---------+---------+------+--------+------+
| id | timestamp           | ip             | ispinfo                                                                | extra | ua                                                                               | lang           | dl      | ul      | ping | jitter | log  |
+----+---------------------+----------------+------------------------------------------------------------------------+-------+----------------------------------------------------------------------------------+----------------+---------+---------+------+--------+------+
|  1 | 2024-11-04 00:26:38 | 192.168.66.171 | {"processedString":"10.200.1.1 - private IPv4 access","rawIspInfo":""} |       | Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:132.0) Gecko/20100101 Firefox/132.0 | en-US,en;q=0.5 | 1198.82 | 1976.27 | 2.00 | 0.88   |      |
+----+---------------------+----------------+------------------------------------------------------------------------+-------+----------------------------------------------------------------------------------+----------------+---------+---------+------+--------+------+

I know it will be hard to read, but you can see id=1, and the client information along with the upload/download/jitter/ping etc.

Seeing this work for the first time felt like a great accomplishment. Like I have said throughout this journey, I know there are enhancements to be made, like using Secrets for credentials. My latest idea is to use init containers to check that the speedtest-db pod is started and the init commands have all run successfully before the web server pod starts.
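A rough sketch of that idea, added to the web server's pod template (assuming a simple busybox image for the check; this only verifies the mysql service accepts TCP connections, not that the init SQL finished):

    spec:
      initContainers:
      - name: wait-for-mysql
        image: busybox:latest
        # keep retrying until the mysql service answers on port 3306
        command: ['sh', '-c', 'until nc -z mysql 3306; do echo waiting for mysql; sleep 5; done']
      containers:
      - name: speedtest-app
        image: git.internal.keepingpacetech.com/kweevuss/speedtest:latest
        ports:
        - containerPort: 80

A more thorough check could use a MySQL client image and query the speedtest_users table before letting the web server start.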

I hope if you stumbled upon this, you found it useful and that it gave you hope to build your own cluster!

My Kubernetes Journey Part 3 – Building Docker Containers

For many years, I have run several containers within my Homelab, such as Uptime Kuma, Librespeed, and Pihole. One thing that always intrigued me was building my own container. Now that I was attempting my first Kubernetes deployment, I thought this would be a good time to understand how to do the process from the beginning.

First, I installed the Docker client for my OS. For desktop OSes, this is branded as Docker Desktop.

Container Registry

Before you create a container image, you must decide where to store it, because Docker and Kubernetes expect images to come from a container registry. For my learning and own development, I use a self-hosted Gitea instance.

There were not really any prerequisites, other than the Gitea instance needing HTTPS enabled for all the docker commands to work. Being lazy and only using it locally, I had mine on HTTP, but this finally pushed me to put Gitea behind my HAProxy instance with a valid SSL frontend. The docs for the Gitea container registry are here, and once the image is built below, I will show the instructions to push it to the registry.

Dockerfile

A Dockerfile is a list of commands/instructions used to build a Docker image. It is executed top down, and each instruction builds a layer of the final image. Of course, Docker themselves can explain it best.

Fair warning: I am sure my example is neither efficient nor lightweight, but I wanted to start with something I was most familiar with. As my learning grows, I'm interested in shaping these into smaller images with more efficient commands.

My Dockerfile is linked on my personal GitHub. I will step through the important parts:

First, I chose Ubuntu 22.04 as the base, as it is the distribution I am most familiar with.

FROM ubuntu:22.04

Next, I refreshed the package repositories and installed the various packages needed for Librespeed to work with a MySQL backend. DEBIAN_FRONTEND=noninteractive instructs the install to run non-interactively, as tzdata otherwise asks for a timezone and the command never finishes:

RUN apt-get -y update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install tzdata apache2 curl php php-image-text php-gd git mysql-client inetutils-ping php-mysql

Then, per the Librespeed maintainers, there were several configuration changes to make to PHP, followed by cloning the package into the web root:

#Set post max size to 0
RUN sed -i 's/post_max_size = 8M/post_max_size = 0/' /etc/php/8.1/apache2/php.ini 

#Enable Extensions
RUN sed -i 's/;extension=gd/extension=gd/' /etc/php/8.1/apache2/php.ini

RUN sed -i 's/;extension=pdo_mysql/extension=pdo_mysql/' /etc/php/8.1/apache2/php.ini

#Prep directory for web root
RUN rm -rf /var/www/html/index.html 

#Clone
RUN git clone https://github.com/librespeed/speedtest.git /var/www/html

Next, as I wanted to save results to permanent storage, I changed the telemetry_settings.php file with the various db username/password, db name:

RUN sed -i 's/USERNAME/speedtest/' /var/www/html/results/telemetry_settings.php

RUN sed -i 's/PASSWORD/speedtest/' /var/www/html/results/telemetry_settings.php

RUN sed -i 's/DB_HOSTNAME/mysql/' /var/www/html/results/telemetry_settings.php

RUN sed -i 's/DB_NAME/kptspeedtest/' /var/www/html/results/telemetry_settings.php

If not clear, my values are: username = speedtest, password = speedtest, db hostname = mysql (important note: this is the Kubernetes service name I create later!), and db_name = kptspeedtest.

Skipping some of the basic cleanup/permission settings, which you can see in the file, at the end I instructed the container to listen on port 80 and to run the Apache server in the foreground so the container does not just stop after it starts. There is also a health check that simply verifies the Apache service is responding.

EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]
HEALTHCHECK --timeout=3s \
  CMD curl -f http://localhost/ || exit 1

Building the image

With the file saved as “Dockerfile” I could build the image. First, I logged in to the container registry (in my case my Gitea instance):

docker login git."your_domain_name".com

Next, it would normally be as simple as running docker build -t with the registry, owner, and image name, such as:

# build an image with tag
docker build -t {registry}/{owner}/{image}:{tag} .

I ran into some errors with the next step of uploading, and after searching around found that adding the flag --provenance=false omits certain build metadata and makes the image compatible with Gitea. Depending on the container registry, it may or may not be needed. Full example:

docker build -t git."domain_name"/kweevuss/speedtest:latest . --provenance=false

Since the file is named Dockerfile, docker build finds it automatically in the current directory. My user is "kweevuss", the image is called speedtest, and it uses the "latest" tag.

Then I pushed the image to the registry:

docker push git.internal.keepingpacetech.com/kweevuss/speedtest:latest

Within Gitea then, I could see the image related to my user:

Now on any docker install, I simply pulled the image as:

docker pull git.internal.keepingpacetech.com/kweevuss/speedtest:latest

Or, instead, in a Kubernetes deployment/pod manifest, simply point to the image:

spec:
  containers:
  - name: prdkptspeedtestkube
    image: git."domain_name"/kweevuss/speedtest:latest
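One thing to note: if the registry requires authentication to pull, the pod spec also needs an image pull secret. A minimal sketch, assuming a hypothetical secret named gitea-registry created beforehand with kubectl create secret docker-registry (not something shown in my manifests):

spec:
  # the kubelet presents this secret when pulling from the private registry
  imagePullSecrets:
  - name: gitea-registry
  containers:
  - name: prdkptspeedtestkube
    image: git."domain_name"/kweevuss/speedtest:latest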

Next installment will finally be setting up this custom image along with another container, mysql, to store the data!

My Kubernetes Journey – Part 2 – Storage Setup

In this installment, I will go over how I connected external storage to Kubernetes for persistent storage. As with any container platform, containers are meant to be stateless and should not store data internally. So if persistent data is needed, it has to be stored externally.

As mentioned in my first post, my reference book for learning Kubernetes did touch on storage, but limited it to cloud providers, which wasn't as helpful for me because, of course, in the spirit of homelab we want to self-host! Luckily there are some great projects out there that we can utilize!

I use TrueNAS Core for my bulk data and VM iSCSI storage. I was pleasantly surprised to find that TrueNAS supports the Container Storage Interface (CSI), so I cover this method below.

Installing CSI plugin

I found this great project, democratic-csi. The readme has great steps and examples using TrueNAS, so I will not duplicate them here just to re-write the documentation. I am personally using NFS as it seemed more straightforward, and iSCSI is already the backend for all of my VM storage from Proxmox; I would rather not modify that configuration extensively and risk such an important piece of the lab.

First, I configured TrueNAS with the necessary SSH/API and NFS settings and ran the helm install:

helm upgrade --install --create-namespace --values freenas-nfs.yaml --namespace democratic-csi zfs-nfs democratic-csi/democratic-csi

My example freenas-nfs.yaml file is below:

csiDriver:
  name: "org.democratic-csi.nfs"

storageClasses:
- name: freenas-nfs-csi
  defaultClass: false
  reclaimPolicy: Retain
  volumeBindingMode: Immediate
  allowVolumeExpansion: true
  parameters:
    fsType: nfs
      
  mountOptions:
  - noatime
  - nfsvers=4
  secrets:
    provisioner-secret:
    controller-publish-secret:
    node-stage-secret:
    node-publish-secret:
    controller-expand-secret:

driver:
  config:
    driver: freenas-nfs
    instance_id:
    httpConnection:
      protocol: http
      host: 192.168.19.3
      port: 80
      # This is the API key that we generated previously
      apiKey: 1-KEY HERE
      username: k8stg
      allowInsecure: true
      apiVersion: 2
    sshConnection:
      host: 192.168.19.3
      port: 22
      username: root
      # This is the SSH key that we generated for passwordless authentication
      privateKey: |
        -----BEGIN OPENSSH PRIVATE KEY-----
        KEY HERE
        -----END OPENSSH PRIVATE KEY-----
    zfs:
      # Make sure to use the storage pool that was created previously
      datasetParentName: ZFS_POOL/k8-hdd-storage/vols
      detachedSnapshotsDatasetParentName: ZFS_POOL/k8-hdd-storage/snaps
      datasetEnableQuotas: true
      datasetEnableReservation: false
      datasetPermissionsMode: "0777"
      datasetPermissionsUser: root
      datasetPermissionsGroup: wheel
    nfs:
      shareHost: 192.168.19.3
      shareAlldirs: false
      shareAllowedHosts: []
      shareAllowedNetworks: []
      shareMaprootUser: root
      shareMaprootGroup: wheel
      shareMapallUser: ""
      shareMapallGroup: ""

The above file, I felt, is very well documented by the package maintainers. I had to input the API/SSH keys, and the important piece is the dataset information:

datasetParentName: ZFS_POOL/k8-hdd-storage/vols AND
detachedSnapshotsDatasetParentName: ZFS_POOL/k8-hdd-storage/snaps

This depends on what I had set up in TrueNAS. My pool looks like this:

Testing Storage

With the helm command run, I saw a “Storage Class” created in Kubernetes:

The name comes from the yaml file above. The Kubernetes Storage Class is the foundation; its job is to automate storage provisioning for containers that request it. The related objects have specific names like Persistent Volume Claims and Persistent Volumes, which I will get to. The Storage Class essentially uses a CSI plugin (which is an API) to talk to external storage systems and provision storage automatically. This way Kubernetes has a consistent way to create storage no matter the platform, whether it is Azure/AWS/TrueNAS etc.
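For reference, the StorageClass the helm chart created from my values looks roughly like this (a simplified sketch reconstructed from the values file, not a dump of the live object):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: freenas-nfs-csi
# the provisioner ties this class to the democratic-csi driver name from the values file
provisioner: org.democratic-csi.nfs
reclaimPolicy: Retain
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  fsType: nfs
mountOptions:
- noatime
- nfsvers=4

Any PVC that names this storage class is handed to the democratic-csi controller, which then provisions an NFS-backed volume on TrueNAS.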

Now, to get to the interesting part: we can create a "Persistent Volume Claim" (PVC). The PVC's job is to request a new Persistent Volume from a Storage Class. Hopefully an example will help:

cat create_pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-test
  annotations:
    volume.beta.kubernetes.io/storage-class: "freenas-nfs-csi"
spec:
  storageClassName: freenas-nfs-test
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi

The above yaml file is essentially asking for 500Mi of storage from the storage class named "freenas-nfs-test".

I applied this with the normal “kubectl apply -f create_pvc.yaml”

With this applied, I saw it created, and once completed it was in a bound state:

Now to use this:

cat test-pv.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: volpod
spec:
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nfs-pvc-test
  containers:
  - name: ubuntu-ctr
    image: ubuntu:latest
    command:
    - /bin/bash
    - "-c"
    - "sleep 60m"
    volumeMounts:
    - mountPath: /data
      name: data

In this pod, I created a volume under spec.volumes referencing the PVC name from the last step. Then, under spec.containers.volumeMounts, I gave the container a mount path that maps to this volume.

In TrueNAS, at this point, I saw a volume created in the dataset that matches the volume ID displayed in Kubernetes.

Attaching to the pod:

kubectl exec -it volpod   -- bash

Inside the container now, I navigated to the /data directory and created a test file:

root@volpod:/# cd /data/
root@volpod:/data# ls
root@volpod:/data# touch test.txt
root@volpod:/data# ls -l
total 1
-rw-r--r-- 1 root root 0 Nov  3 04:11 test.txt

Just to see this in action, SSHing into the TrueNAS server and browsing to the dataset, we can see the file!

freenas# pwd
/mnt/ZFS_POOL/k8-hdd-storage/vols/pvc-dede93ea-d0cf-4bd7-8500-d052ce336c39
freenas# ls -l
total 1
-rw-r--r--  1 root  wheel  0 Nov  3 00:11 test.txt

In the next installment, I will go over how I created my container images. My initial idea was to use Librespeed and a second container running MySQL for persistent results storage. Having the above completed gives a starting point for any persistent data storage needs!

My Kubernetes Journey – Part 1 – Cluster Setup

In this first part, my goal is to piece together the documentation I found for the cluster setup and networking.

A must-read is the official documentation. I thought it was great, but be prepared for a lot of reading; it's not a one-command install by any means. I'm sure there are plenty of resources on how to automate this turn-up (which is what all the big cloud providers do for you), but I wasn't interested in that, so it won't be covered in these Kubernetes posts.

I ran into two confusing topics on my first install of Kubernetes: the container runtime (and its Container Runtime Interface, CRI) and the network plugin install. I will mostly cover the network plugin install below in case it helps others.

First I started out using my automated VM builds (you can find that post here) to build four Ubuntu 22.04 VMs: one controller and three worker nodes.

As you'll see if you dive into the prerequisites for kubeadm, you have to install a container runtime. I will blame my first failure with containerd on simply not knowing what I was doing; on my second attempt I tried Docker Engine, and that worked. I was able to follow the Kubernetes instructions without issue.

Once the Kubernetes instructions were followed for the container runtime, the Kubernetes packages could be installed on the control node:

sudo apt-get install -y kubelet kubeadm kubectl

Now it's time to bootstrap the cluster. If you're using this post as a step-by-step guide, I would suggest coming back once the install is done and the cluster is up to read more about what kubeadm init does, as it is intriguing.



kubeadm init --pod-network-cidr 10.200.0.0/16 --cri-socket unix:///var/run/cri-dockerd.sock --service-cidr 192.168.100.0/22

Dissecting what is above:

  • --pod-network-cidr – this is an unused CIDR range that I had available. By default it is not exposed outside the worker nodes, but it can be. Kubernetes assumes a /24 per worker node out of this space. It is something I would like to investigate changing, but I ran into complications, and instead of debugging I just accepted using a bigger space for growth.
  • --cri-socket – this instructs the setup process to use Docker Engine. My understanding is that Kubernetes now defaults to containerd, and if you use that CRI, this flag is not needed.
  • --service-cidr – I again decided to pick a dedicated range, as the network plugin I used can announce these via BGP, and I wanted a range that was free on my network. I cover the networking piece more below.
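As an aside, the same settings can also be expressed in a kubeadm config file instead of flags. A rough, untested sketch of what that could look like for the values above (I used the flags, so this is illustrative only):

# the exact apiVersion may differ depending on your kubeadm version
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  # point kubeadm at the cri-dockerd socket (only needed when not using the default containerd)
  criSocket: unix:///var/run/cri-dockerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.200.0.0/16
  serviceSubnet: 192.168.100.0/22

This would be passed in with kubeadm init --config "path_to_config_file" instead of the individual flags.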

At the end of this init process, it gave me a kubeadm join command, which includes a token and other info that the worker nodes use to join the controller.

kubeadm join 192.168.66.165:6443 --token <token> --discovery-token-ca-cert-hash <cert> --cri-socket unix:///var/run/cri-dockerd.sock

At this point, running kubectl get nodes showed the worker nodes, but none were Ready until a network plugin was running and configured.

Network Plugin

I tried several network plugin projects and ended up landing on kube-router. It really seemed to meet my end goal of being able to advertise different services or pods via BGP into my network.

I used this example yaml file from the project page and only had to make slight modifications to define the router to peer with.

For the "kube-router" container, in the spec args section, I defined the peer router IPs and ASN information. For example:

 containers:
      - name: kube-router
        image: docker.io/cloudnativelabs/kube-router
        imagePullPolicy: Always
        args:
        - --run-router=true
        - --run-firewall=true
        - --run-service-proxy=true
        - --bgp-graceful-restart=true
        - --kubeconfig=/var/lib/kube-router/kubeconfig
        - --advertise-cluster-ip=true
        - --advertise-external-ip=true
        - --cluster-asn=65170
        - --peer-router-ips=192.168.66.129
        - --peer-router-asns=64601

I made sure to adjust these settings to fit my environment. You can decide whether you want cluster IPs and external IPs advertised. I enabled both, but with more understanding, I only envision needing external IPs (for load balancer services, for example) to be advertised.

I ran the deployment with:

kubectl apply -f "path_to_yaml_file"

After this, I saw several containers being made:

The above containers were all running, and when I looked at the route table I saw routes installed for the pod network CIDRs pointing to each host:

ip route
default via 192.168.66.129 dev eth0 proto static 
10.200.1.0/24 via 192.168.66.171 dev eth0 proto 17 
10.200.2.0/24 via 192.168.66.172 dev eth0 proto 17 
10.200.3.0/24 via 192.168.66.173 dev eth0 proto 17 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.66.128/25 dev eth0 proto kernel scope link src 192.168.66.170

Check out kube-router's documentation for more info, but essentially the kube-router containers form BGP peerings as worker nodes join the cluster, and create routes so pods on different worker nodes can communicate.

I also noticed the coredns containers started, and "kubectl get nodes" showed all the nodes in a Ready state:

kubectl get nodes
NAME                  STATUS   ROLES           AGE   VERSION
prdkptkubecontrol02   Ready    control-plane   16d   v1.31.1
prdkptkubeworker04    Ready    <none>          16d   v1.31.1
prdkptkubeworker05    Ready    <none>          16d   v1.31.1
prdkptkubeworker06    Ready    <none>          16d   v1.31.1

At this point I had a working Kubernetes cluster!

My Kubernetes Journey

In this post, I want to document my learning from knowing nothing to deploying a multi-node Kubernetes cluster that is currently running a Librespeed test server with a MySQL backend.

A recent product release from Nokia, Event Driven Automation (EDA), which is built around many Kubernetes concepts for network automation, piqued my interest in exploring Kubernetes itself. Although the tool does not require any Kubernetes knowledge (the software does run on a K8s cluster), I wondered whether learning it would be beneficial. In short, my purpose in learning Kubernetes was to understand the design philosophy and how to support an EDA deployment.

Disclaimer: I am a network engineer, not a DevOps master, so I only slightly know what I'm doing in this space and I have a lot to learn to be more efficient, but maybe others just entering the space can find some ideas in my journey.

I would not be where I am today without The Kubernetes Book: 2024 Edition by Nigel Poulton. I thought it was a great introduction to and explanation of the concepts. The only thing I wanted that it didn't cover was setting up a Kubernetes cluster from scratch, but the official Kubernetes documentation filled that gap.

The following parts are listed below, where I try to go from 0 to a deployed Kubernetes cluster, all self-hosted in my homelab!

Part 1

Part 2

Part 3

Part 4