
Kubernetes


Kubernetes is a tool for running containers and container constructs at scale. In Kubernetes, we distinguish between different "modules":

  • Namespace

  • Deployment

  • Pod

  • Persistent Volume & Persistent Volume Claim

  • Service

  • Ingress

Let's start with the first one:

Namespace

A namespace is a virtual boundary between different deployments, pods, volumes, services and ingress entries. You can use namespaces to "separate" everything from each other or to divide resources into different groups. Note that namespaces separate resources logically; by default they do not block network traffic between pods in different namespaces.

Deployment

A deployment is the definition of one or more containers to run, together with the number of replicas (identical copies of the pod, which can run on other nodes of the Kubernetes cluster). In the deployment we also define the volumes and how they are mounted into each container, based on a #PersistentVolumeClaim.
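The number of replicas mentioned above is set directly in the deployment's spec. A minimal sketch (the name `my-app` and the `nginx` image are placeholders, not part of the example below):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical name
spec:
  replicas: 3             # run three identical pods across the cluster
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx    # placeholder image

If `replicas` is omitted, as in the Heimdall example further down, Kubernetes defaults to a single replica.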

Pod

A pod is the "unit" or structure created from a deployment definition. You can also create pods manually, but in most cases it makes more sense to let the deployment and the Kubernetes cluster create them automatically.
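For comparison, a manually created pod is just its own small manifest, with no deployment around it. A sketch (name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: standalone-pod    # hypothetical name
spec:
  containers:
    - name: web
      image: nginx        # placeholder image

Such a standalone pod is not recreated if its node fails, which is one reason deployments are usually preferred.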

Persistent Volume & Persistent Volume Claim

This is a virtual persistent disk (it lives as long as the deployment or the #PersistentVolumeClaim exists), similar to a Docker volume, except that it is, or can be, distributed and accessible across the entire cluster. "By default" Kubernetes only offers "HostPath" volumes for PVCs. But thanks to a useful tool called Longhorn, we can create distributed volumes that are automatically replicated across the nodes of the cluster. The setup is as easy as with Docker volumes.

Service

A service is a definition through which we make a deployment, or rather the containers in that deployment, reachable. A service can also cover containers from different deployments if they carry a specific label, and replicas are covered automatically, so [[Kubernetes]] has a load-balancing system already built in. Alternatively, a service can publish containers on a port of one or more nodes via [[Network Address Translation]].
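Publishing containers on a node port, as described above, is done with a service of type NodePort. A sketch (names and port numbers are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport   # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app           # matches pods carrying this label
  ports:
    - port: 80            # port inside the cluster
      targetPort: 80      # port of the container
      nodePort: 30080     # port opened on every node (30000-32767 by default)

The service in the Heimdall example further down has no `type` field, so it defaults to ClusterIP and is only reachable inside the cluster, which is fine because the ingress controller sits in front of it.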

Ingress

This is an entry for the ingress controller that is already running on most [[Kubernetes]] clusters. In the instructions below for setting up a simple cluster, I used the [[Kubernetes]] distribution K3s, which ships with Traefik as its ingress controller. The ingress controller sits at the end of the chain (or the beginning, depending on your point of view): it receives all incoming network traffic and forwards it to the correct services.
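When several services run behind the same ingress controller, they are usually told apart by hostname. A sketch with made-up names (the hostname, ingress name and service name are all assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress       # hypothetical name
spec:
  rules:
    - host: app.example.com   # made-up hostname; matched against the Host header
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service  # hypothetical service name
                port:
                  number: 80

The Heimdall example at the end of this article omits the `host` field, so it matches requests for any hostname.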




Structure of a Kubernetes cluster

Kubernetes Structure

Each cluster always has at least one "control" node, or "master" node, while the number of "worker" nodes is variable, since "control" nodes can also act as "worker" nodes. Since we like to run our software separately from each other, we start by creating a namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: <namespace name>
  labels:
    name: <namespace name>

For our deployment to work, we need a Persistent Volume Claim; in this example it stores the configuration of Heimdall.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: heimdall-pvc
  namespace: <namespace name>
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi

Now that we have our Persistent Volume Claim, we can create our deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: heimdall
  namespace: <namespace name>
spec:
  selector:
    matchLabels:
      app: heimdall
  template:
    metadata:
      labels:
        app: heimdall
    spec:
      containers:
        - name: heimdall
          image: linuxserver/heimdall
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 80
          volumeMounts:
            - name: heimdall-vol
              mountPath: /config
      volumes:
        - name: heimdall-vol
          persistentVolumeClaim:
            claimName: heimdall-pvc

Now only the service and the ingress entry are missing. We start with the service:

apiVersion: v1
kind: Service
metadata:
  name: heimdall-service
  namespace: <namespace name>
spec:
  selector:
    app: heimdall
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80

And last but not least, the Ingress entry:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: heimdall
  namespace: <namespace name>
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: heimdall-service
                port:
                  number: 80

After we have applied all of these "modules", we should be able to reach Heimdall via the IP address of one of our nodes.