Kubernetes lets developers and sysadmins build large, interconnected systems of Docker containers spread across multiple nodes on a network. It is extremely useful and has become an essential part of the modern infrastructure stack.
To get started, install kubectl. On Ubuntu 18.04 LTS it can be done like so:
sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo \
apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo \
tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
On Arch Linux it can be done like so:
sudo pacman -S kubectl
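Whichever distribution you are on, a quick way to confirm the client installed correctly is to print its version:
kubectl version --client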
Rancher's rke tool is one of the fastest ways to set up a Kubernetes cluster on any machine. Download and install it using the following commands:
wget https://github.com/rancher/rke/releases/download/v0.3.0-rc9/rke_linux-amd64
sudo mv rke_linux-amd64 /usr/local/bin/rke
sudo chown root:root /usr/local/bin/rke
sudo chmod 0755 /usr/local/bin/rke
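With the binary in place, it should print its version when asked, which confirms it is executable and on your PATH:
rke --version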
rke connects to each node over SSH, so it needs an SSH key. Create a simple passphraseless one, and ensure you have an SSH server running on port 22:
ssh-keygen -f ~/.ssh/k8s_key -t ed25519
cat ~/.ssh/k8s_key.pub >> ~/.ssh/authorized_keys
sudo systemctl enable sshd
sudo systemctl restart sshd
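Before handing the key to rke, it is worth checking that key-based login actually works. A quick local test (assuming the node is the machine you are working on) looks like this; if it asks for a password, the key was not picked up:
ssh -i ~/.ssh/k8s_key $(whoami)@localhost echo "key-based login works"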
Create a /kubernetes folder, assign ownership of that folder to your user, and add a file called cluster.yml to it:
sudo mkdir /kubernetes/
sudo chown $(whoami): /kubernetes/
vim /kubernetes/cluster.yml
Fill that file with the below contents:
cluster_name: simple-k8s-cluster
network:
  plugin: flannel
ssh_key_path: /home/USERNAME/.ssh/k8s_key
ignore_docker_version: true
nodes:
  - address: 10.0.0.3
    internal_address: 10.0.0.3
    hostname_override: 10.0.0.3
    port: 22
    user: USERNAME
    role: [controlplane,worker,etcd]
services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h
  health_check:
    port: 6443
    request_line: GET /healthz HTTP/1.1
  kube-api:
    service_node_port_range: 30000-32767
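The address fields in the nodes section need to point at the machine's own IPv4 address. If you are not sure what that is, the iproute2 tools (installed by default on most distributions) can list the addresses assigned to each interface:
ip -4 addr show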
Replace USERNAME with the username of your Linux user, and replace 10.0.0.3 with the IP address of your machine's Ethernet interface. Then cd to the /kubernetes directory and run this command to spin up your Kubernetes cluster using rke:
rke up --config cluster.yml
The process is slow, but as long as no errors occur you will eventually end up with a working Kubernetes cluster.
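At this point rke will have written a kubeconfig named kube_config_cluster.yml next to cluster.yml. Point kubectl at it and confirm that the node registered:
kubectl --kubeconfig=/kubernetes/kube_config_cluster.yml get nodes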
Now, Kubernetes by itself isn't much more than Docker; however, a number of plugins and extensions exist to expand its functionality. One of them, MetalLB, provides easy load balancing of deployed pods and also lets you assign IP ranges (both public and private) to a given deployment.
Install MetalLB onto your Kubernetes cluster like so:
kubectl --kubeconfig=/kubernetes/kube_config_cluster.yml apply -f \
https://raw.githubusercontent.com/google/metallb/v0.8.1/manifests/metallb.yaml \
--namespace=metallb-system
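The MetalLB images can take a minute to pull; you can watch the controller and speaker pods come up with:
kubectl --kubeconfig=/kubernetes/kube_config_cluster.yml get pods --namespace=metallb-system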
With MetalLB installed, create a layer 2 configuration to specify which IP addresses can be used for deployments. This example assumes your router is set up to treat 10.0.0.128-10.0.0.253 as an empty, static range (nothing else on the network uses those addresses). Create the configuration like so:
mkdir /kubernetes/deployments/
vim /kubernetes/deployments/layer2.yml
Add the following content to that YAML file:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.0.128-10.0.0.253
Assuming that IP range is reachable from your Kubernetes cluster, add the configuration to the cluster like so:
kubectl --kubeconfig=/kubernetes/kube_config_cluster.yml apply -f \
/kubernetes/deployments/layer2.yml
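To double-check that the configuration landed where MetalLB expects it, read the ConfigMap back out of the metallb-system namespace:
kubectl --kubeconfig=/kubernetes/kube_config_cluster.yml get configmap config --namespace=metallb-system -o yaml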
If there were no errors, then the configuration was applied correctly. Nonetheless, consider testing it with a sample deployment. To do this, you'll need a pod and a service, both of which can be defined with Kubernetes YAML files.
Create a sample pod deployment like so:
vim /kubernetes/deployments/generic-pod-metallb.yml
Fill it with the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: generic-deployment
spec:
  selector:
    matchLabels:
      app: generic-deployment
  replicas: 2
  template:
    metadata:
      labels:
        app: generic-deployment
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        imagePullPolicy: Always
        ports:
        - containerPort: 80
Create a sample service like so:
vim /kubernetes/deployments/generic-service-for-pod-metallb.yml
Fill it with the following:
apiVersion: v1
kind: Service
metadata:
  name: generic-deployment
  annotations:
    metallb.universe.tf/address-pool: default
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: generic-deployment
  type: LoadBalancer
  loadBalancerIP: 10.0.0.129
The Deployment file creates two replicas of a pod running the stable nginx image, exposing port 80 on the internal cluster network. The Service file asks MetalLB to assign the deployment the IP address 10.0.0.129 and opens port 80 on that address, forwarding it to the pods.
Attempt to apply those two files to your cluster by running the following commands:
kubectl --kubeconfig=/kubernetes/kube_config_cluster.yml apply -f \
/kubernetes/deployments/generic-pod-metallb.yml
kubectl --kubeconfig=/kubernetes/kube_config_cluster.yml apply -f \
/kubernetes/deployments/generic-service-for-pod-metallb.yml
If no errors occurred, direct your browser to http://10.0.0.129 and check that the default nginx landing page appears. That's it: you now have a load-balanced website with an assigned IP address and two replicas.
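If the page does not load, the same checks can be made from the command line: list the service to see which external IP MetalLB assigned, list the pods to confirm both replicas are running, and fetch the page with curl:
kubectl --kubeconfig=/kubernetes/kube_config_cluster.yml get service generic-deployment
kubectl --kubeconfig=/kubernetes/kube_config_cluster.yml get pods
curl http://10.0.0.129/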