Monday, 17 September 2018

Local Kubernetes Cluster on CentOS7 with Vagrant and VirtualBox on your Windows Laptop Part I

What this post is about

This post is about setting up a real Kubernetes cluster, consisting of one master node and as many worker nodes as you like (or your machine can spin up), within a local network on your laptop.

You can easily spin up a Kubernetes cluster with Minikube on your machine, but maybe you prefer something a little closer to the real thing, like running each node on a separate VM connected via a private network within your host.

I promise that setting up a cluster consisting of one master node and two worker nodes will be as easy as using Minikube.

This post consists of three parts:
  1. Set up the cluster "out of the box" (this post)
  2. Explaining the Vagrantfiles for master and workers in detail
  3. Customizing the Vagrantfiles 

What you need up front

I'm doing this on my Windows 10 machine using VirtualBox for spinning up the VMs that constitute the cluster nodes and Vagrant to automate VirtualBox.
All nodes will be running CentOS 7.

On your Windows machine, you need virtualization software running; I'm using VirtualBox.
Additionally, you need to have Vagrant installed.

In case you do not have these installed already, they are really easy to get: go to their respective download sites, download the installers, and install VirtualBox first, then Vagrant. I've been using this combination for years and have never encountered any serious problems.

Why Vagrant?

Using Vagrant, we can automate the process of provisioning the nodes.
Once we have created the "Vagrantfile" for the master node, a single command, "vagrant up", will spin up a VM running the master.
Each worker node we want to add has its own, even less complex Vagrantfile in a separate directory and can be started with another "vagrant up".

All provisioning is done within the Vagrantfiles for master and workers respectively.
Every step performed there will be explained below.

Part I: Setting Up The Cluster "As Is"

Check Your Environment

As mentioned you need VirtualBox (or some other virtualization tool) and Vagrant running on your machine.
To check your environment, open a shell (I'm using Git Bash in all examples) and type "vagrant version":
Check Vagrant Version

To check your VirtualBox just open the GUI.
About VirtualBox
You can use "Check for Updates" in order to make sure you are on the latest release.
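If you prefer staying in the shell for both checks, a small helper can confirm the tools are on your PATH (a sketch; the function name is just for illustration, and "VBoxManage" is VirtualBox's command-line front end):

```shell
# Hypothetical helper: report whether the required CLIs are on the PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: NOT FOUND - install it before continuing"
  fi
}

check_tool vagrant      # Vagrant's CLI
check_tool VBoxManage   # VirtualBox's command-line front end
```

If both report "found", you are good to go.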

Get the Project

Open your Git Bash and create a directory for the project.
For example, let's create the directory "mycluster":
# create the "mycluster" directory:
$ mkdir /c/mycluster
$ cd /c/mycluster

Clone the project from GitHub.
This will get you the directories for the master node and two worker nodes, containing the respective Vagrantfiles.
# clone from GitHub:
$ git clone https://github.com/vkoster/k8s-local.git
Cloning into 'k8s-local'...
remote: Counting objects: 29, done.
remote: Compressing objects: 100% (14/14), done.
remote: Total 29 (delta 10), reused 29 (delta 10), pack-reused 0
Unpacking objects: 100% (29/29), done.

Change into the "k8s-local" directory and check what you got:
# see what you got:
$ ls -al
total 24
drwxr-xr-x 1 vkoster 1049089 0 Sep 15 23:49 ./
drwxr-xr-x 1 vkoster 1049089 0 Sep 15 23:49 ../
drwxr-xr-x 1 vkoster 1049089 0 Sep 15 23:49 k8s-local/

$ cd k8s-local
$ ls -al
total 14
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 ./
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 ../
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 .git/
-rw-r--r-- 1 vkoster 1049089  96 Sep 15 23:49 .gitignore
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 cl2master/
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 cl2nfs/
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 cl2node1/
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 cl2node2/
-rw-r--r-- 1 vkoster 1049089 897 Sep 15 23:49 README.md
-rw-r--r-- 1 vkoster 1049089  55 Sep 15 23:49 TODO.md

We are interested in these directories:

  • cl2master
    this is where the master node will be created
  • cl2node1 and cl2node2
    this is where the worker nodes will be created
(Ignore the cl2nfs directory for now. It's part of another post.)

Create the Master Node

Let's create the master node. Change into the master node directory and do the "vagrant up":
# creating the master node:
$ cd cl2master
$ vagrant up
...
*** a lot of terminal output ***
...
    default:
    default: Your Kubernetes master has initialized successfully!
    default:
    default: To start using your cluster, you need to run the following as a regular user:
    default:
    default:   mkdir -p $HOME/.kube
    default:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    default:   sudo chown $(id -u):$(id -g) $HOME/.kube/config
    default:
    default: You should now deploy a pod network to the cluster.
    default: Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    default:   https://kubernetes.io/docs/concepts/cluster-administration/addons/
    default:
    default: You can now join any number of machines by running the following on each node
    default: as root:
    default:
    default:   kubeadm join 192.168.2.10:6443 --token 14q1vx.z9nxyuc49e9v7rot --discovery-token-ca-cert-hash sha256:c9f757102120d5f776ddb05ac533ee3fb5e5006773f81953ed4753e8000da775
    default: Schritt 11 erledigt: Achtung - den 'kubeadm join' muss man sich sichern!
    default: Step 11 done: important - you must save the kubeadm join command for joining nodes later!
    default: ------------------
    default: Schritt 12: Konfiguration ins Home-Verzeichnis von vagrant kopieren
    default:  Step 12: copy configuration in vagrant's home directory
    default: ===================================================================
    default: Schritt 12 erledigt
    default: Step 12 done
    default: -------------------
    default: Schritt 13: Overlay Netzwerk installieren (wir verwenden Flannel)
    default: Step 13: installing overlay network (we are using Flannel here)
    default: =================================================================
    default: clusterrole.rbac.authorization.k8s.io/flannel created
    default: clusterrolebinding.rbac.authorization.k8s.io/flannel created
    default: serviceaccount/flannel created
    default: configmap/kube-flannel-cfg created
    default: daemonset.extensions/kube-flannel-ds created
    default: Schritt 13 erledigt
    default: Step 13 done
    default: -------------------

Now you need a little patience.
Vagrant will bring up the machine, update it, prepare it to run as a Kubernetes master node, and finally install and run Kubernetes.
You can track what is going on by following the steps outlined in the Vagrantfile.

During Step 11, Kubernetes outputs the "join command" for joining subsequent nodes to the cluster. Copy and save it somewhere; you will need this command, including the join token, to add worker nodes to the cluster later on.
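One way to avoid scrolling back through the terminal: capture the provisioning log and grep the join command out of it afterwards. A sketch (the file name "up.log" and the function are just illustrative; it assumes you ran "vagrant up 2>&1 | tee up.log" instead of a bare "vagrant up"):

```shell
# Pull the last "kubeadm join ..." line out of a saved provisioning log.
extract_join_cmd() {
  grep -o 'kubeadm join .*' "$1" | tail -n 1
}

# usage: extract_join_cmd up.log > join-command.txt
```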

Looking at the output of the last step (Step 13), you should be able to verify that the Flannel overlay network was installed successfully. When joining nodes to the cluster, the overlay network will be extended to those new nodes automatically.

OK, you now have a Kubernetes master node running on your machine:

  • hostname: cl2master
  • ip: 192.168.2.10
  • running in a private 192.168.2.0/24 network
Let's check if everything is working fine by ssh-ing into the VM and listing the Pods running in the "kube-system" namespace:
# see that the master node is working:
$ vagrant ssh
[vagrant@cl2master ~]$

# you are now inside of your new vagrant box...
# check the "kube-system" namespace:
$ kubectl get pods --namespace kube-system
NAME                                READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-hhmgb            1/1       Running   0          18h
coredns-78fcdf6894-j5mkx            1/1       Running   0          18h
etcd-cl2master                      1/1       Running   0          18h
kube-apiserver-cl2master            1/1       Running   0          18h
kube-controller-manager-cl2master   1/1       Running   0          18h
kube-flannel-ds-dxgc8               1/1       Running   0          18h
kube-proxy-tmtwc                    1/1       Running   0          18h
kube-scheduler-cl2master            1/1       Running   0          18h

Apart from the AGE column, your output should look something like the above.
Type "exit" to leave the box for now and find yourself back in your Git Bash.

A master node is quite useless on its own, because without explicit action, Kubernetes will not schedule your deployments on a master node. So let's join two worker nodes.

Create and Join Two Worker Nodes

Simply change into the "cl2node1" directory and again type "vagrant up":
# create the first worker node
$ cd ../cl2node1/
$ ls -al
total 12
drwxr-xr-x 1 vkoster 1049089    0 Sep 15 23:49 ./
drwxr-xr-x 1 vkoster 1049089    0 Sep 15 23:49 ../
-rw-r--r-- 1 vkoster 1049089 5492 Sep 15 23:49 Vagrantfile

# now the "vagrant up":
$ vagrant up
...
default: Complete!
    default: kubelet Dienst starten
    default: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
    default: Schritt 9 erledigt
    default: Step 9 done
    default: ------------------
    default: Schritt 10: schon mal die wichtigsten Images pullen
    default: Step 10: pulling down some images for later
    default: ===================================================
    default: [config/images] Pulled k8s.gcr.io/kube-apiserver-amd64:v1.11.3
    default: [config/images] Pulled k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
    default: [config/images] Pulled k8s.gcr.io/kube-scheduler-amd64:v1.11.3
    default: [config/images] Pulled k8s.gcr.io/kube-proxy-amd64:v1.11.3
    default: [config/images] Pulled k8s.gcr.io/pause:3.1
    default: [config/images] Pulled k8s.gcr.io/etcd-amd64:3.2.18
    default: [config/images] Pulled k8s.gcr.io/coredns:1.1.3
    default: Schritt 10 erledigt
    default:  Step 10 done
    default: ------------------
Setting up a worker node is done after completing Step 10 of the Vagrantfile. You should see something like the above at the end of the terminal output. Now we have to make this node join the cluster. You do this by running the join command generated by Kubernetes while setting up the master node:
# joining this node to the cluster
$ sudo kubeadm join 192.168.2.10:6443 --token 14q1vx.z9nxyuc49e9v7rot --discovery-token-ca-cert-hash sha256:c9f757102120d5f776ddb05ac533ee3fb5e5006773f81953ed4753e8000da775
...
This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
You should see something like this at the end of your terminal output. Kubernetes also tells us how to check whether this worked. To do so, exit the worker node VM, re-enter the master node directory, ssh into the box and run the "get nodes" command with kubectl:
# exit vm
$ exit

# now back in GitBash
$ cd ../cl2master/

# enter master node
$ vagrant ssh

# now inside master vm
$ kubectl get nodes
NAME        STATUS    ROLES     AGE       VERSION
cl2master   Ready     master    22h       v1.11.3
cl2node1    Ready     <none>    21m       v1.11.3
Type "exit" to leave the master and return to Git Bash.
OK, now we have a cluster consisting of one master and one worker node. Adding the second worker node follows exactly the same procedure as adding the first. Just make sure to change into the "cl2node2" directory:
# now in GitBash - cl2master directory...
$ cd ../cl2node2

# do the "vagrant up"
$ vagrant up
... lots of noise...
...
    default: Complete!
    default: kubelet Dienst starten
    default: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
    default: Schritt 9 erledigt
    default: Step 9 done
    default: ------------------
    default: Schritt 10: schon mal die wichtigsten Images pullen
    default: Step 10: pulling down some images for later
    default: ===================================================
    default: [config/images] Pulled k8s.gcr.io/kube-apiserver-amd64:v1.11.3
    default: [config/images] Pulled k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
    default: [config/images] Pulled k8s.gcr.io/kube-scheduler-amd64:v1.11.3
    default: [config/images] Pulled k8s.gcr.io/kube-proxy-amd64:v1.11.3
    default: [config/images] Pulled k8s.gcr.io/pause:3.1
    default: [config/images] Pulled k8s.gcr.io/etcd-amd64:3.2.18
    default: [config/images] Pulled k8s.gcr.io/coredns:1.1.3
    default: Schritt 10 erledigt
    default:  Step 10 done
    default: ------------------

# enter the new node via ssh to run the "Join-Command":
$ vagrant ssh

# now inside the new node...
$ sudo kubeadm join 192.168.2.10:6443 --token 14q1vx.z9nxyuc49e9v7rot --discovery-token-ca-cert-hash sha256:c9f757102120d5f776ddb05ac533ee3fb5e5006773f81953ed4753e8000da775
...
This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
Let's check again on the master node:
# exit the node2-box:
$ exit

# change into master node directory
$ cd ../cl2master/

# enter the master node via ssh
$ vagrant ssh

# now inside master node vm...
$ kubectl get nodes
NAME        STATUS    ROLES     AGE       VERSION
cl2master   Ready     master    23h       v1.11.3
cl2node1    Ready     <none>    1h        v1.11.3
cl2node2    Ready     <none>    32m       v1.11.3
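If you want a scriptable check instead of eyeballing the table, a tiny awk filter over the "kubectl get nodes" output does it. A sketch (run the usage line inside the master VM; the function name is just illustrative):

```shell
# Count the lines whose STATUS column (field 2) reads "Ready".
count_ready() {
  awk '$2 == "Ready" { n++ } END { print n + 0 }'
}

# usage (inside the master VM):
#   kubectl get nodes --no-headers | count_ready   # should print 3
```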

Summary

Now you have a Kubernetes cluster consisting of one master and two worker nodes running on your local machine. The respective VMs are configured like so:

  • master node
    • hostname: cl2master
    • ip: 192.168.2.10
  • worker node 1
    • hostname: cl2node1
    • ip: 192.168.2.11
  • worker node 2
    • hostname: cl2node2
    • ip: 192.168.2.12
The next post will discuss the Vagrantfiles for master and worker nodes respectively.

Disclaimer

The descriptions above worked for me every time up to the time of this writing. Nevertheless, I can't take any responsibility if something does not work for you now or in the future.



Friday, 11 November 2016

Docker Tag Re-Assignment and the "latest" Tag

Introduction

The Docker latest tag can be a bit of a mystery.

The confusion is mostly based on the legitimate assumption that an image tagged "latest" is the last, and therefore newest, image added to a given repository.
This assumption, legitimate or not, is definitely wrong. The tag "latest" is just a token like "0.1.1" or "iojdfh".

The only thing that makes "latest" special is that Docker assumes it as a default value for its "docker commit" or "docker tag" commands in case you do not specify a tag explicitly.

This topic is covered in great detail elsewhere; there is nothing left to add to those write-ups.

What I want to shed some light on here is:
  • how Docker treats tags as pointers to images
  • how Docker distinguishes between creating and re-assigning tags

As always, if you know all that then don’t waste your time and stop reading now.
If you are not so sure, keep going. I admit that some details of tagging have been a surprise to me.

Setup

Let's start out by creating a setup just rich enough to show the mechanics.
A repository containing two versioned images will do.

Please note upfront:
  • I'm on Windows 10, running Docker in a Vagrant-powered CentOS 7 VirtualBox VM
  • I’m working locally, not pushing to or pulling from Docker Hub (except when running the base image)
  • I'm not using a Dockerfile but create my images by committing containers

Now let’s create the images:

#
# Run a container based on the base image "centos":
$ docker run -it centos /bin/bash
#

This will start an interactive container running a bash shell, and you will find yourself within this shell.
Create a file "/test.sh", e.g. using vi, with this content:

#
#!/usr/bin/env bash
echo "Hello, I am version 01"

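If you'd rather not use vi inside the container, a here-document writes the same file in one go. A sketch (shown against /tmp so it runs anywhere; inside the container the target is /test.sh):

```shell
# Write the script without an editor; inside the container, use /test.sh instead.
cat > /tmp/test.sh <<'EOF'
#!/usr/bin/env bash
echo "Hello, I am version 01"
EOF

sh /tmp/test.sh   # quick check: prints "Hello, I am version 01"
```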
Now exit the container and commit it to a version 1 image:

#
$ docker commit -a "volker.koster@mt-ag.com" -m "Added /test.sh" \
-c 'CMD ["/bin/bash", "/test.sh"]' 333ba1982dbe vkoster/versionen:v01
#

Things to note here:
  • 333ba1982dbe is the container’s id
  • The --change option (-c) turns the image into an executable that just runs the /test.sh file
  • vkoster/versionen is the name of the repository for our images
  • v01 is our version tag
Had we not specified the version tag, Docker would have assigned "latest" as the default tag. Put differently, Docker did not assign the "latest" tag because we specified a tag explicitly.
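This defaulting rule is easy to model: if the reference has no tag part, assume "latest". A toy parser in plain shell (purely illustrative, and deliberately ignoring registry hosts with ports, e.g. registry:5000/img, where the last colon is not a tag separator):

```shell
# Toy model of Docker's tag defaulting: IMAGE[:TAG] -> TAG, else "latest".
# (Deliberately ignores registry hosts with ports, e.g. registry:5000/img.)
resolve_tag() {
  case "$1" in
    *:*) echo "${1##*:}" ;;   # everything after the last colon
    *)   echo "latest"  ;;    # no tag given - Docker's default
  esac
}

resolve_tag vkoster/versionen:v01   # prints "v01"
resolve_tag vkoster/versionen       # prints "latest"
```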

Now list the images and run a container to see if it works:

#
$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
vkoster/versionen     v01                 51e388437466        26 seconds ago      196.7 MB
$ docker run vkoster/versionen:v01       
Hello, I am version 01
#
(Omitting the "v01" tag now would have caused an error, because Docker would then assume "latest", which is not present.)

The image works. Let's tag it as "latest":

#
$ docker tag vkoster/versionen:v01 vkoster/versionen:latest
$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
vkoster/versionen     latest              51e388437466        10 minutes ago      196.7 MB
vkoster/versionen     v01                 51e388437466        10 minutes ago      196.7 MB
#

We now have one repository containing one image referenced by two tags.

Now let's create version 2 of this image.

#
$ docker run -it vkoster/versionen /bin/bash
#


(Now it is OK to omit the tag: Docker assumes "latest", which is now present.)

Within the running container, edit the /test.sh file like so:
#
#!/usr/bin/env bash
echo "Hello, I am version 02"
#

Save the file, exit the container and again commit it to an image:
#
$ docker commit -a "volker.koster@mt-ag.com" -m "Changed /test.sh" \
-c 'CMD ["/bin/bash", "/test.sh"]' 5f1f1063fd2a vkoster/versionen:v02
#
List the images and run the new version, just to be sure.
#
$ docker images -a
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
vkoster/versionen     v02                 435bb586c058        5 minutes ago       196.7 MB
vkoster/versionen     latest              51e388437466        57 minutes ago      196.7 MB
vkoster/versionen     v01                 51e388437466        57 minutes ago      196.7 MB
# …and run the image:
$ docker run vkoster/versionen:v02
Hello, I am version 02
#
We still have one repository, now containing two images.
As expected, and as covered in the write-ups mentioned in the introduction, the "latest" tag did not move but still references the first image.

Now we can play around with the tags a bit.

Questions

Question: What will happen if we try to tag the new image with the "latest" tag?
Obviously, there cannot be two "latest" tags in one repo.

Tag the new image with “latest” and display the results:
#
$ docker tag vkoster/versionen:v02 vkoster/versionen:latest
$ docker images -a
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
vkoster/versionen     latest              435bb586c058        9 minutes ago       196.7 MB
vkoster/versionen     v02                 435bb586c058        9 minutes ago       196.7 MB
vkoster/versionen     v01                 51e388437466        About an hour ago   196.7 MB
#

As it worked, this is what happened:
  • we tried to assign the "latest" tag
  • Docker recognized that the repository already contained the tag "latest"
  • therefore, the tag was not created but simply re-assigned to point to the new image
  • only one pointer, "v01", is left pointing to the first image

Question: Are all tags treated like this or is the “latest” tag special?

This works for all tags. Again, the tag “latest” is in no way special.
You can try this yourself by re-tagging "v01" or "v02".

Question: What happens when we remove the last tag from an image?
#
$ docker tag vkoster/versionen:latest vkoster/versionen:v01
$ docker images -a
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
vkoster/versionen     latest              435bb586c058        57 minutes ago      196.7 MB
vkoster/versionen     v01                 435bb586c058        57 minutes ago      196.7 MB
vkoster/versionen     v02                 435bb586c058        57 minutes ago      196.7 MB
<none>                <none>              51e388437466        About an hour ago   196.7 MB
#

Docker re-assigned the pointer with no problem at all, leaving our first image "naked".
This image can no longer be referenced via the repository.

Takeaway

The results can be summed up like this:
  • tags are just pointers to images
  • when tagging, Docker checks whether the respective tag is already present in the repository
    • if the tag is not present, it is created, pointing to the respective image
    • if the tag is present, it is simply re-assigned from one image to another
  • this works for all tags; there is no special treatment for "latest" whatsoever
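The pointer mechanics can even be sketched without Docker at all: a tag is nothing but a named variable holding an image ID, and "tagging" is plain re-assignment (a toy model using the image IDs from the examples above):

```shell
# Tags as plain pointers: re-assignment moves the pointer, never the image.
IMG1=51e388437466          # the v01 image
IMG2=435bb586c058          # the v02 image

tag_v01=$IMG1
tag_latest=$IMG1           # "docker tag ...:v01 ...:latest" - two tags, one image
tag_v02=$IMG2
tag_latest=$IMG2           # re-tagging "latest": no error, the pointer just moves

echo "latest -> $tag_latest"   # prints "latest -> 435bb586c058"
```

Note that after the last assignment, nothing points to $IMG1 via "latest" anymore, just as the first image ended up untagged in the listing above.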