Monday, 15 October 2018

Local Kubernetes Cluster on CentOS7 with Vagrant and VirtualBox on your Windows Laptop Part II

Part II: Explaining the Vagrantfile for provisioning the Cluster Nodes

Part I of this blog was about setting up our Kubernetes cluster "out of the box", i.e. without customizing the Vagrantfiles.
This part will explain the Vagrantfiles in detail. Every single step will be explained, so that you are ready for part III.
Part III will then explain how to customize certain parts of the Vagrantfile, namely the hostnames and IP addresses of your nodes and how to add additional nodes to the cluster.

Updates

03.11.2018

ToDo: Changes to the Vagrantfile have to be included here.

Vagrantfiles for Master and Worker 

As you have seen in Part I, every node is started from its respective Vagrantfile located in its own directory.
This means that we have two types of Vagrantfiles: one to provision the master node and one to provision each worker node.

Vagrant offers different provisioners like Ansible or Puppet. Here I'm using the Bash provisioner, which means that you may also run the commands manually in a Bash shell instead of using Vagrant.

Provisioning the Master Node

Let's take a look at the Vagrantfile of the master node, located in the cl2master directory.
The first section has little to do with Kubernetes and is more or less self-explanatory.
I will cover this quickly:

# -*- mode: ruby -*-
# vi: set ft=ruby :


Vagrant.configure("2") do |config|
  config.vm.box = "geerlingguy/centos7"

  # Der Box einen Namen geben
  # ...und Memory und CPU für Kubernetes hochsetzen
  # Assign a name to this box
  # ...define memory and number of CPUs
  config.vm.provider "virtualbox" do |v|
      v.name = "cl2master"
      v.memory = 2048
      v.cpus = 2
  end

  # Hostname definieren
  # Set Hostname
  config.vm.hostname = "cl2master"

  # Die korrekte Zeitzone einstellen:
  # Set local timezone
  if Vagrant.has_plugin?("vagrant-timezone")
     config.timezone.value = "Europe/Berlin"
  end

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  config.vm.network :private_network, ip: "192.168.2.10", auto_config: false

  # Enable provisioning with a shell script. Additional provisioners such as
  # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
  # documentation for more information about their specific syntax and use.

We define "geerlingguy/centos7" as our Vagrant base box. Everything we do from here on will be based on this image. I have been using it for a couple of years, every time I need a virtual machine based on CentOS 7, and it is very popular within the Vagrant community. Find everything you need to know about this box here.

The next section gives a name to the virtual machine and defines memory and cpu requirements.

Then a hostname is given to the machine and its time zone is set to where I live.
One remark here: this requires the vagrant-timezone plugin to be installed. If you don't have it installed and do not wish to, delete this section. Otherwise, set it to the time zone of your choosing.

Next we define a host-only private network. The network adapter will be reconfigured in a later step to make this absolutely fail-safe.

Now to provisioning the machine. It's done using Vagrant's shell provisioner in inline mode. Everything between "<<-SHELL" and "SHELL" is executed to provision the machine as a Kubernetes master node.
The first thing is to bring the machine up to date (step 0, if you like).
Step 1 then installs Docker on the machine:
  • add the docker repository to yum
  • install docker engine
  • create a docker group and add user "vagrant" to it
  • start the docker service
From now on, you can update your docker installation via yum.
echo "Box auf den neuesten Stand bringen..."
echo "bringing box up-to-date..."
sudo yum update -y

echo "Schritt 1: Docker installieren und einrichten"
echo "Step 1: install and set up Docker"
echo "============================================="
echo "...Docker Repo als yum Repo hinzufügen"
sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
echo "...Docker installieren"
sudo yum install docker-engine -y
echo "Docker group anlegen und vagrant-user zufügen..."
echo "create Docker group and add vagrant-user..."
sudo groupadd docker
sudo usermod -aG docker vagrant
sudo chkconfig docker on
sudo systemctl start docker
echo "Schritt 1 erledigt"
echo "Step 1 done"
echo "------------------"
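
If you run these commands manually in a shell instead of letting Vagrant do it, a quick sanity check (not part of the Vagrantfile) could look like this:
# verify that Docker is installed and the service is running:
$ sudo systemctl status docker
$ sudo docker version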

Step 2 adds the Kubernetes repository to yum.
Note that the baseurl is architecture-specific.
echo "Schritt 2: Kubernetes Repo als yum Repo hinzufügen"
echo "Step 2: Add Kubernetes Repo to yum Repo definitions"
echo "=================================================="
sudo tee /etc/yum.repos.d/kubernetes.repo <<-'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
echo "Schritt 2 erledigt"
echo "Step 2 done"
echo "------------------"
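
As a quick manual check (again not part of the Vagrantfile), you can ask yum whether the new repository is known:
# the kubernetes repo should show up among the enabled repos:
$ sudo yum repolist enabled | grep -i kubernetes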

Step 3 prevents iptables routing errors that have been reported specifically on CentOS systems.
Kernel parameters are modified by writing them to a sysctl config file and then explicitly loading them:
echo "Schritt 3: iptables routing errors unter CentOS verhindern"
echo "prevent iptables routing errors reported under CentOS"
echo "=========================================================="
sudo tee /etc/sysctl.d/k8s.conf <<-EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
echo "Schritt 3 erledigt"
echo "Step 3 done"
echo "------------------"
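
If you are doing this by hand, you can confirm that the settings are active; both values should report 1 (if sysctl complains that the keys do not exist, the br_netfilter kernel module is not loaded yet):
$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables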

Step 4 is to disable SELinux.
At the time of this writing, Kubernetes and SELinux do not work well together. Therefore SELinux gets disabled, both temporarily and permanently:
echo "Schritt 4: SeLinux abschalten, temporär and permanent"
echo "Step 4: switch SeLinux off temporarily and permanently"
echo "====================================================="
sudo tee -a /etc/sysconfig/selinux <<-EOF
SELINUX=disabled
SELINUXTYPE=targeted
EOF
# auch temporär abschalten:
sudo setenforce 0
echo "Schritt 4 erledigt"
echo "Step 4 done"
echo "------------------"
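
A quick manual check (not part of the Vagrantfile): after "setenforce 0" the current mode should be reported as "Permissive", and it will stay disabled after the next reboot:
$ getenforce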

Step 5 disables CentOS swap, again temporarily and permanently. Kubernetes will not install as long as swap is on.
The permanent part is done by removing the swap entry from /etc/fstab:
echo "Schritt 5: CentOS Swap abschalten, temporär und permanent"
echo "Step 5: turn CentOS swap off temporarily and permanently"
sudo swapoff -a
echo "swap permanent ausschalten:"
echo "swap-Zeile rauswerfen in temp-Datei und temp dann zurückkopieren..."
sudo grep -v "swap" /etc/fstab > temp
sudo mv temp /etc/fstab
echo "swap permanent ausgeschaltet!"
echo "Schritt 5 erledigt"
echo "Step 5 done"
echo "------------------"
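
If you run this by hand, you can verify the result: swap should show 0B and the fstab entry should be gone.
$ free -h
$ grep swap /etc/fstab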

Step 6 was necessary on my system, so I included it; maybe it is not necessary on yours.
The cgroup driver configured for Kubernetes has to match the one configured in Docker.
Kubernetes comes with "systemd", so we set the Docker configuration accordingly (it was "cgroupfs" on my installation):
echo "Schritt 6: docker cgroup driver auf systemd ändern"
echo "Step 6: change docker cgroup driver to systemd"
echo "=================================================="
sudo tee /etc/docker/daemon.json <<-EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker
echo "Schritt 6 erledigt"
echo "Step 6 done"
echo "------------------"
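
To verify manually that Docker picked up the new setting (not part of the Vagrantfile):
# should now report "Cgroup Driver: systemd":
$ sudo docker info | grep -i cgroup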

Step 7 is done more or less out of habit: I'm adding the other cluster nodes to the /etc/hosts file.
echo "Schritt 7: /etc/hosts anpassen (das muss man pro Node machen!)"
echo "Step 7: make all other nodes known via /etc/hosts"
echo "--------------------------------------------------------------"
sudo tee -a /etc/hosts <<-EOF
192.168.2.11 cl2node1
192.168.2.12 cl2node2
EOF
echo "Schritt 7 erledigt"
echo "Step 7 done"
echo "------------------"
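
A quick manual check that the node names now resolve locally:
$ getent hosts cl2node1 cl2node2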

In Step 8, the network interface of the machine gets reconfigured, due to problems Vagrant sometimes has setting a static IP address for a host-only network.
Assigning the desired IP via Vagrant's network definition failed for me. After reading through a lot of Stack Overflow threads, I ended up using the NetworkManager CLI (nmcli) to reconfigure the network adapter manually. This proved to work more reliably for me:
echo "Schritt 8: network interface manuell konfigurieren"
echo "Step 8: manually configure the network adapter"
echo "=================================================="
sudo nmcli con mod "Wired connection 1" ipv4.address "192.168.2.10/24"
sudo nmcli con mod "Wired connection 1" ipv4.method "manual"
sudo nmcli con down "Wired connection 1"
sudo nmcli con up "Wired connection 1"
echo "Schritt 8 erledigt"
echo "Step 8 done"
echo "------------------"
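
If you are doing this by hand, you can confirm that the adapter now carries the static address (not part of the Vagrantfile):
$ ip addr show | grep 192.168.2
$ nmcli con show "Wired connection 1" | grep ipv4.address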

Finally we are getting closer to installing Kubernetes itself.
In Step 9 we install the kubeadm package and start the kubelet service on our master node:
echo "Schritt 9: kubeadm installieren und starten"
echo "Step 9: install and start kubeadm"
echo "==========================================="
sudo yum install -y kubeadm
echo "kubelet Dienst starten"
sudo systemctl enable kubelet && sudo systemctl start kubelet
echo "Schritt 9 erledigt"
echo "Step 9 done"
echo "------------------"
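
When running this manually, note that the kubelet service will keep restarting until "kubeadm init" has run; that is expected at this point. A quick check:
$ kubeadm version
$ systemctl status kubelet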

Step 10 just pulls down some images that kubeadm needs later on anyway:
echo "Schritt 10: schon mal die wichtigsten Images pullen"
echo "Step 10: pulling down some images for later"
echo "==================================================="
kubeadm config images pull
echo "Schritt 10 erledigt"
echo " Step 10 done"
echo "------------------"
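
You can list the pre-pulled images manually if you like (not part of the Vagrantfile):
$ sudo docker images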

Step 11 is the heart of the matter. We initialize the master node using the "kubeadm init" command.
If all goes well (and after everything we have already done, it should), kubeadm will output a "join" command for adding worker nodes to the freshly initialized master.
Copy this command and save it somewhere. It is needed for adding worker nodes (see below).
echo "Schritt 11: Master Node initialisieren"
echo "Step 11: Initialize master node"
echo "======================================"
sudo kubeadm init --apiserver-advertise-address=192.168.2.10 --pod-network-cidr=10.244.0.0/16
echo "Schritt 11 erledigt: Achtung - den 'kubeadm join' muss man sich sichern!"
echo "Step 11 done: important - you must save the kubeadm join command for joining nodes later!"
echo "------------------"
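
If you lose the join command, you do not have to re-initialize the master: kubeadm can print a fresh one. Run this on the master node:
# regenerate a join command, including a new token:
$ sudo kubeadm token create --print-join-command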

Just two steps left!
Please note that these are performed as user "vagrant".

Step 12 creates a ".kube" directory under the vagrant home and copies the cluster configuration into this directory.
echo "Schritt 12: Konfiguration ins Home-Verzeichnis von vagrant kopieren"
echo " Step 12: copy configuration in vagrant's home directory"
echo "==================================================================="
# Diese Schritte werden unter dem User vagrant durchgeführt
# these steps will be performed as user vagrant
sudo -i -u vagrant bash << EOF
mkdir -p /home/vagrant/.kube
sudo cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
sudo chown vagrant:vagrant /home/vagrant/.kube/config
echo "Schritt 12 erledigt"
echo "Step 12 done"
echo "-------------------"

Step 13 (the final step for a master node) installs the Flannel Overlay Network on the master node.
This network will automatically extend to all subsequently added worker nodes.
echo "Schritt 13: Overlay Netzwerk installieren (wir verwenden Flannel)"
echo "Step 13: installing overlay network (we are using Flannel here)"
echo "================================================================="
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
echo "Schritt 13 erledigt"
echo "Step 13 done"
echo "-------------------"
EOF
SHELL

Now you know what is going on behind the scenes when you do the innocent "vagrant up" for a master node for the first time.
Because this is a shell provisioner, these are all regular shell commands which you could run manually instead of having Vagrant execute them.

Let's take a look at the Vagrantfile for a worker node.

Provisioning a Worker Node

The Vagrantfile for a worker node is more or less identical to the one for the master node, except that it ends after Step 10.

The only other differences are:

  • give the vm a different name: cl2node1
  • give the host a different hostname: cl2node1
  • give it a different IP: 192.168.2.11

When this is done, just switch into the correct directory, do the "vagrant up", ssh into the machine and execute the join command you have copied and saved earlier.
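
The whole worker-node procedure, sketched as shell commands (the token and hash are placeholders for the values from your saved "kubeadm init" output):
# bring up a worker node and join it to the cluster:
$ cd ../cl2node1
$ vagrant up
$ vagrant ssh
$ sudo kubeadm join 192.168.2.10:6443 --token <your-token> --discovery-token-ca-cert-hash sha256:<your-hash>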

Disclaimer

The descriptions above worked for me every time up to the time of this writing. Nevertheless I can't take any responsibility if something might not work for you now or in the future.


See also:
  • Part I (installing "out of the box" by cloning the repo)
  • Part III (modifying the vagrantfiles and adding a 4th node)

Monday, 17 September 2018

Local Kubernetes Cluster on CentOS7 with Vagrant and VirtualBox on your Windows Laptop Part I

Updates

03.11.2018

The Vagrantfile has been updated due to changes in Kubernetes' documentation:
  • now adding Docker-CE Repository and installing Docker-CE explicitly
  • minor changes of adding Kubeadm-Repo
  • Kubernetes and Docker now using cgroupfs cgroup driver (instead of systemd)
  • New URL to install and configure the Flannel overlay network
Check out the new Vagrantfiles here or perform the steps from the updated shell provisioner manually.

Without these changes the master node didn't reach state "ready" any more, due to network problems.
Now it works like a charm again.

What this post is about

This post is about setting up a real Kubernetes cluster, consisting of one master node and as many worker nodes as you like (or as your machine can spin up), within a local network on your laptop.

You can easily spin up Minikube on your machine, but maybe you prefer something a little closer to the real thing, like running each node on a separate VM connected via a private network within your host.

I promise that setting up a cluster consisting of one master node and two worker nodes will be as easy as using Minikube.

This post consists of three parts:
  1. Set up the cluster "out of the box" (this post)
  2. Explaining the Vagrantfiles for master and workers in detail
  3. Customizing the Vagrantfiles 

What you need up front

I'm doing this on my Windows 10 machine using VirtualBox for spinning up the VMs that constitute the cluster nodes and Vagrant to automate VirtualBox.
All nodes will be running CentOS 7.

On your Windows machine, you need to have a virtualization software running. I'm using VirtualBox.
Additionally you need to have Vagrant installed.

In case you do not have these installed already, they are really easy to get. Go to their respective download sites, download the installers, and install VirtualBox first, then Vagrant. I have been using this combination for years and have never encountered any serious problems.

Why Vagrant?

Using Vagrant, we can automate the process of provisioning the nodes.
Once we have created the "Vagrantfile" for the master node, a single command, "vagrant up", will spin up a VM running the master.
Each worker node we want to add has its own, even less complex Vagrantfile in another directory and can be started with its own "vagrant up".

All provisioning is done within the Vagrantfiles for master and workers respectively.
Every step performed there will be explained below.

Part I: Setting Up The Cluster "As Is"

Check Your Environment

As mentioned you need VirtualBox (or some other virtualization tool) and Vagrant running on your machine.
To check your environment, open a shell (I'm using Git Bash in all examples) and type "vagrant version":
[Screenshot: Check Vagrant Version]

To check your VirtualBox just open the GUI.
[Screenshot: About VirtualBox]
You can use "Check for Updates" in order to make sure you are on the latest release.
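
If you prefer the command line, and assuming VBoxManage is on your PATH (on Windows it usually lives in the VirtualBox installation directory), you can also check the VirtualBox version like this:
$ VBoxManage --version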

Get the Project

Open your Git Bash and create a directory for the project.
For example, let's create the directory "mycluster":
# create the "mycluster" directory:
$ mkdir /c/mycluster
$ cd /c/mycluster

Clone the project from GitHub.
This will get you the directories for the master node and two worker nodes, containing the respective Vagrantfiles.
# clone from GitHub:
$ git clone https://github.com/vkoster/k8s-local.git
Cloning into 'k8s-local'...
remote: Counting objects: 29, done.
remote: Compressing objects: 100% (14/14), done.
remote: Total 29 (delta 10), reused 29 (delta 10), pack-reused 0
Unpacking objects: 100% (29/29), done.

Change into the "k8s-local" directory and check what you got:
# see what you got:
$ ls -al
total 24
drwxr-xr-x 1 vkoster 1049089 0 Sep 15 23:49 ./
drwxr-xr-x 1 vkoster 1049089 0 Sep 15 23:49 ../
drwxr-xr-x 1 vkoster 1049089 0 Sep 15 23:49 k8s-local/

$ cd k8s-local
$ ls -al
total 14
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 ./
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 ../
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 .git/
-rw-r--r-- 1 vkoster 1049089  96 Sep 15 23:49 .gitignore
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 cl2master/
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 cl2nfs/
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 cl2node1/
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 cl2node2/
-rw-r--r-- 1 vkoster 1049089 897 Sep 15 23:49 README.md
-rw-r--r-- 1 vkoster 1049089  55 Sep 15 23:49 TODO.md

We are interested in these directories:

  • cl2master
    this is where the master node will be created
  • cl2node1 and cl2node2
    this is where the worker nodes will be created
(Ignore the cl2nfs directory for now. It's part of another post.)

Create the Master Node

Let's create the master node. Change into the master node directory and do the "vagrant up":
# creating the master node:
$ cd cl2master
$ vagrant up
...
*** a lot of terminal output ***
...
    default:
    default: Your Kubernetes master has initialized successfully!
    default:
    default: To start using your cluster, you need to run the following as a regular user:
    default:
    default:   mkdir -p $HOME/.kube
    default:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    default:   sudo chown $(id -u):$(id -g) $HOME/.kube/config
    default:
    default: You should now deploy a pod network to the cluster.
    default: Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    default:   https://kubernetes.io/docs/concepts/cluster-administration/addons/
    default:
    default: You can now join any number of machines by running the following on each node
    default: as root:
    default:
    default:   kubeadm join 192.168.2.10:6443 --token 14q1vx.z9nxyuc49e9v7rot --discovery-token-ca-cert-hash sha256:c9f757102120d5f776ddb05ac533ee3fb5e5006773f81953ed4753e8000da775
    default: Schritt 11 erledigt: Achtung - den 'kubeadm join' muss man sich sichern!
    default: Step 11 done: important - you must save the kubeadm join command for joining nodes later!
    default: ------------------
    default: Schritt 12: Konfiguration ins Home-Verzeichnis von vagrant kopieren
    default:  Step 12: copy configuration in vagrant's home directory
    default: ===================================================================
    default: Schritt 12 erledigt
    default: Step 12 done
    default: -------------------
    default: Schritt 13: Overlay Netzwerk installieren (wir verwenden Flannel)
    default: Step 13: installing overlay network (we are using Flannel here)
    default: =================================================================
    default: clusterrole.rbac.authorization.k8s.io/flannel created
    default: clusterrolebinding.rbac.authorization.k8s.io/flannel created
    default: serviceaccount/flannel created
    default: configmap/kube-flannel-cfg created
    default: daemonset.extensions/kube-flannel-ds created
    default: Schritt 13 erledigt
    default: Step 13 done
    default: -------------------

Now you need a little patience.
Vagrant will bring up the machine, update it, prepare it for running as a Kubernetes Master Node and finally install and run Kubernetes.
You can track what is going on by checking the steps outlined in the Vagrantfile.

During Step 11, Kubernetes will output a "join" command for joining subsequent nodes to the cluster. You should copy and save it somewhere; you will need this command with its join token to add worker nodes to the cluster later on.

Looking at the output of the last step (Step 13), you should be able to verify that the Flannel overlay network was installed successfully. When joining nodes to the cluster, the overlay network will be extended to those new nodes automatically.

OK, you now have a Kubernetes Master Node running on your machine:

  • hostname: cl2master
  • ip: 192.168.2.10
  • running in a private 192.168.2.0/24 network
Let's check if everything is working fine by ssh-ing into the VM and listing the Pods running in the "kube-system" namespace:
# see that the master node is working:
$ vagrant ssh
[vagrant@cl2master ~]$

# you are now inside of your new vagrant box...
# check the "kube-system" namespace:
$ kubectl get pods --namespace kube-system
NAME                                READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-hhmgb            1/1       Running   0          18h
coredns-78fcdf6894-j5mkx            1/1       Running   0          18h
etcd-cl2master                      1/1       Running   0          18h
kube-apiserver-cl2master            1/1       Running   0          18h
kube-controller-manager-cl2master   1/1       Running   0          18h
kube-flannel-ds-dxgc8               1/1       Running   0          18h
kube-proxy-tmtwc                    1/1       Running   0          18h
kube-scheduler-cl2master            1/1       Running   0          18h

Apart from the AGE column, your output should be something like the above.
Type "exit" to leave the box for now and find yourself in your GitBash again.

A Master Node is quite useless on its own because, without taking explicit action, Kubernetes will not schedule your deployments on a Master Node. So let's join two Worker Nodes.
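
(For completeness: the "explicit action" would be removing the master's NoSchedule taint, roughly as shown below. For this setup we simply add worker nodes instead.)
# run on the master - allows scheduling regular pods on the master node:
$ kubectl taint nodes cl2master node-role.kubernetes.io/master-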

Create and Join Two Worker Nodes

Simply change into the "cl2node1" directory and again type "vagrant up":
# create the first worker node
$ cd ../cl2node1/
$ ls -al
total 12
drwxr-xr-x 1 vkoster 1049089    0 Sep 15 23:49 ./
drwxr-xr-x 1 vkoster 1049089    0 Sep 15 23:49 ../
-rw-r--r-- 1 vkoster 1049089 5492 Sep 15 23:49 Vagrantfile

# now the "vagrant up":
$ vagrant up
...
default: Complete!
    default: kubelet Dienst starten
    default: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
    default: Schritt 9 erledigt
    default: Step 9 done
    default: ------------------
    default: Schritt 10: schon mal die wichtigsten Images pullen
    default: Step 10: pulling down some images for later
    default: ===================================================
    default: [config/images] Pulled k8s.gcr.io/kube-apiserver-amd64:v1.11.3
    default: [config/images] Pulled k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
    default: [config/images] Pulled k8s.gcr.io/kube-scheduler-amd64:v1.11.3
    default: [config/images] Pulled k8s.gcr.io/kube-proxy-amd64:v1.11.3
    default: [config/images] Pulled k8s.gcr.io/pause:3.1
    default: [config/images] Pulled k8s.gcr.io/etcd-amd64:3.2.18
    default: [config/images] Pulled k8s.gcr.io/coredns:1.1.3
    default: Schritt 10 erledigt
    default:  Step 10 done
    default: ------------------
Setting up a Worker Node is done after completing Step 10 of the Vagrantfile. You should see something like the above at the end of the terminal output. Now we have to make this node join the cluster. To do so, ssh into the new node ("vagrant ssh") and run the join command generated by Kubernetes while setting up the Master Node:
# ssh into the new node, then join it to the cluster
$ vagrant ssh
$ sudo kubeadm join 192.168.2.10:6443 --token 14q1vx.z9nxyuc49e9v7rot --discovery-token-ca-cert-hash sha256:c9f757102120d5f776ddb05ac533ee3fb5e5006773f81953ed4753e8000da775
...
This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
You should see something like this at the end of your terminal output. At the end, Kubernetes tells us how to check whether this worked. To do so, exit the worker node VM, re-enter the Master Node directory, ssh into the box and run the "get nodes" command with kubectl:
# exit vm
$ exit

# now back in GitBash
$ cd ../cl2master/

# enter master node
$ vagrant ssh

# now inside master vm
$ kubectl get nodes
NAME        STATUS    ROLES     AGE       VERSION
cl2master   Ready     master    22h       v1.11.3
cl2node1    Ready     <none>    21m       v1.11.3
Type "exit" to leave the master and enter GitBash again.
OK, now we have a cluster consisting of one master and one worker node. Adding the second worker node is exactly the same procedure as adding the first. Just make sure to change into the "cl2node2" directory:
# now in GitBash - cl2master directory...
$ cd ../cl2node2

# do the "vagrant up"
$ vagrant up
... lots of noise...
...
    default: Complete!
    default: kubelet Dienst starten
    default: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
    default: Schritt 9 erledigt
    default: Step 9 done
    default: ------------------
    default: Schritt 10: schon mal die wichtigsten Images pullen
    default: Step 10: pulling down some images for later
    default: ===================================================
    default: [config/images] Pulled k8s.gcr.io/kube-apiserver-amd64:v1.11.3
    default: [config/images] Pulled k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
    default: [config/images] Pulled k8s.gcr.io/kube-scheduler-amd64:v1.11.3
    default: [config/images] Pulled k8s.gcr.io/kube-proxy-amd64:v1.11.3
    default: [config/images] Pulled k8s.gcr.io/pause:3.1
    default: [config/images] Pulled k8s.gcr.io/etcd-amd64:3.2.18
    default: [config/images] Pulled k8s.gcr.io/coredns:1.1.3
    default: Schritt 10 erledigt
    default:  Step 10 done
    default: ------------------

# enter the new node via ssh to run the "Join-Command":
$ vagrant ssh

# now inside the new node...
$ sudo kubeadm join 192.168.2.10:6443 --token 14q1vx.z9nxyuc49e9v7rot --discovery-token-ca-cert-hash sha256:c9f757102120d5f776ddb05ac533ee3fb5e5006773f81953ed4753e8000da775
...
This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
Let's check again on the master node:
# exit the node2-box:
$ exit

# change into master node directory
$ cd ../cl2master/

# enter the master node via ssh
$ vagrant ssh

# now inside master node vm...
$ kubectl get nodes
NAME        STATUS    ROLES     AGE       VERSION
cl2master   Ready     master    23h       v1.11.3
cl2node1    Ready     <none>    1h        v1.11.3
cl2node2    Ready     <none>    32m       v1.11.3

Summary

Now you have a Kubernetes cluster consisting of one master and two worker nodes running on your local machine. The respective VMs are configured like so:

  • master node
    • hostname: cl2master
    • ip: 192.168.2.10
  • worker node 1
    • hostname: cl2node1
    • ip: 192.168.2.11
  • worker node 2
    • hostname: cl2node2
    • ip: 192.168.2.12
The next post will discuss the Vagrantfiles for master and worker nodes respectively.

Disclaimer

The descriptions above worked for me every time up to the time of this writing. Nevertheless I can't take any responsibility if something might not work for you now or in the future.


See also: