Part II: Explaining the Vagrantfile for provisioning the Cluster Nodes
Part I of this blog was about setting up our Kubernetes cluster "out of the box", i.e. without customizing the Vagrantfiles.
This part explains the Vagrantfiles in detail, step by step, so that you are ready for Part III.
Part III will then explain how to customize certain parts of the Vagrantfile, namely the hostnames and IP addresses of your nodes, and how to add more nodes to the cluster.
Updates
03.11.2018
ToDo: Changes to the Vagrantfile have to be included here.
Vagrantfiles for Master and Worker
As you have seen in Part I, every node is started from its respective Vagrantfile located in its own directory.
This means that we have two types of Vagrantfiles: one to provision the master node and one to provision each worker node.
Vagrant offers different provisioners like Ansible or Puppet. Here I'm using the shell provisioner with Bash, which means that you could also run the commands manually in a Bash shell instead of letting Vagrant execute them.
Provisioning the Master Node
Let's take a look at the Vagrantfile of the master node, located in the cl2master directory.
The first section has little to do with Kubernetes and is more or less self-explanatory.
I will cover this quickly:
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "geerlingguy/centos7"

  # Der Box einen Namen geben
  # ...und Memory und CPU für Kubernetes hochsetzen
  # Assign a name to this box
  # ...define memory and number of CPUs
  config.vm.provider "virtualbox" do |v|
    v.name = "cl2master"
    v.memory = 2048
    v.cpus = 2
  end

  # Hostname definieren
  # Set hostname
  config.vm.hostname = "cl2master"

  # Die korrekte Zeitzone einstellen:
  # Set local timezone
  if Vagrant.has_plugin?("vagrant-timezone")
    config.timezone.value = "Europe/Berlin"
  end

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  config.vm.network :private_network, ip: "192.168.2.10", auto_config: false

  # Enable provisioning with a shell script. Additional provisioners such as
  # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
  # documentation for more information about their specific syntax and use.
We define "geerlingguy/centos7" as our Vagrant Basebox. Everything we do from here, will be based on this image. I'm using this image since a couple of years, every time I need virtual machine based on CentOS 7. It is very popular within the vagrant community. Find everything you need to know about this box here.
The next section gives the virtual machine a name and defines its memory and CPU requirements.
Then the machine gets a hostname and its time zone is set to where I live.
One remark here: this requires the vagrant-timezone plugin to be installed. If you don't have it installed and do not wish to, delete this section. Otherwise set it to the time zone of your choosing.
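If you do want to keep the timezone block, the plugin is installed on the host machine (not inside the VM) using Vagrant's standard plugin mechanism:

vagrant plugin install vagrant-timezone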
Next, a host-only network is configured (note the auto_config: false). The network adapter will be reconfigured manually in a later step to make this absolutely fail-safe.
Now to provisioning the machine. It's done using Vagrant's shell provisioner in inline mode. Everything between "<<-SHELL" and "SHELL" is done to provision the machine as a Kubernetes master node.
The first thing to do is bring the machine up to date (step 0, if you like).
Step 1 then brings Docker onto the machine:
- add the docker repository to yum
- install docker engine
- create a docker group and add user "vagrant" to it
- start the docker service
From now on, you can update your Docker installation via yum (a short example follows the listing below).
echo "Box auf den neuesten Stand bringen..." echo "bringing box up-to-date..." sudo yum update -y echo "Schritt 1: Docker installieren und einrichten" echo "Step 1: install and set up Docker" echo "=============================================" echo "...Docker Repo als yum Repo hinzufügen" sudo tee /etc/yum.repos.d/docker.repo <<-'EOF' [dockerrepo] name=Docker Repository baseurl=https://yum.dockerproject.org/repo/main/centos/7/ enabled=1 gpgcheck=1 gpgkey=https://yum.dockerproject.org/gpg EOF echo "...Docker installieren" sudo yum install docker-engine -y echo "Docker group anlegen und vagrant-user zufügen..." echo "create Docker group and add vagrant-user..." sudo groupadd docker sudo usermod -aG docker vagrant sudo chkconfig docker on sudo systemctl start docker echo "Schritt 1 erledigt" echo "Step 1 done" echo "------------------"
Step 2 adds the Kubernetes repository to yum.
Note that the baseurl is architecture specific (here: x86_64).
echo "Schritt 2: Kubernetes Repo als yum Repo hinzufügen"Note that there base-url is architecture specific.
echo "Step 2: Add Kubernetes Repo to yum Repo definitions" echo "==================================================" sudo tee /etc/yum.repos.d/kubernetes.repo <<-'EOF' [kubernetes] name=Kubernetes baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 enabled=1 gpgcheck=1 repo_gpgcheck=1 gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg EOF echo "Schritt 2 erledigt" echo "Step 2 done" echo "------------------"
Step 3 is done to prevent iptables routing errors that have been reported specifically on CentOS systems.
Kernel parameters are modified by writing them to a sysctl config file and then explicitly loading them:
echo "Schritt 3: iptables routing errors unter CentOS verhindern"
echo "prevent iptables routing errors reported under CentOS" echo "==========================================================" sudo tee /etc/sysctl.d/k8s.conf <<-EOF net.bridge.bridge-nf-call-ip6tables = 1 net.bridge.bridge-nf-call-iptables = 1 EOF sudo sysctl --system echo "Schritt 3 erledigt" echo "Step 3 done" echo "------------------"
Step 4 is to disable SELinux.
At the time of this writing, Kubernetes and SELinux do not work well together. Therefore SELinux gets disabled both temporarily and permanently:
echo "Schritt 4: SeLinux abschalten, temporär and permanent" echo "Step 4: switch SeLinux off temporarily and permanently" echo "=====================================================" sudo tee -a /etc/sysconfig/selinux <<-EOF SELINUX=disabled SELINUXTYPE=targeted EOF # auch temporär abschalten: sudo setenforce 0 echo "Schritt 4 erledigt" echo "Step 4 done" echo "------------------"
Step 5 is done to disable CentOS swap, again temporarily and permanently. Kubernetes will not install as long as swap is on.
We do this by removing the swap entry from /etc/fstab:
echo "Schritt 5: CentoOS Swap abschalten, temporär und permanent" echo "Step 5: turn CentOS off temporarily and permanently" sudo swapoff -a echo "swap permanent ausschalten:" echo "swap-Zeile rauswerfen in temp-Datei und temp dann zurückkopieren..." sudo grep -v "swap" /etc/fstab > temp sudo mv temp /etc/fstab echo "swap permanent ausgeschaltet!" echo "Schritt 5 erledigt" echo "Step 5 done" echo "------------------"
Step 6 was needed on my system, so I included it. Maybe it is not necessary on yours.
The cgroup driver configured for Kubernetes has to match the one configured in Docker.
Kubernetes comes with "systemd", so we set the Docker configuration accordingly (it was "cgroupfs" on my installation):
echo "Schritt 6: docker cgroup driver auf systemd ändern" echo "Step 6: change docker cgroup driver to systemd" echo "==================================================" sudo tee /etc/docker/daemon.json >>-EOF { "exec-opts": ["native.cgroupdriver=systemd"] } EOF sudo systemctl restart docker echo "Schritt 6 erledigt" echo "Step 6 done" echo "------------------"
Step 7 is done more or less out of habit. I'm adding the other cluster nodes to the /etc/hosts file.
echo "Schritt 7: /etc/hosts anpassen (das muss man pro Node machen!)" echo "Step 7: make all other nodes known via /etc/hosts" echo "--------------------------------------------------------------" sudo tee -a /etc/hosts >>-EOF 192.168.2.11 cl2node1 192.168.2.12 cl2node2 EOF echo "Schritt 7 erledigt" echo "Step 7 done" echo "------------------"
In Step 8, the network interface of the machine gets reconfigured, due to problems Vagrant may have with setting a static IP address for a host-only network.
Assigning the desired IP via Vagrant's network definition failed for me. After reading through a lot of Stack Overflow threads, I ended up using the NetworkManager CLI (nmcli) to reconfigure the network adapter manually. This proved to work more reliably for me:
echo "Schritt 8: network interface manuell konfigurieren" echo "Step 8: manually configure the network adapter" echo "==================================================" sudo nmcli con mod "Wired connection 1" ipv4.address "192.168.2.10/24" sudo nmcli con mod "Wired connection 1" ipv4.method "manual" sudo nmcli con down "Wired connection 1" sudo nmcli con up "Wired connection 1" echo "Schritt 8 erledigt" echo "Step 8 done" echo "------------------"
Finally we are getting closer to installing Kubernetes itself.
In Step 9 we install the kubeadm package and start the kubelet service on our master node:
echo "Schritt 9: kubeadm installieren und starten" echo "Step 9: install and start kubeadm" echo "===========================================" sudo yum install -y kubeadm echo "kubelet Dienst starten" sudo systemctl enable kubelet && systemctl start kubelet echo "Schritt 9 erledigt" echo "Step 9 done" echo "------------------"
Step 10 just pulls down some images that kubeadm needs later on anyway:
echo "Schritt 10: schon mal die wichtigsten Images pullen" echo "Step 10: pulling down some images for later" echo "===================================================" kubeadm config images pull echo "Schritt 10 erledigt" echo " Step 10 done" echo "------------------"
Step 11 is the heart of the matter. We initialize the master node using the "kubeadm init" command.
If all goes well (and after everything we already did, it should), kubeadm will output a "join" command for adding worker nodes to the master node we just initialized.
Copy this command and save it somewhere. It is needed for adding worker nodes (see below).
echo "Schritt 11: Master Node initialisieren" echo "Step 11: Initialize master node" echo "======================================" sudo kubeadm init --apiserver-advertise-address=192.168.2.10 --pod-network-cidr=10.244.0.0/16 echo "Schritt 11 erledigt: Achtung - den 'kubeadm join' muss man sich sichern!" echo "Step 11 done: important - you must save the kubeadm join command for joining nodes later!" echo "------------------"
Just two steps left!
Please note that these are performed as user "vagrant".
Step 12 creates a ".kube" directory under the vagrant home and copies the cluster configuration into this directory.
echo "Schritt 12: Konfiguration ins Home-Verzeichnis von vagrant kopieren" echo " Step 12: copy configuration in vagrant's home directory" echo "===================================================================" # Diese Schritte werden unter dem User vagrant durchgeführt # these steps will be performed as user vagrant sudo -i -u vagrant bash << EOF mkdir -p /home/vagrant/.kube sudo cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config sudo chown vagrant:vagrant /home/vagrant/.kube/config echo "Schritt 12 erledigt" echo "Step 12 done" echo "-------------------"
Step 13 (the final step for a master node) installs the Flannel Overlay Network on the master node.
This network will automatically extend to all subsequently added worker nodes.
echo "Schritt 13: Overlay Netzwerk installieren (wir verwenden Flannel)" echo "Step 13: installing overlay network (we are using Flannel here)" echo "=================================================================" kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml echo "Schritt 13 erledigt" echo "Step 13 done" echo "-------------------" EOF SHELL
Now you know what is going on behind the scenes when you do the innocent "vagrant up" for a master node for the first time.
Because this is a shell provisioner, these are all regular shell commands which you could run manually instead of having Vagrant execute them.
Let's take a look at the vagrantfile for a worker node.
Provisioning a Worker Node
The Vagrantfile for a worker node is more or less identical to the one for the master node, except that it ends after Step 10. The only other differences are:
- give the vm a different name: cl2node1
- give the host a different hostname: cl2node1
- give it a different IP: 192.168.2.11
When this is done, just switch into the correct directory, do the "vagrant up", ssh into the machine and execute the join command you have copied and saved earlier.
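Assuming the worker's directory is named after its node (cl2node1 for the first worker), that boils down to something like this. The join line is the one you saved from Step 11, shown here only with placeholders:

cd cl2node1
vagrant up
vagrant ssh
sudo kubeadm join 192.168.2.10:6443 --token <your-token> \
    --discovery-token-ca-cert-hash sha256:<your-ca-hash>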
Disclaimer
The descriptions above worked for me every time, up to the time of this writing. Nevertheless, I can't take any responsibility if something doesn't work for you now or in the future.
See also:
- Part I (installing "out of the box" by cloning the repo)
- Part III (modifying the vagrantfiles and adding a 4th node)