Updates
03.11.2018
The Vagrantfile has been updated due to changes in the Kubernetes documentation:
- Docker CE is now installed explicitly from the Docker CE repository
- minor changes to the way the kubeadm repository is added
- Kubernetes and Docker now use the cgroupfs cgroup driver (instead of systemd)
- new URL for installing and configuring the Flannel overlay network
Check out the new Vagrantfiles here or perform the steps from the updated shell provisioner manually.
Without these changes the master node no longer reached the "Ready" state, due to network problems.
Now it works like a charm again.
What this post is about
This post is about setting up a real kubernetes cluster consisting of 1 master node and as many worker nodes as you like (or your machine can spin up) within a local network on your laptop.
You can easily spin up a Kubernetes Minikube on your machine, but maybe you prefer something a little bit closer to the real thing, like running each node on a separate VM connected via a private network within your host.
I promise that setting up a cluster consisting of one master node and two worker nodes will be as easy as using Minikube.
This post consists of three parts:
- Set up the cluster "out of the box" (this post)
- Explaining the Vagrantfiles for master and workers in detail
- Customizing the Vagrantfiles
What you need up front
I'm doing this on my Windows 10 machine using VirtualBox for spinning up the VMs that constitute the cluster nodes and Vagrant to automate VirtualBox.
All nodes will be running CentOS 7.
On your Windows machine, you need to have a virtualization software running. I'm using VirtualBox.
Additionally you need to have Vagrant installed.
In case you do not have these installed already, they are really easy to get. Go to their respective download sites, download the installers, and install VirtualBox first, then Vagrant. I've been using this combination for years and have never encountered any serious problems.
Why Vagrant?
Using Vagrant, we can automate the process of provisioning the nodes.
Once we have created the "Vagrantfile" for the master node, one command, "vagrant up", will spin up a VM running the master.
Each worker node we want to add has its own, even less complex, Vagrantfile in a separate directory and can be started with another "vagrant up".
All provisioning is done within the vagrantfiles for master and workers respectively.
Every step performed there will be explained below.
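The day-to-day Vagrant workflow for each node directory boils down to a handful of standard Vagrant CLI commands; here is a short sketch, where "cl2master" simply stands for whichever node directory you are in:

```shell
# Run these from the directory containing the node's Vagrantfile,
# e.g. cl2master:
cd cl2master
vagrant up        # create and provision the VM (the first run does everything)
vagrant ssh       # open a shell inside the VM
vagrant halt      # shut the VM down, keeping its state on disk
vagrant destroy   # delete the VM entirely (you can "vagrant up" again later)
```

Each VM's state lives alongside its Vagrantfile, so "vagrant up" in a node's directory always targets that one node.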
Part I: Setting Up The Cluster "As Is"
Check Your Environment
As mentioned you need VirtualBox (or some other virtualization tool) and Vagrant running on your machine.
To check your environment, open a shell (I'm using Git Bash in all examples) and type "vagrant version":
[Image: Check Vagrant Version]
To check your VirtualBox just open the GUI.
[Image: About VirtualBox]
You can use "Check for Updates" in order to make sure you are on the latest release.
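Both checks can also be done from the command line. Note that "VBoxManage" is only found if VirtualBox's installation directory is on your PATH, so treat that part as optional:

```shell
# Verify Vagrant is installed and on the PATH:
vagrant version

# Verify VirtualBox from the CLI (requires VirtualBox's install
# directory on the PATH; otherwise just open the GUI as described above):
VBoxManage --version
```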
Get the Project
Open your Git Bash and create a directory for the project.
For example, let's create the directory "mycluster":
# create the "mycluster" directory:
$ mkdir /c/mycluster
$ cd /c/mycluster
Clone the project from GitHub.
This will get you the directories for the master node and two worker nodes, containing the respective Vagrantfiles.
# clone from GitHub:
$ git clone https://github.com/vkoster/k8s-local.git
Cloning into 'k8s-local'...
remote: Counting objects: 29, done.
remote: Compressing objects: 100% (14/14), done.
remote: Total 29 (delta 10), reused 29 (delta 10), pack-reused 0
Unpacking objects: 100% (29/29), done.
Change into the "k8s-local" directory and check what you got:
# see what you got:
$ ls -al
total 24
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 ./
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 ../
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 k8s-local/
$ cd k8s-local
$ ls -al
total 14
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 ./
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 ../
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 .git/
-rw-r--r-- 1 vkoster 1049089  96 Sep 15 23:49 .gitignore
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 cl2master/
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 cl2nfs/
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 cl2node1/
drwxr-xr-x 1 vkoster 1049089   0 Sep 15 23:49 cl2node2/
-rw-r--r-- 1 vkoster 1049089 897 Sep 15 23:49 README.md
-rw-r--r-- 1 vkoster 1049089  55 Sep 15 23:49 TODO.md
We are interested in these directories:
- cl2master: this is where the master node will be created
- cl2node1 and cl2node2: this is where the worker nodes will be created
(Ignore the cl2nfs directory for now. It's part of another post.)
Create the Master Node
Let's create the master node. Change into the master node directory and do the "vagrant up":
# creating the master node:
$ cd cl2master
$ vagrant up
...
*** a lot of terminal output ***
...
default:
default: Your Kubernetes master has initialized successfully!
default:
default: To start using your cluster, you need to run the following as a regular user:
default:
default:   mkdir -p $HOME/.kube
default:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
default:   sudo chown $(id -u):$(id -g) $HOME/.kube/config
default:
default: You should now deploy a pod network to the cluster.
default: Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
default:   https://kubernetes.io/docs/concepts/cluster-administration/addons/
default:
default: You can now join any number of machines by running the following on each node
default: as root:
default:
default:   kubeadm join 192.168.2.10:6443 --token 14q1vx.z9nxyuc49e9v7rot --discovery-token-ca-cert-hash sha256:c9f757102120d5f776ddb05ac533ee3fb5e5006773f81953ed4753e8000da775
default: Schritt 11 erledigt: Achtung - den 'kubeadm join' muss man sich sichern!
default: Step 11 done: important - you must save the kubeadm join command for joining nodes later!
default: ------------------
default: Schritt 12: Konfiguration ins Home-Verzeichnis von vagrant kopieren
default: Step 12: copy configuration in vagrant's home directory
default: ===================================================================
default: Schritt 12 erledigt
default: Step 12 done
default: -------------------
default: Schritt 13: Overlay Netzwerk installieren (wir verwenden Flannel)
default: Step 13: installing overlay network (we are using Flannel here)
default: =================================================================
default: clusterrole.rbac.authorization.k8s.io/flannel created
default: clusterrolebinding.rbac.authorization.k8s.io/flannel created
default: serviceaccount/flannel created
default: configmap/kube-flannel-cfg created
default: daemonset.extensions/kube-flannel-ds created
default: Schritt 13 erledigt
default: Step 13 done
default: -------------------
Now you need a little patience.
Vagrant will bring up the machine, update it, prepare it for running as a Kubernetes Master Node and finally install and run Kubernetes.
You can track what is going on by checking the steps outlined in the Vagrantfile.
During Step 11, Kubernetes will output the "join command" for joining subsequent nodes to the cluster. You should copy and save it somewhere. You will need this command and the join token to add worker nodes to the cluster later on.
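If you are curious what the long sha256 value in the join command actually is: it is the SHA-256 digest of the DER-encoded public key of the cluster's CA certificate. The sketch below illustrates this with a throwaway certificate generated locally; the file names are hypothetical stand-ins, not the cluster's real CA:

```shell
# Generate a throwaway CA certificate (a stand-in for the cluster's
# /etc/kubernetes/pki/ca.crt - NOT the real one):
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=kubernetes" -days 1 2>/dev/null

# Hash the DER-encoded public key - this is the value kubeadm prints
# after "sha256:" in the join command:
openssl x509 -pubkey -noout -in ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex
```

Running the same pipeline against the real ca.crt on the master reproduces the hash from the join command, which is handy for verifying a token you wrote down.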
Looking at the output of the last step (Step 13), you should be able to verify that the Flannel overlay network was installed successfully. When joining nodes to the cluster, the overlay network will be extended to those new nodes automatically.
OK, you now have a Kubernetes Master Node running on your machine:
- hostname: cl2master
- ip: 192.168.2.10
- running in a private 192.168.2.0/24 network
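If you lose the join command from Step 11, you do not have to recreate the master; kubeadm can print a fresh one. This sketch assumes a kubeadm version that supports the --print-join-command flag (available since v1.9):

```shell
# Inside the master VM ("vagrant ssh" from cl2master first):

# Create a new token and print a complete, ready-to-use join command:
sudo kubeadm token create --print-join-command

# List existing tokens and their expiry (tokens expire after 24h by default):
sudo kubeadm token list
```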
Let's check if everything is working fine by ssh-ing into the VM and listing the Pods running in the "kube-system" namespace:
# see that the master node is working:
$ vagrant ssh
[vagrant@cl2master ~]$
# you are now inside of your new vagrant box...
# check the "kube-system" namespace:
$ kubectl get pods --namespace kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-78fcdf6894-hhmgb            1/1     Running   0          18h
coredns-78fcdf6894-j5mkx            1/1     Running   0          18h
etcd-cl2master                      1/1     Running   0          18h
kube-apiserver-cl2master            1/1     Running   0          18h
kube-controller-manager-cl2master   1/1     Running   0          18h
kube-flannel-ds-dxgc8               1/1     Running   0          18h
kube-proxy-tmtwc                    1/1     Running   0          18h
kube-scheduler-cl2master            1/1     Running   0          18h
Apart from the AGE column, your output should be something like the above.
Type "exit" to leave the box for now and find yourself in your GitBash again.
A Master Node is quite useless on its own because, without explicit action, Kubernetes will not schedule your deployments on it. So let's join two Worker Nodes.
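The "explicit action" mentioned above refers to the NoSchedule taint that kubeadm places on the master. The sketch below shows how to inspect it and, should you ever want single-node scheduling, how to remove it; removing the taint is optional and not needed for this walkthrough:

```shell
# Inside the master VM - show the taint that keeps workloads off the master:
kubectl describe node cl2master | grep -i taint

# Optional: allow normal pods on the master by removing the taint.
# The trailing "-" means "remove". Not required for this setup:
kubectl taint nodes cl2master node-role.kubernetes.io/master:NoSchedule-
```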
Create and Join Two Worker Nodes
Simply change into the "cl2node1" directory and again type "vagrant up":
# create the first worker node
$ cd ../cl2node1/
$ ls -al
total 12
drwxr-xr-x 1 vkoster 1049089    0 Sep 15 23:49 ./
drwxr-xr-x 1 vkoster 1049089    0 Sep 15 23:49 ../
-rw-r--r-- 1 vkoster 1049089 5492 Sep 15 23:49 Vagrantfile
# now the "vagrant up":
$ vagrant up
...
default: Complete!
default: kubelet Dienst starten
default: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
default: Schritt 9 erledigt
default: Step 9 done
default: ------------------
default: Schritt 10: schon mal die wichtigsten Images pullen
default: Step 10: pulling down some images for later
default: ===================================================
default: [config/images] Pulled k8s.gcr.io/kube-apiserver-amd64:v1.11.3
default: [config/images] Pulled k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
default: [config/images] Pulled k8s.gcr.io/kube-scheduler-amd64:v1.11.3
default: [config/images] Pulled k8s.gcr.io/kube-proxy-amd64:v1.11.3
default: [config/images] Pulled k8s.gcr.io/pause:3.1
default: [config/images] Pulled k8s.gcr.io/etcd-amd64:3.2.18
default: [config/images] Pulled k8s.gcr.io/coredns:1.1.3
default: Schritt 10 erledigt
default: Step 10 done
default: ------------------
Setting up a Worker Node is done after completing step 10 of the Vagrantfile.
You should see something like the above at the end of the terminal output.
Now we have to make this node join the cluster.
You do this by running the join-command generated by Kubernetes while setting up the Master Node:
# enter the new node via ssh, then join it to the cluster:
$ vagrant ssh
# now inside the worker node...
$ sudo kubeadm join 192.168.2.10:6443 --token 14q1vx.z9nxyuc49e9v7rot --discovery-token-ca-cert-hash sha256:c9f757102120d5f776ddb05ac533ee3fb5e5006773f81953ed4753e8000da775
...
This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
You should see something like this at the end of your terminal output.
At the end, Kubernetes tells us how to check whether this worked.
To do this, exit the worker node VM, change back into the master node directory, ssh into the box, and run the "get nodes" command with kubectl:
# exit vm
$ exit
# now back in GitBash
$ cd ../cl2master/
# enter master node
$ vagrant ssh
# now inside master vm
$ kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
cl2master   Ready    master   22h   v1.11.3
cl2node1    Ready    <none>   21m   v1.11.3
Type "exit" to leave the master and enter GitBash again.
OK, now we have a cluster consisting of one master and one worker node. Adding the second worker node is exactly the same procedure as adding the first. Just make sure to change into the "cl2node2" directory:
# now in GitBash - cl2master directory...
$ cd ../cl2node2
# do the "vagrant up"
$ vagrant up
... lots of noise...
...
default: Complete!
default: kubelet Dienst starten
default: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
default: Schritt 9 erledigt
default: Step 9 done
default: ------------------
default: Schritt 10: schon mal die wichtigsten Images pullen
default: Step 10: pulling down some images for later
default: ===================================================
default: [config/images] Pulled k8s.gcr.io/kube-apiserver-amd64:v1.11.3
default: [config/images] Pulled k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
default: [config/images] Pulled k8s.gcr.io/kube-scheduler-amd64:v1.11.3
default: [config/images] Pulled k8s.gcr.io/kube-proxy-amd64:v1.11.3
default: [config/images] Pulled k8s.gcr.io/pause:3.1
default: [config/images] Pulled k8s.gcr.io/etcd-amd64:3.2.18
default: [config/images] Pulled k8s.gcr.io/coredns:1.1.3
default: Schritt 10 erledigt
default: Step 10 done
default: ------------------
# enter the new node via ssh to run the "Join-Command":
$ vagrant ssh
# now inside the new node...
$ sudo kubeadm join 192.168.2.10:6443 --token 14q1vx.z9nxyuc49e9v7rot --discovery-token-ca-cert-hash sha256:c9f757102120d5f776ddb05ac533ee3fb5e5006773f81953ed4753e8000da775
...
This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
Let's check again on the master node:
# exit the node2-box:
$ exit
# change into master node directory
$ cd ../cl2master/
# enter the master node via ssh
$ vagrant ssh
# now inside master node vm...
$ kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
cl2master   Ready    master   23h   v1.11.3
cl2node1    Ready    <none>   1h    v1.11.3
cl2node2    Ready    <none>   32m   v1.11.3
Summary
Now you have a Kubernetes cluster consisting of one master and two worker nodes running on your local machine.
The respective VMs are configured like so:
- master node
- hostname: cl2master
- ip: 192.168.2.10
- worker node 1
- hostname: cl2node1
- ip: 192.168.2.11
- worker node 2
- hostname: cl2node2
- ip: 192.168.2.12
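Since each node lives in its own directory, suspending or shutting down the whole cluster is just a matter of visiting each directory in turn. A small sketch, assuming you are in the "k8s-local" directory:

```shell
# Shut down all three VMs without destroying them:
for dir in cl2master cl2node1 cl2node2; do
  (cd "$dir" && vagrant halt)
done

# Later, bring them back up the same way with "vagrant up" in each directory.
```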
The next post will discuss the Vagrantfiles for master and worker nodes respectively.
Disclaimer
The descriptions above have worked for me every time up to the time of this writing. Nevertheless, I can't take any responsibility if something doesn't work for you now or in the future.
See also:
- Part II (Setup Steps explained)