A few months ago I wanted to play with Kubernetes, and it had to look like the real world: a cluster with a master and several worker nodes. This can be done in the various clouds, but that probably costs money, so why not use VirtualBox with Vagrant? And I knew Oracle has some nice Vagrant boxes, ready to deploy.
Back then I had to log in to the Oracle Container Registry and do all kinds of things manually. But as of October 2019, the Oracle Container Registry no longer requires authentication for open source projects, and I noticed they changed the Vagrant build accordingly.
This post covers building a three-node Kubernetes cluster with Vagrant, mostly following the documentation at https://github.com/oracle/vagrant-boxes/tree/master/Kubernetes. And as a bonus, I installed Helm and a Helm plugin on it as a test.
My environment:
– Windows 10 laptop
– Internet access
– 16 GB memory
Three steps:
- Download and install Vagrant and VirtualBox, and get the Vagrant boxes from Oracle
- Create 3 VMs with Oracle Linux and Docker on them with Vagrant
- Install Helm
1. Download and install software and Vagrant boxes
- Download and install Vagrant: https://www.vagrantup.com/downloads.html
- Download and install VirtualBox: https://www.virtualbox.org/wiki/Downloads
- Download the Vagrant boxes from the Oracle site: git clone https://github.com/oracle/vagrant-boxes (and ‘cd vagrant-boxes\Kubernetes’, as shown below)
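On a Windows command prompt, getting the boxes looks roughly like this (assuming Git for Windows is already installed and you start in whatever directory you keep your projects in):
# git clone https://github.com/oracle/vagrant-boxes.git
# cd vagrant-boxes\Kubernetes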
2. Create 3 VMs with Oracle Linux and Docker on them with Vagrant
cd ……\vagrant-boxes\Kubernetes
# vagrant up
Bringing machine 'master' up with 'virtualbox' provider...
Bringing machine 'worker1' up with 'virtualbox' provider...
Bringing machine 'worker2' up with 'virtualbox' provider...
…
This can take a while: about 35 minutes on my laptop.
…
worker2: This node has joined the cluster:
worker2: * Certificate signing request was sent to apiserver and a response was received.
worker2: * The Kubelet was informed of the new secure connection details.
worker2:
worker2: Run ‘kubectl get nodes’ on the master to see this node join the cluster.
worker2: /tmp/vagrant-shell: Worker node ready
==> worker2: Configuring proxy for Docker…
And there should be 3 VMs running!
# vagrant status
Current machine states:
master running (virtualbox)
worker1 running (virtualbox)
worker2 running (virtualbox)
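Since these are just Vagrant-managed VirtualBox VMs, the usual Vagrant commands apply whenever you want to stop or clean up the cluster later on (standard Vagrant behaviour, nothing specific to the Oracle boxes):
# vagrant halt         (stop all three VMs, keeping their state)
# vagrant up           (bring them back up again)
# vagrant destroy -f   (throw the VMs away completely)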
Let’s log in and see what we’ve got:
# vagrant ssh master
Last login: Thu Apr 18 23:55:36 2019 from 10.0.2.2
Welcome to Oracle Linux Server release 7.6 (GNU/Linux 4.14.35-1844.4.5.el7uek.x86_64)
The Oracle Linux End-User License Agreement can be viewed here:
* /usr/share/eula/eula.en_US
For additional packages, updates, documentation and community help, see:
* http://yum.oracle.com/
Check on the master node:
[vagrant@master ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.vagrant.vm Ready master 33m v1.12.10+1.0.10.el7
worker1.vagrant.vm Ready <none> 13m v1.12.10+1.0.10.el7
worker2.vagrant.vm Ready <none> 8m37s v1.12.10+1.0.10.el7
[vagrant@master ~]$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:6443
KubeDNS is running at https://192.168.99.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
[vagrant@master ~]$ kubectl get all -n kube-system
NAME READY STATUS RESTARTS AGE
pod/coredns-6d5cc884f4-2j767 1/1 Running 0 48m
pod/coredns-6d5cc884f4-rtqwv 1/1 Running 0 46m
pod/etcd-master.vagrant.vm 1/1 Running 0 46m
pod/kube-apiserver-master.vagrant.vm 1/1 Running 1 46m
pod/kube-controller-manager-master.vagrant.vm 1/1 Running 1 46m
pod/kube-flannel-ds-rgl82 1/1 Running 0 28m
pod/kube-flannel-ds-twdsg 1/1 Running 0 48m
pod/kube-flannel-ds-xsqsn 1/1 Running 0 23m
pod/kube-proxy-f7hwk 1/1 Running 0 28m
pod/kube-proxy-mz526 1/1 Running 0 23m
pod/kube-proxy-pg6v6 1/1 Running 0 48m
pod/kube-scheduler-master.vagrant.vm 1/1 Running 1 46m
pod/kubernetes-dashboard-f6b58ff9c-rzvtg 1/1 Running 0 48m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 48m
service/kubernetes-dashboard ClusterIP 10.97.252.187 <none> 443/TCP 48m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/kube-flannel-ds 3 3 3 3 3 beta.kubernetes.io/arch=amd64 48m
daemonset.apps/kube-proxy 3 3 3 3 3 <none> 48m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/coredns 2 2 2 2 48m
deployment.apps/kubernetes-dashboard 1 1 1 1 48m
NAME DESIRED CURRENT READY AGE
replicaset.apps/coredns-6d5cc884f4 2 2 2 48m
replicaset.apps/coredns-85d6cff8d8 0 0 0 48m
replicaset.apps/kubernetes-dashboard-f6b58ff9c 1 1 1 48m
And you’ve got a working three-node Kubernetes cluster. That’s all!
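If you want a quick smoke test before moving on, a throwaway nginx deployment does the job (my own addition, not part of the Oracle setup; the image comes from Docker Hub, so the Docker proxy configured during provisioning matters here):
[vagrant@master ~]$ kubectl create deployment nginx --image=nginx
[vagrant@master ~]$ kubectl expose deployment nginx --port=80 --type=NodePort
[vagrant@master ~]$ kubectl get pods -o wide
[vagrant@master ~]$ kubectl delete deployment,service nginx   # clean up afterwards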
3. Bonus: install Helm and a Helm plugin
First I’d like to have Git installed. As root:
yum install git -y
Then install Helm (no need for Git yet, by the way) using the Helm v2 documentation (see resources below), with a service account for Tiller:
[vagrant@master ~]$ sudo su -
Last login: Thu Apr 18 23:55:39 UTC 2019 on pts/0
[root@master ~]# curl -L https://git.io/get_helm.sh | bash
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0
100 7150 100 7150 0 0 2747 0 0:00:02 0:00:02 --:--:-- 5152k
Downloading https://get.helm.sh/helm-v2.16.5-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run ‘helm init’ to configure helm.
[root@master ~]# exit
logout
[vagrant@master ~]$ kubectl create serviceaccount --namespace kube-system tiller
serviceaccount/tiller created
[vagrant@master ~]$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
[vagrant@master ~]$ helm init --service-account tiller
Creating /home/vagrant/.helm
Creating /home/vagrant/.helm/repository
Creating /home/vagrant/.helm/repository/cache
Creating /home/vagrant/.helm/repository/local
Creating /home/vagrant/.helm/plugins
Creating /home/vagrant/.helm/starters
Creating /home/vagrant/.helm/cache/archive
Creating /home/vagrant/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/vagrant/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://v2.helm.sh/docs/securing_installation/
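On a throwaway lab cluster like this I just left Tiller with its default policy, but if you do want the TLS setup the output mentions, the init call would look something like this (a sketch only; the certificate file names are hypothetical and have to be generated first, as described in the securing guide linked above):
[vagrant@master ~]$ helm init --service-account tiller --tiller-tls --tiller-tls-verify --tiller-tls-cert tiller.cert.pem --tiller-tls-key tiller.key.pem --tls-ca-cert ca.cert.pem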
Is there a pod with Tiller in it?
[vagrant@master ~]$ kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d5cc884f4-2j767 1/1 Running 0 60m
coredns-6d5cc884f4-rtqwv 1/1 Running 0 58m
etcd-master.vagrant.vm 1/1 Running 0 59m
kube-apiserver-master.vagrant.vm 1/1 Running 1 59m
kube-controller-manager-master.vagrant.vm 1/1 Running 1 58m
kube-flannel-ds-rgl82 1/1 Running 0 41m
kube-flannel-ds-twdsg 1/1 Running 0 60m
kube-flannel-ds-xsqsn 1/1 Running 0 36m
kube-proxy-f7hwk 1/1 Running 0 41m
kube-proxy-mz526 1/1 Running 0 36m
kube-proxy-pg6v6 1/1 Running 0 60m
kube-scheduler-master.vagrant.vm 1/1 Running 1 58m
kubernetes-dashboard-f6b58ff9c-rzvtg 1/1 Running 0 60m
tiller-deploy-6fdd7f7cfd-rfltb 1/1 Running 0 5m45s
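With the Tiller pod running, a quick check that the Helm client can actually talk to it (my own sanity check, not from the Oracle documentation):
[vagrant@master ~]$ helm version        # should report both the Client and the Server (Tiller) version
[vagrant@master ~]$ helm repo update    # refresh the stable repo that helm init configured
[vagrant@master ~]$ helm search nginx   # search the configured repos for charts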
Let’s see if Helm works by installing a plugin from Adam Reese.
[vagrant@master ~]$ helm plugin install https://github.com/adamreese/helm-env
Print out the helm environment.
Usage:
helm env [OPTIONS]
Options:
--vars-only only print environment variables
-q, --quiet don't print headers
Installed plugin: env
What does this plugin do?
[vagrant@master ~]$ helm env
------------------------------------------------------------[Helm environment]
HELM_BIN=helm
HELM_HOME=/home/vagrant/.helm
HELM_PATH_CACHE=/home/vagrant/.helm/repository/cache
HELM_PATH_LOCAL_REPOSITORY=/home/vagrant/.helm/repository/local
HELM_PATH_REPOSITORY_FILE=/home/vagrant/.helm/repository/repositories.yaml
HELM_PATH_REPOSITORY=/home/vagrant/.helm/repository
HELM_PATH_STARTER=/home/vagrant/.helm/starters
HELM_PLUGIN_DIR=/home/vagrant/.helm/plugins/helm-env
HELM_PLUGIN=/home/vagrant/.helm/plugins
HELM_PLUGIN_NAME=env
KUBE_REPO_PREFIX=container-registry.oracle.com/kubernetes
TILLER_HOST=
TILLER_NAMESPACE=kube-system
------------------------------------------------------------[kubectl config]
current-context: kubernetes-admin@kubernetes
server: https://192.168.99.100:6443
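And per the usage text shown above, if you only want the variables without the headers:
[vagrant@master ~]$ helm env --vars-only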
And that’s it. You should be ready to go.
Regardz..
Resources:
- Download and install Vagrant: https://www.vagrantup.com/downloads.html
- Download and install VirtualBox: https://www.virtualbox.org/wiki/Downloads
- Download Vagrant boxes from the Oracle site: https://github.com/oracle/vagrant-boxes
- Installation documentation: https://github.com/oracle/vagrant-boxes/tree/master/Kubernetes
- Helm install: https://helm.sh/docs/using_helm/#installing-helm
- Env plugin by Adam Reese: https://github.com/adamreese/helm-env
Unable to install helm, and also: how to open the dashboard?
F0515 13:48:35.082140 1 storage_decorator.go:57] Unable to create storage backend: config (&{ /registry [https://127.0.0.1:2379] /etc/kubernetes/pki/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/etcd/ca.crt true true 1000 0xc000220090 5m0s 1m0s}), err (dial tcp 127.0.0.1:2379: connect: connection refused)
That doesn’t sound familiar to me at first sight. Something may have changed in the GitHub version; I have to investigate. It looks like an issue such as this one: https://github.com/kubernetes/kubernetes/issues/72102, “kube-apiserver 1.13.x refuses to work when first etcd-server is not available”. But I will rerun my post, when time is on my side, and see if I can get the same results. Regards.