A few months ago I wanted to play with Kubernetes, and it had to look like the real world: a cluster with a master and several worker nodes. This can be done in the various clouds, but that probably costs money, so why not use VirtualBox VMs with Vagrant? And I knew Oracle has some nice Vagrant builds, ready to deploy.

But I had to log in to the Oracle Container Registry and do all kinds of things manually. But…. as of October 2019, the Oracle Container Registry no longer requires authentication for open source projects, and I noticed they changed the Vagrant build accordingly.

This post covers the building of a three-node Kubernetes cluster with Vagrant. I mostly followed the documentation at https://github.com/oracle/vagrant-boxes/tree/master/Kubernetes, by the way. And as a bonus, I installed HELM and a HELM plugin on it as a test.

My environment:

– Windows 10 laptop

– Internet-access

– 16GB memory

3 Steps:

  1. Download and install Vagrant, VirtualBox, and get the Vagrant builds from Oracle
  2. Create 3 VMs with Oracle Linux and Docker on them with Vagrant.
  3. Install HELM

1. Download and install software and Vagrant builds
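There is not much to this step: install the two tools and fetch Oracle's repository. A minimal sketch, assuming you install Vagrant and VirtualBox with their normal Windows installers first; the guard and the echo are just illustration:

```shell
# Step 1 as commands: Vagrant and VirtualBox come from their usual
# installers; the Vagrant builds come from Oracle's GitHub repository.
repo_url="https://github.com/oracle/vagrant-boxes.git"
repo_dir=$(basename "$repo_url" .git)   # the clone lands in ./vagrant-boxes

if [ ! -d "$repo_dir" ]; then
    git clone "$repo_url" || echo "clone failed - retry with network access"
fi
echo "Kubernetes Vagrantfile lives in: $repo_dir/Kubernetes"
```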

2. Create 3 VMs with Oracle Linux and Docker on them with Vagrant

cd ……\vagrant-boxes\Kubernetes

# vagrant up

Bringing machine 'master' up with 'virtualbox' provider...

Bringing machine 'worker1' up with 'virtualbox' provider...

Bringing machine 'worker2' up with 'virtualbox' provider...

This can take a while: about 35 minutes on my laptop.

worker2: This node has joined the cluster:
worker2: * Certificate signing request was sent to apiserver and a response was received.
worker2: * The Kubelet was informed of the new secure connection details.
worker2: Run 'kubectl get nodes' on the master to see this node join the cluster.
worker2: /tmp/vagrant-shell: Worker node ready
==> worker2: Configuring proxy for Docker...

And there should be 3 VMs running!

# vagrant status

Current machine states:

master running (virtualbox)

worker1 running (virtualbox)

worker2 running (virtualbox)
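If you would rather script this check than eyeball it, the status output is easy to parse. A small sketch; the sample text below is the output from above, standing in for a live `vagrant status` call:

```shell
# Count machines that `vagrant status` reports as running.
# The sample is the output shown above; on a live setup, use:
#   status_output=$(vagrant status)
status_output='master running (virtualbox)
worker1 running (virtualbox)
worker2 running (virtualbox)'

running=$(printf '%s\n' "$status_output" | grep -c ' running ')
echo "machines running: $running"   # machines running: 3
```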

Let’s log in and see what we’ve got:

# vagrant ssh master

Last login: Thu Apr 18 23:55:36 2019 from

Welcome to Oracle Linux Server release 7.6 (GNU/Linux 4.14.35-1844.4.5.el7uek.x86_64)

The Oracle Linux End-User License Agreement can be viewed here:

* /usr/share/eula/eula.en_US

For additional packages, updates, documentation and community help, see:

* http://yum.oracle.com/


Check on the master node:

[vagrant@master ~]$ kubectl get nodes

NAME                 STATUS   ROLES    AGE     VERSION
master.vagrant.vm    Ready    master   33m     v1.12.10+1.0.10.el7
worker1.vagrant.vm   Ready    <none>   13m     v1.12.10+1.0.10.el7
worker2.vagrant.vm   Ready    <none>   8m37s   v1.12.10+1.0.10.el7
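The same goes for the node check: a few lines of awk confirm that every node reports Ready. The sample text is the listing above; on the master you would feed it `kubectl get nodes --no-headers` instead:

```shell
# Confirm all nodes are Ready, parsing the STATUS column (field 2).
nodes='master.vagrant.vm    Ready    master   33m     v1.12.10+1.0.10.el7
worker1.vagrant.vm   Ready    <none>   13m     v1.12.10+1.0.10.el7
worker2.vagrant.vm   Ready    <none>   8m37s   v1.12.10+1.0.10.el7'

ready=$(printf '%s\n' "$nodes"     | awk '$2 == "Ready" { n++ } END { print n+0 }')
not_ready=$(printf '%s\n' "$nodes" | awk '$2 != "Ready" { n++ } END { print n+0 }')
echo "ready: $ready, not ready: $not_ready"   # ready: 3, not ready: 0
```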


[vagrant@master ~]$ kubectl cluster-info

Kubernetes master is running at
KubeDNS is running at


[vagrant@master ~]$ kubectl get all -n kube-system

NAME                                            READY   STATUS    RESTARTS   AGE
pod/coredns-6d5cc884f4-2j767                    1/1     Running   0          48m
pod/coredns-6d5cc884f4-rtqwv                    1/1     Running   0          46m
pod/etcd-master.vagrant.vm                      1/1     Running   0          46m
pod/kube-apiserver-master.vagrant.vm            1/1     Running   1          46m
pod/kube-controller-manager-master.vagrant.vm   1/1     Running   1          46m
pod/kube-flannel-ds-rgl82                       1/1     Running   0          28m
pod/kube-flannel-ds-twdsg                       1/1     Running   0          48m
pod/kube-flannel-ds-xsqsn                       1/1     Running   0          23m
pod/kube-proxy-f7hwk                            1/1     Running   0          28m
pod/kube-proxy-mz526                            1/1     Running   0          23m
pod/kube-proxy-pg6v6                            1/1     Running   0          48m
pod/kube-scheduler-master.vagrant.vm            1/1     Running   1          46m
pod/kubernetes-dashboard-f6b58ff9c-rzvtg        1/1     Running   0          48m


NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/kube-dns               ClusterIP      <none>        53/UDP,53/TCP   48m
service/kubernetes-dashboard   ClusterIP   <none>        443/TCP         48m


NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                   AGE
daemonset.apps/kube-flannel-ds   3         3         3       3            3           beta.kubernetes.io/arch=amd64   48m
daemonset.apps/kube-proxy        3         3         3       3            3           <none>                          48m


NAME                                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns                2         2         2            2           48m
deployment.apps/kubernetes-dashboard   1         1         1            1           48m


NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-6d5cc884f4               2         2         2       48m
replicaset.apps/coredns-85d6cff8d8               0         0         0       48m
replicaset.apps/kubernetes-dashboard-f6b58ff9c   1         1         1       48m

And you’ve got a working three-node Kubernetes cluster. That’s all!
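Since `vagrant up` took half an hour here and the workers joined minutes apart, a polling helper is handy when scripting against the cluster. A generic sketch; `all_nodes_ready` is a hypothetical check of my own, not something the Vagrantfile provides:

```shell
# Retry a check until it succeeds or the attempts run out.
# Usage: wait_for <tries> <delay-seconds> <command...>
wait_for() {
    tries=$1; delay=$2; shift 2
    i=0
    while [ "$i" -lt "$tries" ]; do
        "$@" && return 0
        i=$((i + 1))
        sleep "$delay"
    done
    return 1
}

# Hypothetical check for the master node: zero non-Ready nodes.
all_nodes_ready() {
    [ "$(kubectl get nodes --no-headers 2>/dev/null \
          | awk '$2 != "Ready" { n++ } END { print n+0 }')" -eq 0 ]
}

# On the master you might run: wait_for 30 10 all_nodes_ready
wait_for 3 0 true && echo "check passed"   # check passed
```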

3. Bonus, install HELM and a HELM plugin

First I’d like to have GIT, as root:

yum install git -y

Then I installed HELM (no need for Git yet, by the way), following the Helm docs, with a service account for Tiller:

[vagrant@master ~]$ sudo su -
Last login: Thu Apr 18 23:55:39 UTC 2019 on pts/0

[root@master ~]# curl -L https://git.io/get_helm.sh | bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0
100  7150  100  7150    0     0   2747      0  0:00:02  0:00:02 --:--:-- 5152k

Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.


[root@master ~]# exit
[vagrant@master ~]$ kubectl create serviceaccount --namespace kube-system tiller
serviceaccount/tiller created
[vagrant@master ~]$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
[vagrant@master ~]$ helm init --service-account tiller
Creating /home/vagrant/.helm
Creating /home/vagrant/.helm/repository
Creating /home/vagrant/.helm/repository/cache
Creating /home/vagrant/.helm/repository/local
Creating /home/vagrant/.helm/plugins
Creating /home/vagrant/.helm/starters
Creating /home/vagrant/.helm/cache/archive
Creating /home/vagrant/.helm/repository/repositories.yaml
Adding stable repo with URL:

Adding local repo with URL:

$HELM_HOME has been configured at /home/vagrant/.helm.


Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.


Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://v2.helm.sh/docs/securing_installation/

Is there a pod with Tiller in it:

[vagrant@master ~]$ kubectl get pods --namespace kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
coredns-6d5cc884f4-2j767                    1/1     Running   0          60m
coredns-6d5cc884f4-rtqwv                    1/1     Running   0          58m
etcd-master.vagrant.vm                      1/1     Running   0          59m
kube-apiserver-master.vagrant.vm            1/1     Running   1          59m
kube-controller-manager-master.vagrant.vm   1/1     Running   1          58m
kube-flannel-ds-rgl82                       1/1     Running   0          41m
kube-flannel-ds-twdsg                       1/1     Running   0          60m
kube-flannel-ds-xsqsn                       1/1     Running   0          36m
kube-proxy-f7hwk                            1/1     Running   0          41m
kube-proxy-mz526                            1/1     Running   0          36m
kube-proxy-pg6v6                            1/1     Running   0          60m
kube-scheduler-master.vagrant.vm            1/1     Running   1          58m
kubernetes-dashboard-f6b58ff9c-rzvtg        1/1     Running   0          60m
tiller-deploy-6fdd7f7cfd-rfltb              1/1     Running   0          5m45s
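That check, too, can be scripted. A sketch using two lines from the listing above as sample input (the pod name hash will differ per run); live, you would pipe `kubectl get pods --namespace kube-system` instead:

```shell
# Is there a Running tiller-deploy pod? Name is field 1, status field 3.
pods='kubernetes-dashboard-f6b58ff9c-rzvtg        1/1     Running   0          60m
tiller-deploy-6fdd7f7cfd-rfltb              1/1     Running   0          5m45s'

tiller=$(printf '%s\n' "$pods" \
    | awk '/^tiller-deploy-/ && $3 == "Running" { n++ } END { print n+0 }')
echo "tiller pods running: $tiller"   # tiller pods running: 1
```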

Let’s see if HELM works by installing a plugin from Adam Reese.

[vagrant@master ~]$ helm plugin install https://github.com/adamreese/helm-env

Print out the helm environment.

helm env [OPTIONS]

--vars-only      only print environment variables
-q, --quiet      don't print headers

Installed plugin: env

What does this plugin do?

[vagrant@master ~]$ helm env

------------------------------------------------------------------[Helm environment]


-------------------------------------------------------------------[kubectl config]

current-context: kubernetes-admin@kubernetes

And that’s it. You should be ready to go.