Note: You must have VirtualBox and Vagrant configured at this point
Open a terminal application (on Windows use PowerShell). All commands in this guide are executed from the terminal.
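If you want to verify the installation first, both of the commands below should print a version number (assuming both tools are on your PATH):
vagrant --version
VBoxManage --version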
Clone this GitHub repository:
git clone https://github.com/kodekloudhub/certified-kubernetes-administrator-course.git
Then cd into the virtualbox directory:
cd kubeadm-clusters/virtualbox
Run vagrant up to create the virtual machines:
vagrant up
Bridged networking makes the VMs appear as hosts directly on your local network. This means that you will be able to use your own browser to connect to any NodePort services you create in the cluster.
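For example, assuming one of your nodes received the address 192.168.1.101 from your router (the real addresses are displayed during deployment) and you later create a NodePort service on port 30080, you could reach it directly from your workstation, either in the browser or with:
curl http://192.168.1.101:30080
Both the address and the port here are placeholders for illustration.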
If your workstation has more than one network interface capable of creating a bridge, Vagrant may stop and ask which one to use if it cannot determine the best interface itself. It should normally work this out without asking; if it does ask, the "best" interface is the one used to connect to your broadband router. On laptops, this is normally the Wi-Fi adapter, which should be easily identifiable in the list. The example below is from a Windows desktop computer with a wired network adapter.
Which of the two choices do you think is correct?
==> controlplane: Available bridged network interfaces:
1) Intel(R) Ethernet Connection (2) I219-V
2) Hyper-V Virtual Ethernet Adapter
==> controlplane: When choosing an interface, it is usually the one that is
==> controlplane: being used to connect to the internet.
==> controlplane:
controlplane: Which interface should the network bridge to?
At the end of the deployment, the output will tell you how to access NodePort services from your browser once you have configured Kubernetes. Make a note of this.
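For example, once the cluster is built you could create and expose a simple NodePort service with standard kubectl commands (nginx here is just an illustration):
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=NodePort --port=80
kubectl get service nginx    # shows the assigned NodePort, e.g. 80:3XXXX/TCP
Then browse to http://<node-ip>:<assigned-port> using the access details noted above.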
If you encountered issues starting the VMs, you can try NAT mode instead. Note that in NAT mode you will not be able to connect to your NodePort services using your browser without setting up port forwarding rules in the VirtualBox UI (a command-line sketch of such a rule follows the steps below).
First, destroy the failed deployment:
vagrant destroy -f
Then edit the Vagrantfile, changing BUILD_MODE = "BRIDGE" to BUILD_MODE = "NAT" at line 10, and run vagrant up again.
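As an illustration, a forwarding rule can also be created from the command line with VBoxManage instead of the UI; the VM name and ports below are placeholders (the real VM name is visible in the VirtualBox Manager):
VBoxManage controlvm "<vm-name>" natpf1 "nodeport,tcp,,30080,,30080"
This forwards host port 30080 to guest port 30080 on a running VM.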
There are two ways to SSH into the nodes:
1. From the directory where you ran the vagrant up command, run vagrant ssh <vm>, e.g. vagrant ssh controlplane. This is the easiest way, as it requires no configuration.
2. Use your favourite SSH terminal tool (PuTTY, MobaXterm, etc.) with the IP addresses noted above. Username and password based SSH is disabled by default, so you must use the private key described below.
Vagrant generates a private key for each of these VMs. It is placed under the .vagrant folder (in the directory from which you ran the vagrant up command) at the following path for each VM:
Private Key Path: .vagrant/machines/<machine name>/virtualbox/private_key
Username/Password: vagrant/vagrant
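For example, a connection to the controlplane node with a standalone SSH client might look like this (the IP address is a placeholder for the one reported for your VM):
ssh -i .vagrant/machines/controlplane/virtualbox/private_key vagrant@192.168.1.101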
If any of the VMs failed to provision, or is not configured correctly, delete the VM using the command:
vagrant destroy <vm>
Then re-provision. Only the missing VMs will be re-provisioned:
vagrant up
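If you are unsure which VMs exist and what state they are in before destroying anything, run the following from the same directory:
vagrant status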
Sometimes the destroy does not delete the folder created for the VM, and VirtualBox throws an error similar to this:
VirtualBox error:
VBoxManage.exe: error: Could not rename the directory 'D:\VirtualBox VMs\ubuntu-bionic-18.04-cloudimg-20190122_1552891552601_76806' to 'D:\VirtualBox VMs\kubernetes-ha-worker-2' to save the settings file (VERR_ALREADY_EXISTS)
VBoxManage.exe: error: Details: code E_FAIL (0x80004005), component SessionMachine, interface IMachine, callee IUnknown
VBoxManage.exe: error: Context: "SaveSettings()" at line 3105 of file VBoxManageModifyVM.cpp
In such cases delete the VM, then delete the VM folder and then re-provision, e.g.
vagrant destroy node02
rmdir "<path-to-vm-folder>\node02"
vagrant up
Provisioning may also hang; this will most likely happen at "Waiting for machine to reboot". If it does, press CTRL+C, then kill the ruby process (one way to do this is shown after the commands below), or Vagrant will complain. Then delete the offending VM and re-provision it:
vagrant destroy <vm>
vagrant up
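One way to kill the ruby process, depending on your operating system (this kills every ruby process, which is normally fine assuming Vagrant is the only Ruby application running):
# Windows (PowerShell)
Stop-Process -Name ruby -Force
# Linux/macOS
pkill -f ruby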
You do not need to complete the entire lab in one session. If you need to power off your computer, you may shut down and resume the environment as follows.
To shut down, run the following. This will gracefully shut down all the VMs in the reverse order to which they were started:
vagrant halt
To power on again:
vagrant up
When you have finished with your cluster and want to reclaim the resources, run the following:
vagrant destroy -f
Next: Connectivity
Prev: Prerequisites