I’ve been using VirtualBox for a while for experiments that require a cluster of machines. In some cases, my VMs need to reach a private network through a VPN client on the host (e.g. Cisco AnyConnect). This works most of the time, but occasionally it doesn’t. I once found myself ready for a demo when, about 30 minutes before the show, everything fell apart: after a (forced) host reboot, my VMs could not connect to the private network anymore, failing with a helpful “unknown host” message.
I’m running macOS and VirtualBox 5, and here are the steps I took to put the pieces back together.
I’m recording the “magic” command that saved my demo, mostly for my own reference and in case the Force decides to challenge you with a similar situation.
Shut down the VM(s): $ sudo shutdown now
Run: $ VBoxManage modifyvm <my-vm-name> --natdnshostresolver1 on
Start the VM(s)
Take a deep breath, feel thankful, and get ready for what’s next.
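If you have more than one VM to fix, a small loop over VBoxManage saves some typing. This is just a sketch; it assumes every registered VM should get the setting and that the NAT adapter is adapter 1, so adjust the index if yours differs:
# enable the host DNS resolver for NAT adapter 1 on every registered VM
VBoxManage list vms | awk -F'"' '{print $2}' | while read -r vm; do
  VBoxManage modifyvm "$vm" --natdnshostresolver1 on
done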
Although the minimum infrastructure requirements for an OpenShift cluster are 4 CPUs and 16 GB of RAM for the master, 8 GB of RAM for the nodes, and quite a bit more, I decided to give it a go on my poor little Mac, using VirtualBox.
Here is what the cluster will look like: one master (192.168.56.7), one infra node (192.168.56.9), and one compute node (192.168.56.8), all on a shared host-only network.
Prepare the VMs
In this experiment, I’m using VirtualBox 6.0 for Mac and the CentOS 7 Minimal base image for all nodes in the cluster. First we need to make sure all the nodes can communicate with each other and with the internet. I use a NAT adapter to give the VMs internet connectivity through the shared internet connection on my Mac, and attach all VMs to a Host-only adapter so they can talk to each other and so I can reach them (e.g. over SSH) from my Mac. For more details on how to do that, refer to my previous post, where I set up a similar topology for my IBM Cloud Private experiment, here.
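If you prefer the command line over the VirtualBox GUI, the adapter wiring can be scripted roughly as follows. This is only a sketch; the VM names and the vboxnet0 interface name are assumptions, so adjust them to your setup:
# create a host-only network on the Mac (usually comes up as vboxnet0)
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0
# give every VM a NAT adapter for internet access and a host-only adapter
# for VM-to-VM and Mac-to-VM (SSH) traffic
for vm in okd-master okd-infra okd-compute; do
  VBoxManage modifyvm "$vm" --nic1 nat
  VBoxManage modifyvm "$vm" --nic2 hostonly --hostonlyadapter2 vboxnet0
done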
Launch the OS installation by following the instructions as if you were installing a normal CentOS machine. In this case, I manually created the partitions as follows:
And here is the network configuration ($ nmtui) for the interface attached to the Host-only adapter on the master VM, which you can replicate on the remaining two nodes (compute: 192.168.56.8 and infra: 192.168.56.9) accordingly:
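If you would rather script this than click through nmtui, nmcli gives the same result. A sketch only: it assumes the host-only NIC and its connection are both named enp0s8, so check ip a and nmcli con show first:
# master node: static address on the host-only interface, no gateway needed
# (internet traffic goes out through the NAT adapter)
nmcli con mod enp0s8 ipv4.method manual ipv4.addresses 192.168.56.7/24 connection.autoconnect yes
nmcli con up enp0s8
# repeat with 192.168.56.8/24 on the compute node and 192.168.56.9/24 on the infra node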
Set up the base tools
Once you have the 3 VMs prepared with CentOS 7 installed and configured, we can install the base tools necessary for the installation process on all nodes.
Put this script into a bash file:
$ vi base_bash.sh
#!/bin/bash
# simple bash script to install base packages for OKD v3.9
sudo yum -y update
# Install the CentOS OpenShift Origin v3.9 repo & all base packages
sudo yum -y install centos-release-openshift-origin39 wget git net-tools \
bind-utils yum-utils iptables-services bridge-utils bash-completion \
kexec-tools sos psacct vim mlocate
# create .ssh folder in /root. Update the path if you plan to use a non-root
# user with Ansible.
mkdir -p /root/.ssh
# create passwordless ssh key for root.
ssh-keygen -t rsa \
-f /root/.ssh/id_rsa -N ''
sudo yum -y update
# Install the Extra Packages for Enterprise Linux (EPEL) repository
sudo yum -y install \
https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# disable EPEL repo to prevent package conflicts
sudo sed -i -e "s/^enabled=1/enabled=0/" /etc/yum.repos.d/epel.repo
# Install pyOpenSSL from the EPEL repo
sudo yum -y --enablerepo=epel install pyOpenSSL
# install ansible-2.4.3.0 from CentOS archives
sudo yum -y install \
https://cbs.centos.org/kojifiles/packages/ansible/2.4.3.0/1.el7/noarch/ansible-2.4.3.0-1.el7.noarch.rpm
sudo yum -y install \
https://cbs.centos.org/kojifiles/packages/ansible/2.4.3.0/1.el7/noarch/ansible-doc-2.4.3.0-1.el7.noarch.rpm
# Reboot system to apply any kernel updates
sudo reboot
Execute the script:
$ bash base_bash.sh
OpenShift requires wildcard DNS resolution in order to resolve OpenShift routes. This can be configured either with an internal DNS resolver (e.g. dnsmasq) or with a public wildcard DNS service like xip.io or nip.io. To keep things simple, I use the xip.io option. With xip.io, a DNS entry like something.cool.<IP_ADDRESS>.xip.io resolves to IP_ADDRESS (the node needs to be connected to the internet). For example:
[root@master ~]# ping -c3 duyhard.master.192.168.56.7.xip.io
PING duyhard.master.192.168.56.7.xip.io (192.168.56.7) 56(84) bytes of data.
64 bytes from master (192.168.56.7): icmp_seq=1 ttl=64 time=0.041 ms
64 bytes from master (192.168.56.7): icmp_seq=2 ttl=64 time=0.051 ms
64 bytes from master (192.168.56.7): icmp_seq=3 ttl=64 time=0.050 ms
Now, run these commands on each node accordingly to set the proper hostnames:
On master: $ hostnamectl set-hostname master.192.168.56.7.xip.io
On infra: $ hostnamectl set-hostname infra.192.168.56.9.xip.io
On compute: $ hostnamectl set-hostname compute.192.168.56.8.xip.io
Then edit the /etc/hosts file on all nodes to look like this:
$ vi /etc/hosts
192.168.56.7 master.192.168.56.7.xip.io master
192.168.56.9 infra.192.168.56.9.xip.io infra
192.168.56.8 compute.192.168.56.8.xip.io compute
Now enable SSH access between all the nodes by copying each node’s public key to the other ones; run a command like the one sketched below on every node.
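This assumes root access and the xip.io hostnames we just set; you’ll be prompted for each node’s root password:
# copy this node's public key to all nodes in the cluster
# (including itself, which is handy when running Ansible from the master)
for host in master.192.168.56.7.xip.io infra.192.168.56.9.xip.io compute.192.168.56.8.xip.io; do
  ssh-copy-id -i /root/.ssh/id_rsa.pub root@"$host"
done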
Now, let’s install OpenShift using openshift-ansible. The OpenShift (v3.9) distribution we’re about to install is OKD, the upstream version of OpenShift, which is fully open source and serves as the basis for OpenShift Dedicated, OpenShift Online, and OpenShift Enterprise.
$ yum install -y openshift-ansible
Configure the inventory file for the OpenShift installation
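A minimal /etc/ansible/hosts for this three-node layout looks roughly like the sketch below. The hostnames and the HTPasswd file path match what we set up in this post; treat everything else, including the disabled memory/disk checks for an under-spec lab, as a starting point to adapt:
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
# connect as root, matching the SSH keys we distributed earlier
ansible_ssh_user=root
openshift_deployment_type=origin
openshift_release=v3.9
# wildcard subdomain for application routes, pointing at the infra node
openshift_master_default_subdomain=apps.192.168.56.9.xip.io
# HTPasswd identity provider backed by /etc/origin/master/htpasswd
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
# skip the hardware checks since this lab is well below the official minimums
openshift_disable_check=memory_availability,disk_availability,docker_storage

[masters]
master.192.168.56.7.xip.io

[etcd]
master.192.168.56.7.xip.io

[nodes]
master.192.168.56.7.xip.io openshift_schedulable=true
infra.192.168.56.9.xip.io openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
compute.192.168.56.8.xip.io openshift_node_labels="{'region': 'primary', 'zone': 'default'}"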
We’re using HTPasswd as the identity provider for authenticating access to the cluster, so once the installation finishes we’ll create a user and store the credentials in /etc/origin/master/htpasswd, the path configured in the inventory file.
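With the inventory in place, kick off the installation from the master. With the openshift-ansible RPM the playbooks typically live under /usr/share/ansible/openshift-ansible, so adjust the paths below if yours differ:
$ cd /usr/share/ansible/openshift-ansible/playbooks
# prepare the hosts (repos, Docker, firewall, etc.), then run the full deployment
$ ansible-playbook -i /etc/ansible/hosts prerequisites.yml
$ ansible-playbook -i /etc/ansible/hosts deploy_cluster.yml
Once the deploy playbook finishes, verify that all three nodes have joined the cluster: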
[root@master playbooks]# oc get nodes
NAME STATUS ROLES AGE VERSION
compute.192.168.56.8.xip.io Ready compute 8m v1.9.1+a0ce1bc657
infra.192.168.56.9.xip.io Ready <none> 8m v1.9.1+a0ce1bc657
master.192.168.56.7.xip.io Ready master 2h v1.9.1+a0ce1bc657
Create a new user to access your cluster
[root@master playbooks]# htpasswd -b /etc/origin/master/htpasswd \
> duynguyen spiritedengineering.net
Adding password for user duynguyen
Now you can access the web console in your browser: https://master.192.168.56.7.xip.io:8443
After providing the username and password that you created with HTPasswd, you will see the beautiful OpenShift catalog.
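The same credentials also work with the oc CLI, which is already installed on the master; for example:
$ oc login https://master.192.168.56.7.xip.io:8443 -u duynguyen
$ oc whoami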
In the next posts, I will discuss how to develop and deploy applications on the OpenShift cluster. Stay tuned!