Keep your hands dirty: carry your edge node around on a drone

The Internet of Things (IoT) has been around for a while, and it is gaining momentum day by day. Billions of devices are connected to the global network, and the number is growing exponentially. Very often, the tremendous amount of data generated by those connected devices is sent across long routes to the Cloud for processing. With edge computing, the data can be pre-processed and analyzed at the edge (close to where it was generated) to reduce unnecessary backhaul traffic to the Cloud, or even to drive decisions right at the edge.

In my research, I need to collect data samples from various positions and feed them as input to AI training happening at the edge. The refined/analyzed data is then sent to IBM Cloud for persistence and more advanced analysis as needed. The task would have been very challenging if I had to do it manually, but thanks to the affordable Tello EDU drone, I could automate it with very little coding effort.

I mounted an edge node, a Raspberry Pi Zero W with a set of microservices running inside, onto the drone and let it fly a pre-defined route automatically by deploying a simple Node-RED flow. The drone exposes a fairly comprehensive command API to program against. Using Node-RED may not look that appealing, but it is practical and does the job with very little code. The trick here is to "compose" the commands, then split them and put them into a queue for execution one by one.
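If you want to sanity-check the drone before building the flow, the same SDK can be driven from a shell. The Tello listens for plain-text commands over UDP on 192.168.10.1:8889 and answers "ok", which is exactly what the flow below relies on. Here is a minimal sketch with netcat, assuming your machine is on the drone's Wi-Fi and your nc build supports UDP and timeouts:

$ # Enter SDK mode first; the drone replies "ok" on success
$ echo -n "command" | nc -u -w 2 192.168.10.1 8889
$ echo -n "takeoff" | nc -u -w 2 192.168.10.1 8889
$ echo -n "forward 50" | nc -u -w 2 192.168.10.1 8889
$ echo -n "land" | nc -u -w 2 192.168.10.1 8889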

Node-RED flow for a drone mission with a pre-defined flight path

The outcome looks promising in a test flight:

Autopilot flight mission

Here is the full Node-RED flow that I deployed and ran:

[{"id":"4dbfac54.ca76e4","type":"tab","label":"Auto Pilot Flight Plan","disabled":false,"info":""},{"id":"1e1c37d9.fea328","type":"debug","z":"4dbfac54.ca76e4","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","x":949,"y":394,"wires":[]},{"id":"65815c4d.3b3774","type":"function","z":"4dbfac54.ca76e4","name":"Command queue","func":"// Use a message queue to handle this mission\nvar queue = context.get(\"queue\") || [];\nvar busy = context.get(\"busy\") || false;\n\n// Ready for processing next command\nif (msg.hasOwnProperty(\"ready\")) {\n    if (queue.length > 0) {\n        var message = queue.shift();\n        msg.payload = message.command;\n        return msg;\n    } else {\n        context.set(\"busy\", false);\n    }\n// This happens the first time the node is processed\n} else {\n    // This builds up the command queue\n    if (busy) {\n        queue.push(msg.payload);\n        context.set(\"queue\", queue);\n    // When the command queue is done building we pass the message to the next node and begin the mission\n    } else {\n        context.set(\"busy\", true);\n        msg.payload = msg.payload.command;\n        return msg;\n    }\n}\n\n// We only want messages to be passed based on the logic above\nreturn null;","outputs":1,"noerr":0,"x":659,"y":328,"wires":[["1e1c37d9.fea328","d5711baa.17ec8"]]},{"id":"7a8899f8.e50bc","type":"function","z":"4dbfac54.ca76e4","name":"Ready","func":"msg.ready = true;\nreturn msg;","outputs":1,"noerr":0,"x":521,"y":415,"wires":[["65815c4d.3b3774"]]},{"id":"d5711baa.17ec8","type":"udp out","z":"4dbfac54.ca76e4","name":"","addr":"192.168.10.1","iface":"","port":"8889","ipv":"udp4","outport":"52955","base64":false,"multicast":"false","x":932,"y":254,"wires":[]},{"id":"5d8260bc.25d148","type":"udp in","z":"4dbfac54.ca76e4","name":"","iface":"","port":"52955","ipv":"udp4","multicast":"false","group":"","datatype":"utf8","x":179,"y":505,"wires":[["6f1cadd6.e7effc"]]},{"id":"6f1cadd6.e7effc","type":"switch","z":"4dbfac54.ca76e4","name":"","property":"payload","propertyType":"msg","rules":[{"t":"eq","v":"ok","vt":"str"},{"t":"else"}],"checkall":"true","repair":false,"outputs":2,"x":329,"y":505,"wires":[["54b3260c.3f662"],[]]},{"id":"f54fadcc.3cdd7","type":"template","z":"4dbfac54.ca76e4","name":"Drone Mission","field":"payload","fieldType":"msg","format":"handlebars","syntax":"plain","template":"command\ntakeoff\nup 70\nforward 450\ncw 90\nforward 450\ncw 90\nforward 450\ncw 90\nforward 400\ncw 90\nforward 400\ncw 90\nforward 400\ndown 30\ncw 90\nforward 350\ncw 90\nforward 350\ncw 90\nforward 350\nup 50\ncw 90\nforward 300\ncw 90\nforward 300\ncw 90\nforward 300\ncw 90\nforward 250\ncw 90\nforward 250\ncw 90\nforward 250\nup 60\ncw 90\nforward 200\ncw 90\nforward 200\ncw 90\nforward 200\ndown 50\ncw 90\nforward 100\ncw 90\nforward 100\ncw 90\nforward 100\ncw 90\nforward 50\ncw 90\nforward 50\ncw 90\nforward 50\ncw 90\nup 120\nflip f\nforward 50\ncw 90\nforward 50\ncw 90\nforward 50\ncw 90\nflip f\nforward 100\ncw 90\nforward 100\ncw 90\nforward 100\ncw 90\nflip f\nforward 150\ncw 90\nforward 150\ncw 90\nforward 150\ncw 90\nforward 200\ncw 90\nforward 200\ncw 90\nforward 200\ncw 90\nforward 250\ncw 90\nforward 250\ncw 90\nforward 250\ncw 90\nforward 300\ncw 90\nforward 300\ncw 90\nforward 300\ncw 90\nforward 350\ncw 90\nforward 350\ncw 90\nforward 350\ncw 90\nforward 350\ncw 90\nland","output":"str","x":275,"y":368,"wires":[["d109059.e2b9578"]]},{"id":"d109059.e2b9578","type":"csv","z":"4dbfac54.ca76e4","name":"Split commands","sep":",","hdrin":"","hdrout":"","multi":"one","ret":"\\n","temp":"command","skip":"0","x":454,"y":219,"wires":[["65815c4d.3b3774"]]},{"id":"3da54070.a7b7f8","type":"inject","z":"4dbfac54.ca76e4","name":"Launch","topic":"","payload":"true","payloadType":"bool","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":125,"y":249,"wires":[["f54fadcc.3cdd7"]]},{"id":"f5ab9078.b0e4b","type":"comment","z":"4dbfac54.ca76e4","name":"Build the mission and send commands only when the drone is ready","info":"","x":265.5,"y":171,"wires":[]},{"id":"54b3260c.3f662","type":"delay","z":"4dbfac54.ca76e4","name":"","pauseType":"delay","timeout":"1","timeoutUnits":"seconds","rate":"1","nbRateUnits":"1","rateUnits":"second","randomFirst":"1","randomLast":"5","randomUnits":"seconds","drop":false,"x":497.5,"y":498,"wires":[["7a8899f8.e50bc"]]},{"id":"beb44409.95d2f8","type":"inject","z":"4dbfac54.ca76e4","name":"Abort Mayday Abort","topic":"","payload":"land","payloadType":"str","repeat":"","crontab":"","once":false,"onceDelay":"","x":227,"y":87,"wires":[["8eabbf5a.05afe"]]},{"id":"8eabbf5a.05afe","type":"change","z":"4dbfac54.ca76e4","name":"Emergency landing","rules":[{"t":"delete","p":"payload","pt":"msg"},{"t":"set","p":"payload.tellocmd","pt":"msg","to":"land","tot":"str"},{"t":"set","p":"payload.distance","pt":"msg","to":"0","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":504,"y":72,"wires":[["cc199cfa.0a5a2"]]},{"id":"cc199cfa.0a5a2","type":"function","z":"4dbfac54.ca76e4","name":"Send Tello Command","func":"var telloaction ;\n\nif( msg.payload.distance != \"0\") {\n   telloaction = new Buffer( msg.payload.tellocmd + ' '+ msg.payload.distance );\n} else {\n   telloaction = new Buffer( msg.payload.tellocmd );\n}\n\nmsg.payload = telloaction;\t\nreturn msg;\n","outputs":1,"noerr":0,"x":767,"y":87,"wires":[["d5711baa.17ec8"]]},{"id":"192bec1a.026024","type":"comment","z":"4dbfac54.ca76e4","name":"Put the commands in a queue to execute one by one","info":"","x":802,"y":436,"wires":[]}]

Put IBM Cloud Private into your … laptop

This tutorial walks you through the steps to set up an IBM Cloud Private cluster on your workstation (e.g. your laptop). You can extend it to a more advanced environment according to your needs.
The environment I'm using in this tutorial is macOS High Sierra 10.13.4 with VirtualBox 5.2.18 and IBM Cloud Private Community Edition. Data persistence and storage topics are not discussed in this tutorial, so all data will be stored ephemerally inside the VMs.

System requirements

The system requirements to install IBM Cloud Private (ICP) vary based on your architecture. ICP supports Linux 64-bit (Red Hat Enterprise Linux and Ubuntu 16.04 LTS) or Linux on IBM Power. In this tutorial, I use Ubuntu 16.04 LTS.
Regarding resource requirements, as a minimum:
  • Boot node: 4 GB RAM, 100 GB disk
  • Master node: 4 GB RAM, 151 GB disk
  • Proxy node: 4 GB RAM, 40 GB disk
  • Worker node: 4 GB RAM, 100 GB disk

Determine your cluster architecture

First, you need to decide on the architecture of your cluster. A standard configuration for a multi-node ICP cluster is:

  • A single master node
  • A single proxy node
  • Three worker nodes

In this tutorial, though, we will compact the architecture even more, down to just 3 virtual machines (VMs):

  • One VM hosts both the master and proxy nodes
  • Two VMs each host a worker node

Here is how it looks:

Deployment topology

Setup infrastructure

Configure VirtualBox network

Create internal private network for the nodes

As shown in the cluster architecture, I will create a Host-Only network interface to attach all the nodes to. With a Host-Only network adapter, I can access the nodes from my host:

$ VBoxManage hostonlyif create

After that command, VBoxManage prints the new interface's name (e.g. vboxnetX). Use it in the next command to configure the network IP:

$ VBoxManage hostonlyif ipconfig vboxnet3 --ip 172.0.1.1

Then configure the DHCP server:

$ VBoxManage dhcpserver add --ifname vboxnet3 --ip 172.0.1.1 --netmask 255.255.255.0 --lowerip 172.0.1.2 --upperip 172.0.1.99
$ VBoxManage dhcpserver modify --ifname vboxnet3 --enable

(The DHCP range stops at .99 so that the static addresses we assign later, 172.0.1.100–102, stay outside the pool.)
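You can double-check that the interface and DHCP server were created as expected (the exact names and counts will vary per machine):

$ VBoxManage list hostonlyifs
$ VBoxManage list dhcpservers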

Enable internet access from the VMs

We can either create a NAT network or use the default NAT option to allow the VMs to access the internet through the host. To keep it simple, I will attach a NAT adapter to each VM. We will do this once we have the VM provisioned.

Provision the VMs

We will provision the first VM (boot-master-proxy), set up the necessary software on it, then clone it to create the other two VMs, minimizing the repeated steps.
Download the Ubuntu image from here: http://releases.ubuntu.com/16.04/ubuntu-16.04.5-server-amd64.iso
Depending on how much resource you have on your host, provision your VM accordingly. We allocate 8 GB RAM and an 80 GB disk to the first VM.
Assuming you are done provisioning the VM from the Ubuntu image, go to the VM's Settings view in the VirtualBox GUI, navigate to Network, and attach 2 adapters to the VM as follows (a CLI equivalent is sketched after the screenshots):

NAT for internet access from the VM

Attach NAT adapter to the VM

Host-Only: select the network you created earlier, to enable access to the VM from the host

Attach Host-Only adapter to the VM
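If you prefer the command line over the GUI, the same two adapters can be attached with VBoxManage while the VM is powered off. This sketch assumes the VM is named boot-master-proxy and the Host-Only interface is vboxnet3, as elsewhere in this tutorial:

$ VBoxManage modifyvm boot-master-proxy --nic1 nat
$ VBoxManage modifyvm boot-master-proxy --nic2 hostonly --hostonlyadapter2 vboxnet3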
Now launch the VM and set up the OS following the instructions. Once the OS is ready, assign a static IP address to the VM by changing the configuration in /etc/network/interfaces.

Since the VM is attached to the Host-Only network interface, we can ssh to it from the host and modify the network interfaces configuration:

$ ssh <user>@<boot-master-proxy node's ip address>
$ sudo vi /etc/network/interfaces

Make it look something like this:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface (NAT)
auto enp0s3
iface enp0s3 inet dhcp
# The Host-Only network
auto enp0s8
iface enp0s8 inet static
        # 172.0.1.100 is the master node's IP address
        address 172.0.1.100
        netmask 255.255.255.0

Note: sometimes the network interface names (e.g. enp0sX) do not show up in the /etc/network/interfaces file, so you need to figure out the name of each adapter yourself. Use this command inside the VM to do so:

dmesg | grep eth
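Alternatively, listing the interfaces directly shows their names, states, and MAC addresses (which you can match against the adapters in VirtualBox):

$ ip a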

Change the hostname of the VM to boot-master-proxy:

$ sudo vi /etc/hostname # then make it boot-master-proxy
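On Ubuntu 16.04 you can achieve the same in one step with systemd's hostnamectl, if you prefer:

$ sudo hostnamectl set-hostname boot-master-proxy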

Install the necessary software on the boot-master-proxy node

Enable remote root login to the VM via SSH

Set a password for root by SSHing to the VM and executing these commands inside:

$ sudo su - # provide your user's password here
$ passwd

Enable remote login as root

$ sed -i 's/^PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
$ systemctl restart ssh
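Before moving on, it is worth verifying that the change took; the output should include the line PermitRootLogin yes:

$ grep PermitRootLogin /etc/ssh/sshd_config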

Set up Network Time Protocol (NTP)

This is to make sure time stays in sync across the nodes:

$ sudo apt-get install -y ntp
$ sudo systemctl restart ntp
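Once ntp is running, you can confirm it is syncing against time servers (the peer list will vary):

$ ntpq -p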

Configure the virtual memory setting

$ sudo vi /etc/sysctl.conf

Add this line to the file (note that /etc/sysctl.conf takes bare key=value entries, not sysctl commands):

# Increase memory map areas to 262144
vm.max_map_count=262144

Then reboot the VM:

$ sudo reboot now
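After the reboot, confirm the setting stuck:

$ sysctl vm.max_map_count
vm.max_map_count = 262144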

Install Docker and tools

$ sudo apt-get update && sudo apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual
$ sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo apt-key fingerprint 0EBFCD88
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt-get update
$ sudo apt-get install -y docker-ce
$ sudo apt-get install -y python-setuptools && sudo easy_install pip
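A quick smoke test confirms Docker and pip are usable before we clone the VM (version numbers will differ):

$ sudo docker run --rm hello-world
$ pip --version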

Create the worker nodes by cloning the first VM

At this point, we have the foundation for installing IBM Cloud Private. Let's shut down the VM (`shutdown -h now`) and clone it to create the 2 worker nodes.

From the host machine, run these commands:

$ vboxmanage clonevm boot-master-proxy --name worker1
$ vboxmanage registervm ~/VirtualBox\ VMs/worker1/worker1.vbox
$ vboxmanage clonevm boot-master-proxy --name worker2
$ vboxmanage registervm ~/VirtualBox\ VMs/worker2/worker2.vbox
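All three VMs should now appear in the VirtualBox registry:

$ vboxmanage list vms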

Update network configuration on each worker node

Now we can start all the VMs using the VirtualBox GUI or the command line (the GUI is easier here). VirtualBox gives us a console for interacting with each VM. Provide credentials to log in to the VM and do further configuration. For example, with worker1:

Login screen of worker1 VM

Change the hostname of the VM to worker1:

$ sudo vi /etc/hostname # change it to worker1

Change the /etc/hosts configuration to add these lines:

$ sudo vi /etc/hosts
# Add these lines in:
172.0.1.100 boot-master-proxy
172.0.1.101 worker1
172.0.1.102 worker2

Assign a static IP address to the VM by changing the configuration in /etc/network/interfaces to make it look like this:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface (NAT)
auto enp0s3
iface enp0s3 inet dhcp
# The Host-Only network
auto enp0s8
iface enp0s8 inet static
        # 172.0.1.101 is worker1's IP address
        address 172.0.1.101
        netmask 255.255.255.0

Then reboot the VM:

$ sudo reboot now

Repeat these configuration steps for the worker2 VM, with worker2 as the hostname and 172.0.1.102 as the VM's static IP address. Also make sure boot-master-proxy has the same /etc/hosts entries, since we will reference the worker nodes by name from there later.
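Before installing anything, verify that the nodes can reach each other by name (this assumes the /etc/hosts entries above are in place on every node):

$ ping -c 2 boot-master-proxy
$ ping -c 2 worker1
$ ping -c 2 worker2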

Install IBM Cloud Private CE

Now we're ready to install ICP CE onto the VMs. First, make sure all 3 VMs are started. Log in to the boot-master-proxy VM through ssh from the host machine:

$ ssh <user>@172.0.1.100
$ sudo su - # provide credentials here

If you have SELinux enabled, turn it off:

$ setenforce 0

Configure passwordless SSH from boot-master-proxy to the worker nodes

Now configure passwordless SSH from the boot-master-proxy node to the 2 worker nodes. First, generate an SSH key:

$ ssh-keygen -t rsa -P '' # Accept default values by hitting enter

Now copy the resulting key (id_rsa) to all the nodes in the cluster:

$ ssh-copy-id -i .ssh/id_rsa root@boot-master-proxy
$ ssh-copy-id -i .ssh/id_rsa root@worker1
$ ssh-copy-id -i .ssh/id_rsa root@worker2

Now we can ssh from the boot-master-proxy node to the worker nodes without having to provide a password. For example, to access worker1 from boot-master-proxy:

$ ssh root@worker1

Install IBM Cloud Private CE from boot-master-proxy node

Create the installation directory, pull the installer image, and extract the installation materials from it:

$ mkdir -p /opt/icp
$ cd /opt/icp
$ docker pull ibmcom/icp-inception:2.1.0.3
$ docker run -e LICENSE=accept --rm -v /opt/icp:/data ibmcom/icp-inception:2.1.0.3 cp -r cluster /data
$ cd cluster

Now copy the SSH key to the installation directory:

$ cp ~/.ssh/id_rsa /opt/icp/cluster/ssh_key
$ chmod 400 /opt/icp/cluster/ssh_key

Configure the IP addresses of the nodes in /opt/icp/cluster/hosts:

$ sudo vi /opt/icp/cluster/hosts

Make it look like this

[master]
172.0.1.100


[worker]
172.0.1.101
172.0.1.102


[proxy]
172.0.1.100

Run the installation

$ docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0.3 install

Wait for about 30 minutes or so, until you're presented with a success message like the one shown below; IBM Cloud Private CE is then successfully deployed on the VMs:

PLAY RECAP *************************************************************************************************************************************************************************************************
172.0.1.100                : ok=159  changed=76   unreachable=0    failed=0   
172.0.1.101                : ok=101  changed=40   unreachable=0    failed=0   
172.0.1.102                : ok=97   changed=37   unreachable=0    failed=0   
localhost                  : ok=69   changed=47   unreachable=0    failed=0   


POST DEPLOY MESSAGE ****************************************************************************************************************************************************************************************

The Dashboard URL: https://172.0.1.100:8443, default username/password is admin/admin

Playbook run took 0 days, 0 hours, 23 minutes, 56 seconds

We can now access the ICP cluster's web console from the host machine via https://172.0.1.100:8443 using the default credentials.

The happy login screen of IBM Cloud Private


Once logged in, navigate to the Catalog to see the charts and services that can help us start building our apps.

Catalog screen of the deployed IBM Cloud Private

In the next posts, I will discuss how to deploy workloads on IBM Cloud Private. Stay tuned.