Vagrant :: SSH Inter-Connectivity of Multi Virtual Machines

Vagrant is one of the best examples of a VM-based Infrastructure as Code (IaC) tool. It works from a declarative configuration file that describes a machine's requirements, such as the OS, applications, users and files.

By using Vagrant, we can cut out mundane tasks such as downloading OS images, manually installing the OS and applications, and configuring users and security. It saves a lot of time and effort for developers, admins and architects alike. Vagrant is a cross-platform product and the community edition is free to use. Vagrant also has its own cloud, where thousands of OS and application images are uploaded by active contributors. For more information and to download the product, see the References at the end of this post. Please install Oracle VirtualBox, which is one of the basic requirements to run the Vagrant VMs.
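To get a feel for the declarative style before moving to multiple machines, here is a minimal single-machine sketch. It assumes Vagrant and VirtualBox are already installed and uses the same centos/7 box as the lab built below; the hostname "demo" is only illustrative.

# Minimal Vagrantfile: a single CentOS 7 VM
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"       # box image pulled from Vagrant Cloud
  config.vm.hostname = "demo"      # hostname inside the guest
end

Save the above as Vagrantfile in an empty directory and run:

$vagrant up           # download the box if needed, then create and boot the VM
$vagrant ssh          # log in to the VM over SSH
$vagrant destroy -f   # tear the VM down when finished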

Multi-Machine is a type of Vagrant configuration where multiple machines are built from a single configuration file. It is best suited for development environments that need several VMs, whether in a homogeneous or heterogeneous configuration. For example, a typical web application project may need separate web, DB, middleware and proxy servers along with client VMs to match the production-class environment.

The Vagrant configuration file below is a use case for setting up an 'Ansible Practice Lab'. Vagrant builds six nodes, all running CentOS 7. This lab environment is meant for hands-on Ansible learning, to try out the features Ansible offers for configuration management and infrastructure automation. The Ansible package is installed on node1, and the rest of the nodes are managed from that workstation.

Next, the EPEL repository is downloaded and configured for YUM on all six nodes using the global shell script. Once the repository is in place, basic packages such as wget, curl and sshpass are installed.

The most important requirement for Ansible is SSH key-based authentication between all six nodes. To achieve this, a shell script named ssh is added to the configuration file and executed by Vagrant during the build process. An interface with a private IP is also configured on every node and is used for inter-node connectivity over SSH.

Here is the Vagrant multi-machine configuration file, along with the custom scripts that install packages and set up SSH key-based authentication between the nodes.

# Vagrant configuration file for multi machines with inter connectivity via SSH key based authentication
numnodes=6
baseip="192.168.10"

# global script
$global = <<SCRIPT

# Allow sshd to accept password authentication, which is needed for the initial ssh-copy-id between the nodes
sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config

# Add Google DNS to access internet. 
echo "nameserver 8.8.8.8" | sudo tee -a  /etc/resolv.conf 

# Download and install the CentOS 7 EPEL release package to configure the YUM repository
sudo rpm -ivh https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-11.noarch.rpm
# Update yum
sudo yum update -y
# Install wget curl and sshpass
sudo yum install wget curl sshpass -y

# Disable strict host key checking for the node* hosts so first connections do not prompt
cat > ~/.ssh/config <<EOF
Host node*
   StrictHostKeyChecking no
   UserKnownHostsFile=/dev/null
EOF

# Populate /etc/hosts with the IP and node names 
for x in {11..#{10+numnodes}}; do
  grep #{baseip}.${x} /etc/hosts &>/dev/null || {
      echo #{baseip}.${x} node${x##?} | sudo tee -a /etc/hosts &>/dev/null
  }

done
# Generate an SSH key pair for the vagrant user, overwriting any existing key without prompting
yes y | ssh-keygen -f /home/vagrant/.ssh/id_rsa -t rsa -N ''
echo " **** SSH key pair created for $(hostname) ****"

SCRIPT

# SSH configuration script
$ssh = <<SCRIPT1
numnodes=6

for (( c=1; c<$numnodes+1; c++ ))
do
    echo "$c"
    echo "node$c"
    if [ "$HOSTNAME" = "node1" ]; then
      echo "**** Install ansible on node1 ****"
      sudo yum install ansible -y
    fi
    # Skip the current host.
    if [ "$HOSTNAME" = "node$c" ]; then
        echo "node$c"
        continue
    fi

    # Copy the current host's public key to each of the other hosts.
    # sshpass supplies the vagrant password so ssh-copy-id runs non-interactively.
    
    sshpass -p vagrant ssh-copy-id "node$c"
    echo "**** Copied public key to node$c ****"    
done

# Gather the public keys from each of the other hosts.
for (( c=1; c<$numnodes+1; c++ ))
do
    # Skip the current host.
    if [ "$HOSTNAME" = "node$c" ]; then
        continue
    fi

    sshpass -p vagrant ssh "node$c" 'cat .ssh/id_rsa.pub' >> /home/vagrant/host-ids.pub
    echo "**** Copy id_rsa.pub contentes to host-ids.pub for host node$c ****"
done

for (( c=1; c<$numnodes+1; c++ ))
do
    # Skip the current host.
    if [ "$HOSTNAME" = "node$c" ]; then
        continue
    fi

    # Copy public keys to the nodes
    sshpass -p vagrant ssh-copy-id -f -i /home/vagrant/host-ids.pub "node$c"
    echo "**** Copy public keys to node$c ****"

done
# Set the permissions to config
sudo chmod 0600 /home/vagrant/.ssh/config
# Finally restart the SSHD daemon
sudo systemctl restart sshd
echo "**** End of the Multi Machine SSH Key based Auth configuration ****"

SCRIPT1

# Vagrant configuration
Vagrant.configure("2") do |config|
  # Execute global script
  config.vm.provision "shell", privileged: false, inline: $global
  prefix="node"
  #For each node run the config and apply settings
  (1..numnodes).each do |i|
    vm_name = "#{prefix}#{i}"
    config.vm.define vm_name do |node|
      node.vm.box = "centos/7"
      node.vm.hostname = vm_name
      ip="#{baseip}.#{10+i}"
      node.vm.network "private_network", ip: ip    
    end
    # Run the SSH configuration script
    config.vm.provision "ssh", type: "shell", privileged: false, inline: $ssh
  end
end

To execute the above configuration file, run the commands below:

$vagrant up
$vagrant provision --provision-with ssh

Please note that the above example exposes the vagrant user's password on the command line through the sshpass -p option. For a more secure approach, use the -f option to read the password from a file, and see the sshpass documentation for details. Constants such as the EPEL repo URL, the number of nodes and the SSH key path may need to be customized for your environment.
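As a quick illustration of the -f option, the password can be kept in a file readable only by the vagrant user and the ssh-copy-id line in the script adjusted accordingly. This is a minimal sketch; the path /home/vagrant/.vagrant_pass is only illustrative:

# Store the vagrant password in a file with restrictive permissions (illustrative path)
echo "vagrant" > /home/vagrant/.vagrant_pass
chmod 0600 /home/vagrant/.vagrant_pass
# Read the password from the file instead of passing it on the command line with -p
sshpass -f /home/vagrant/.vagrant_pass ssh-copy-id "node$c"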

To check the status of the nodes built by Vagrant, use the command below.

$vagrant status
Current machine states:

node1                     running (virtualbox)
node2                     running (virtualbox)
node3                     running (virtualbox)
node4                     running (virtualbox)
node5                     running (virtualbox)
node6                     running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

To log in to the first node and then SSH to the other nodes, use the commands below. Notice that there are no password prompts while SSHing between the nodes. Ansible is installed on node1 and ready to use, and eth1 is the private network interface used for SSH inter-connectivity.

$vagrant ssh node1
Last login: Tue Jun 11 12:01:11 2019 from 192.168.10.12
[vagrant@node1 ~]$ssh node2
Warning: Permanently added 'node2,192.168.10.12' (ECDSA) to the list of known hosts.
Last login: Tue Jun 11 12:01:04 2019 from 192.168.10.11
[vagrant@node2 ~]$ssh node1
Warning: Permanently added 'node1,192.168.10.11' (ECDSA) to the list of known hosts.
Last login: Tue Jun 11 12:16:41 2019 from 10.0.2.2
[vagrant@node1 ~]$ssh node5
Warning: Permanently added 'node5,192.168.10.15' (ECDSA) to the list of known hosts.
[vagrant@node5 ~]$ssh node1
Warning: Permanently added 'node1,192.168.10.11' (ECDSA) to the list of known hosts.
Last login: Tue Jun 11 12:17:23 2019 from 192.168.10.12
[vagrant@node1 ~]$yum list ansible 
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.dhakacom.com
 * epel: sg.fedora.ipserverone.com
 * extras: mirror.dhakacom.com
 * updates: mirrors.nhanhoa.com
Installed Packages
ansible.noarch                                             2.8.0-2.el7                                             @epel
[vagrant@node1 ~]$ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:26:10:60 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 80872sec preferred_lft 80872sec
    inet6 fe80::5054:ff:fe26:1060/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:7b:8a:ef brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.11/24 brd 192.168.10.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe7b:8aef/64 scope link
       valid_lft forever preferred_lft forever
[vagrant@node1 ~]$
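
Since Ansible is already installed on node1, a quick ad-hoc ping confirms that the key-based authentication works end to end. This is a minimal sketch, assuming a throwaway inventory file named lab.ini (the file and group names are only illustrative); every node should reply with pong without any password prompt.

[vagrant@node1 ~]$cat > lab.ini <<EOF
[lab]
node1
node2
node3
node4
node5
node6
EOF
[vagrant@node1 ~]$ansible all -i lab.ini -m ping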

I hope this use case helps you understand how to install and configure multiple VMs at once with SSH inter-connectivity. Please leave your feedback if you found this post useful and share suggestions in the Comments section below.


References:

https://www.vagrantup.com/docs/multi-machine/

https://www.vagrantup.com/docs/vagrantfile/

https://www.vagrantup.com/docs/provisioning/basic_usage.html

https://github.com/kikitux/vagrant-multimachine/blob/master/intrassh/Vagrantfile


Parsing JSON in Go is an Adventure

One of the tough things to do in Golang is parsing JSON! Yes, it is indeed a challenge for a novice like me. In Python and Ruby it is an easy task, thanks to JSON libraries that are very straightforward to use, especially in Python.

I tried to create the structs for the complex, nested JSON data below by hand, but did not succeed after several attempts.

{
    "clients": [
        {
            "clientId": "dde46983-00000004-5cdac62e-5cdc1fc1-00025000-a4aa9156",
            "hostname": "A999US032WIN001",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/159.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "159.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 24
            }
        },
        {
            "clientId": "70f32834-00000004-5cdac630-5cdd6bf5-00195000-a4aa9156",
            "hostname": "a999us034cen001",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/170.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "170.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 6
            }
        },
        {
            "clientId": "e34f2de3-00000004-5cdac62d-5cdac62c-00015000-a4aa9156",
            "hostname": "a999us034nve001.usp01.xstream360.cloud",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/158.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "158.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 2
            }
        },
        {
            "clientId": "3084d369-00000004-5cdac62f-5cdd4e05-000e5000-a4aa9156",
            "hostname": "a999us034rhl001",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/167.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "167.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 6
            }
        },
        {
            "clientId": "82fa0d80-00000004-5cdac631-5ceb7989-01165000-a4aa9156",
            "hostname": "icehousetest",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/172.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "172.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 1
            }
        },
        {
            "clientId": "82fa0d80-00000004-5cdac631-5ceb7989-01165000-a4aa9156",
            "hostname": "icehousetest",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/173.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "173.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 1
            }
        },
        {
            "clientId": "82fa0d80-00000004-5cdac631-5ceb7989-01165000-a4aa9156",
            "hostname": "icehousetest",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/174.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "174.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 1
            }
        },
        {
            "clientId": "82fa0d80-00000004-5cdac631-5ceb7989-01165000-a4aa9156",
            "hostname": "icehousetest",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/176.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "176.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 1
            }
        },
        {
            "clientId": "82fa0d80-00000004-5cdac631-5ceb7989-01165000-a4aa9156",
            "hostname": "icehousetest",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/175.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "175.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 1
            }
        },
        {
            "clientId": "82fa0d80-00000004-5cdac631-5ceb7989-01165000-a4aa9156",
            "hostname": "icehousetest",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/177.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "177.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 1
            }
        },
        {
            "clientId": "82fa0d80-00000004-5cdac631-5ceb7989-01165000-a4aa9156",
            "hostname": "icehousetest",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/180.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "180.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 1
            }
        },
        {
            "clientId": "82fa0d80-00000004-5cdac631-5ceb7989-01165000-a4aa9156",
            "hostname": "icehousetest",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/178.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "178.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 1
            }
        },
        {
            "clientId": "82fa0d80-00000004-5cdac631-5ceb7989-01165000-a4aa9156",
            "hostname": "icehousetest",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/187.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "187.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 1
            }
        },
        {
            "clientId": "82fa0d80-00000004-5cdac631-5ceb7989-01165000-a4aa9156",
            "hostname": "icehousetest",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/184.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "184.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 1
            }
        }
    ],
    "count": 14
}

After some deep (re)searching using DuckDuckGo, I finally found an easier way to convert JSON to structs with the help of the JSON-to-Go tool. Many thanks to Matt Holt for making things simple for me.

Converting the JSON to structs is the major step, and the tool makes it very easy: paste the JSON into https://mholt.github.io/json-to-go/ and it generates the matching Go struct definitions. Note that the JSON data should be static and available locally for the conversion.

Now that the required structs and types are auto-generated, it is time to unmarshal the JSON data into the structs and access their values. Below is the full implementation of the code.

/*
Parse and convert JSON to structs using JSON-to-Go Tool
*/
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"
	"os"
)

// structs generated using https://mholt.github.io/json-to-go/
type AutoGenerated struct {
	Clients []struct {
		ClientID string `json:"clientId"`
		Hostname string `json:"hostname"`
		Links    []struct {
			Href string `json:"href"`
			Rel  string `json:"rel"`
		} `json:"links"`
		ResourceID struct {
			ID       string `json:"id"`
			Sequence int    `json:"sequence"`
		} `json:"resourceId"`
	} `json:"clients"`
	Count int `json:"count"`
}

func main() {
	var info AutoGenerated
	// Reading data from JSON File
	file, e := ioutil.ReadFile("clients.json")
	if e != nil {
		fmt.Printf("File error: %v\n", e)
		os.Exit(1)
	}
	//Unmarshal json data into struct info
	if err := json.Unmarshal(file, &info); err != nil {
		log.Fatal(err)
	}

	//fmt.Printf("%+v\n", info)
	fmt.Println("CLIENT-ID,HOSTNAME,RESOURCE-ID")

	//Iterate through each value and print required types
	for _, value := range info.Clients {
		fmt.Printf("%s,%s,%s\n", value.ClientID, value.Hostname, value.ResourceID.ID)
	}

}
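
To try it out, save the program (for example as main.go; the file name is just an assumption) next to a clients.json file containing the sample data above and run it. The first few output lines, taken directly from that sample data, look like this, with the remaining clients following in the same format:

$go run main.go
CLIENT-ID,HOSTNAME,RESOURCE-ID
dde46983-00000004-5cdac62e-5cdc1fc1-00025000-a4aa9156,A999US032WIN001,159.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6
70f32834-00000004-5cdac630-5cdd6bf5-00195000-a4aa9156,a999us034cen001,170.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6
...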

I hope the example code above helps you understand the steps to convert JSON to structs, iterate over the values and access the required fields. Please leave your feedback if you found this useful and share suggestions in the Comments section below.