
Get to Know GENI and SAVI


STEPS FOR SETTING UP


1. Set Up Omni and the GENI-SAVI Federation Tool on the SAVI Client

  • Use SFTP to transfer the Omni bundle you downloaded in the pre-work to the Downloads folder on client1.savitestbed.ca. From the folder in which you have the Omni bundle, run
    $ sftp <savi-username>@client1.savitestbed.ca
    > cd Downloads
    > put omni.bundle
    > bye
    
  • Using your SAVI credentials, log in to client1.savitestbed.ca. Any SSH tool can be used for this, including PuTTY and Cygwin ssh on Windows, and the built-in terminal tools on any Unix- or Linux-based system.

$ ssh <savi-username>@client1.savitestbed.ca

  • Once you're logged in, run

$ omni-configure

to configure Omni.

Now download and unpack the GENI-SAVI Federation Tool

$ wget http://web.uvic.ca/~sushilb/federation/tutorial.tar

$ tar xvf tutorial.tar

Generate keys to be used on the SAVI VMs you will be creating

$ cd tutorial
$ ./tutorial.sh generatekey

It will ask you for your SAVI username and password. Then it will ask you for the key name. Use the name "savi_tutorial" (it must match the <access_key> argument you will pass when creating SAVI VMs in step 3).

Pro Tip: Do not use a passphrase!

Your SAVI keys are now in tutorial/keys, and your GENI keys are in your .ssh folder on client1.savitestbed.ca. Check to make sure that the keys are there.
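For example, you can list both directories to verify (the exact file names will vary):

$ ls ~/tutorial/keys
$ ls ~/.ssh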


2. Create a Slice on GENI and Add Some VMs to It

We will now create a slice on GENI. Use your GENI username as the slice name. In the tutorial directory,

$ ./tutorial.sh createslice geni <your_geni_username>

Now add VMs at the Cornell and Utah Downtown Data Center racks

$ ./tutorial.sh createvm geni <slice_name> https://geni.it.cornell.edu:12369/protogeni/xmlrpc/am/2.0 Ubuntu-14-04
$ ./tutorial.sh createvm geni <slice_name> https://boss.utahddc.geniracks.net:12369/protogeni/xmlrpc/am/2.0 Ubuntu-14-04

This will take about a minute. It will then come back with a response of the form:

  Result Summary: Got Reserved resources RSpec from utahddc-geniracks-net. Reservation at utahddc-ig in slice rizpan expires at 2015-05-25 22:59:40 (UTC).
  hostname="<name>.utahddc.geniracks.net"

To connect to the created VM, use the given hostname <name>.utahddc.geniracks.net.
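When the VM is ready (see below), a login might look like the following. This assumes omni-configure installed your GENI SSH key as ~/.ssh/geni_key (check the omni-configure output for the actual path) and that your login name is your GENI username:

$ ssh -i ~/.ssh/geni_key <geni_username>@<name>.utahddc.geniracks.net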

The machine will now be in a booting state. It will take about 5-10 minutes before you can log in. We'll use the time productively and create some SAVI VMs while we wait.


3. Create VMs on SAVI at Toronto and Victoria

The general form of the command to create a VM on SAVI is

$ ./tutorial.sh createvm savi <tenant_name> <location> <os_image_name> <machine_name> <access_key> <vm-name>

We have chosen values for all the parameters; the <vm-name> should be your GENI username:

$ ./tutorial.sh createvm savi <tenant_name> toronto ubuntu-14-04-64 small savi_tutorial <geni_username>
$ ./tutorial.sh createvm savi <tenant_name> victoria ubuntu-14-04-64 small savi_tutorial <geni_username>

We now have VMs at Cornell, Utah, Toronto, and Victoria.

Once the GENI VMs have finished booting, log in to one of them using the id_rsa key and the ssh-config file for your slicelet, e.g.:

$ ssh -i id_rsa -F ssh-config slice338.pcvm3-1.instageni.metrodatacenter.com

Type ifconfig eth0. You should see an eth0 interface with an IP address in the 10.0.0.0/8 range, a private IP address.

root@slice347:~# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr a6:24:c1:df:c7:09
          inet addr:10.20.0.243  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::a424:c1ff:fedf:c709/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:56555 errors:0 dropped:2 overruns:0 frame:0
          TX packets:49525 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:93890170 (93.8 MB)  TX bytes:8052865 (8.0 MB)

This IP address is behind a NAT and not directly reachable from outside (i.e., from the Internet). However all nodes in your slicelet can contact each other using the private addresses.
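For example, from one node you could ping another node's private address (10.20.0.244 here is a hypothetical address; substitute one from your own slicelet):

$ ping -c 3 10.20.0.244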

The nodes in your ssh-config file are Docker containers running on (virtual) hosts. These hosts have public IP addresses that are routable from the Internet, which is why you can log in to the Docker containers of your slicelet. A port on the public IP address (e.g., 49155) is forwarded to the SSH port (22) on the container's private IP address. The ssh-config file contains the names of the virtual hosts and the port-forwarding information.
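For illustration, an entry in the ssh-config file might look like this (the host names, port, and key file are hypothetical examples, not your actual values):

Host slice338.pcvm3-1.instageni.metrodatacenter.com
    HostName pcvm3-1.instageni.metrodatacenter.com
    Port 49155
    User root
    IdentityFile id_rsa

Here ssh connects to port 49155 on the virtual host's public address, which is forwarded to port 22 on the container.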

When you run a server on a node in your slicelet, it will listen on the private address, because that is the only one it can see. By default it will only be visible to other nodes in your slicelet. Docker provides facilities for exposing ports to the public Internet, but this topic is outside the scope of this tutorial.

Note that later on you will need to find both the public and private IP addresses for each node in the slicelet.
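The private address can be read from ifconfig eth0 as shown above. One way to find the public address is to resolve the virtual host's name from your controller or laptop (a sketch, assuming the dig tool from the dnsutils package is available):

$ dig +short pcvm3-1.instageni.metrodatacenter.com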


4. Configure the Ansible controller for your slicelet

If you already have Ansible installed on your laptop, then you can use it as the controller for this experiment. Otherwise, you will use one of the nodes in your slicelet as the Ansible controller. Pick one, log into it, and install ansible using apt-get:

$ apt-get update
$ apt-get install ansible

Then upload your slicelet helper files to that node. scp can be used for this; in the directory where you unzipped the files, run a command like the following:

$ scp -F ssh-config * ansible-hosts slice323.pcvm3-1.instageni.metrodatacenter.com:
Pro Tip: remove your controller node from the ansible-hosts file after you’ve uploaded it.

5. Learn some basic concepts of Ansible

Ansible (http://docs.ansible.com) is a free, open-source, intuitive IT automation tool that is well-suited to the tasks in this tutorial. Ansible commands can be run from the command line or put in a YAML file called a playbook. We will be creating an Ansible playbook to run the parameterized HTTP query described earlier.

Two basic concepts in Ansible are inventories and modules. An inventory is a list of hosts to be managed by Ansible, organized into groups. When you run Ansible commands, either from the command line or in a playbook, you specify the host group that the command should operate on. In this way Ansible commands can operate on many hosts in parallel. Take a look at the Ansible inventory in your ansible-hosts file. This is basically the equivalent of the ssh-config file, except it's specialized for Ansible.
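For illustration, an Ansible inventory is a simple INI-style file; your ansible-hosts will look something like this (the group name and host names here are hypothetical):

[nodes]
slice338.pcvm3-1.instageni.metrodatacenter.com
slice338.pcvm3-7.instageni.nps.edu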

A task in Ansible consists of a module and some arguments for the module. A module provides a declarative abstraction on top of standard shell commands. So for example, in the shell on an Ubuntu machine you might install package “foo” like this:

$ sudo apt-get update
$ sudo apt-get install foo

An equivalent Ansible task in a playbook would look like:

- apt: name=foo state=latest update_cache=yes 

Or the same Ansible task could be invoked directly on the command line like this:

$ ansible remote-machine -m apt -a "name=foo state=latest update_cache=yes"

The task uses the apt module, and tells Ansible: “Make sure the latest version of package foo is installed”. There are many other modules which are well-documented at http://docs.ansible.com.

Here are a few Ansible tasks to run, to get some experience with the command-line interface.

(a) The ping module

The ping module simply tries to do an SSH login to a node and reports success or failure. Run the following command on your controller:

$ ansible nodes -i ansible-hosts -m ping
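For each node you should see output along these lines (the exact format varies slightly across Ansible versions):

slice338.pcvm3-1.instageni.metrodatacenter.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}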

If you don’t see success everywhere then there is something wrong with your setup. Ask one of the tutorial leaders for help.

(b) The shell module

The shell module lets you run arbitrary SSH commands in parallel across a set of hosts. It’s useful for poking around, or if there is no Ansible module with the functionality you need. Try it out:

$ ansible nodes -i ansible-hosts -m shell -a "hostname"

You can replace hostname above with any other Linux command.
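For example, to see how long each node has been up:

$ ansible nodes -i ansible-hosts -m shell -a "uptime"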

(c) The setup module

The setup module gathers a bunch of information about each node and saves it in variables that you can reference in your Ansible playbooks. This will be really useful later in the tutorial! Try it out on a node to see what it collects (replace <your-slicelet> with your slicelet’s name):

$ ansible <your-slicelet>.pcvm1-1.instageni.wisc.edu -i ansible-hosts -m setup
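The output is long; the setup module also accepts a filter argument to narrow it down, e.g. to just the eth0 facts:

$ ansible <your-slicelet>.pcvm1-1.instageni.wisc.edu -i ansible-hosts -m setup -a "filter=ansible_eth0"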

(d) A simple playbook

Next, we will look at a simple Ansible playbook. An Ansible playbook is a YAML file containing a list of Ansible tasks. Copy the playbook below into a file called test.yaml:

---
- hosts: nodes
  remote_user: root
  tasks:
  - name: An example of a debug statement
    debug: var=ansible_hostname

Run the playbook as:

$ ansible-playbook -i ansible-hosts test.yaml

The setup module is run automatically at the beginning of a playbook to populate variables for each node. The above playbook will dump the value of each node’s ansible_hostname variable. To run the playbook on a single node, replace nodes with the name of one of your slice nodes (e.g., slice338.pcvm3-7.instageni.nps.edu).
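Alternatively, the --limit (-l) flag of ansible-playbook restricts a run to a subset of the inventory without editing the playbook; for example:

$ ansible-playbook -i ansible-hosts -l slice338.pcvm3-7.instageni.nps.edu test.yaml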


6. Create and run an Ansible playbook to install the software you'll need

Now you should have enough knowledge about Ansible to write a playbook to install the software you'll need on all the nodes. Below is a skeleton Ansible playbook that you can use. You will need to fill in the bits marked # INSERT ARGUMENTS HERE to perform the actions described in the name lines. If you get stuck at any point, you can take a look at the sample solution after the skeleton.

---
- hosts: nodes
  remote_user: root
  tasks:
  - name: Update apt cache
    apt: # INSERT ARGUMENTS HERE

  - name: Install dnsutils (for dig)
    apt: # INSERT ARGUMENTS HERE

  - name: Install geoip-bin (for geoiplookup)
    apt: # INSERT ARGUMENTS HERE
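For reference, here is one possible completion, using the same key=value argument style as the apt example in the previous step:

---
- hosts: nodes
  remote_user: root
  tasks:
  - name: Update apt cache
    apt: update_cache=yes

  - name: Install dnsutils (for dig)
    apt: name=dnsutils state=latest

  - name: Install geoip-bin (for geoiplookup)
    apt: name=geoip-bin state=latest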

Run this playbook on your Ansible control machine against all the nodes in your slice.

Pro Tip: use the -f argument to ansible-playbook to speed things up -- it lets you control the number of nodes to operate on in parallel, and the default is 5. Specifying -f 20 will run the playbook's tasks against all your slicelet nodes in parallel.

Next: Run Experiment
