Example Experiment - Mesoscale Topologies

In this experiment we are going to examine how the underlying network topology can affect your experiment. We are going to use the two topologies that are available in Mesoscale: the one for VLAN 3715 and the one for VLAN 3716.

1. Create your experiment

In this step, we are going to set up the experiment. This tutorial assumes that you are sufficiently comfortable with omni to verify that a listresources command works and to tell when your slice is ready using sliverstatus.
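If you want to sanity-check your omni setup before reserving anything, a listresources call against one of the AMs should succeed. A minimal sketch (the AM nickname is whatever you have defined in your omni_config; your paper slip tells you which AMs to use):

    omni.py listresources -a <AM_nickname>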

  1. Create a slice, using the slice name given to you on the paper slip:
    omni.py createslice <slicename>
    
  2. Create all the slivers using the rspecs from the URLs given on your paper slip. You should create a sliver at two MyPLC AMs:
    omni.py createsliver -a <AM_nickname> <slicename> <rspec_url> -V1
    
    For convenience, the three URLs for Clemson, Stanford, and GPO are listed here so you can copy and paste:
    http://www.gpolab.bbn.com/experiment-support/gec15/adv-omni/l3deflect/rspecs/l3deflect-myplc-clemson.rspec
    http://www.gpolab.bbn.com/experiment-support/gec15/adv-omni/l3deflect/rspecs/l3deflect-myplc-stanford.rspec
    http://www.gpolab.bbn.com/experiment-support/gec15/adv-omni/l3deflect/rspecs/l3deflect-myplc-gpo.rspec
    
  3. Check the status of your sliver (a concrete end-to-end sketch of steps 1-3 follows this list):
    omni.py sliverstatus -a <AM_nickname> <slicename>
    
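For reference, here is a concrete end-to-end sketch of this step. The slice name mesoex1 and the AM nicknames pg-clemson and pg-gpo are hypothetical placeholders; use the slice name and the two AMs given on your paper slip:

    # Create the slice (slice name is a hypothetical example)
    omni.py createslice mesoex1
    # Create a sliver at each of the two MyPLC AMs (nicknames are hypothetical)
    omni.py createsliver -a pg-clemson mesoex1 http://www.gpolab.bbn.com/experiment-support/gec15/adv-omni/l3deflect/rspecs/l3deflect-myplc-clemson.rspec -V1
    omni.py createsliver -a pg-gpo mesoex1 http://www.gpolab.bbn.com/experiment-support/gec15/adv-omni/l3deflect/rspecs/l3deflect-myplc-gpo.rspec -V1
    # Poll until both slivers report ready
    omni.py sliverstatus -a pg-clemson mesoex1
    omni.py sliverstatus -a pg-gpo mesoex1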

2. Login to your nodes

The login information for your hosts is reported back by sliverstatus. Omni comes with an example script that calls sliverstatus and extracts all the information you need for logging in to your hosts.

  1. First of all, let's clean out our .ssh/config file in case it contains information from previous experiments. Unless you have added information you care about to your ssh configuration file, it is safe to remove it and recreate it:
    cd
    rm .ssh/config
    touch .ssh/config
    
  2. Run the readyToLogin.py script to get information about logging in to your nodes. The script produces a lot of output, so let's redirect it to a file where we can easily search for the information we want. Use the same AMs as in Step 1:
    readyToLogin.py -a <AM_nickname> <slicename> > login.out 2>&1
    
  3. Open the login.out file. You'll see a big chunk of output, but you're interested in the SSH configuration information near the end:
    ... <lots of output> ...
    ================================================================================
    SSH CONFIGURATION INFO for User inki
    ================================================================================
     
    Host ganel.gpolab.bbn.com
      Port 22
      HostName ganel.gpolab.bbn.com
      User pgenigpolabbbncom_testdeflect 
      IdentityFile /home/nriga/.ssh/geni_key 
    
    Host sardis.gpolab.bbn.com
      Port 22
      HostName sardis.gpolab.bbn.com
      User pgenigpolabbbncom_testdeflect 
      IdentityFile /home/nriga/.ssh/geni_key 
    
    
    ...<more output>...
    
  4. Copy all of the above information and paste it into your .ssh/config file, and do the same for all the AMs. You can then log in to your nodes simply by using the name given after the Host attribute. Your ~/.ssh/config file should look like:
    IdentityFile /home/geni/.ssh/geni_key
    
    Host ganel.gpolab.bbn.com
      Port 22
      HostName ganel.gpolab.bbn.com
      User pgenigpolabbbncom_testdeflect 
      IdentityFile /home/nriga/.ssh/geni_key 
     
    Host sardis.gpolab.bbn.com
      Port 22
      HostName sardis.gpolab.bbn.com
      User pgenigpolabbbncom_testdeflect 
      IdentityFile /home/nriga/.ssh/geni_key 
    
  5. Log in to your hosts. For each of the MyPLC hosts, open a new terminal and log in. Substitute <pl_hostname> with the hostnames of the PL nodes given on your paper slip. (If you prefer not to edit your ssh config, see the sketch after this list.)
    ssh <pl_hostname>
    
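If you prefer not to edit your .ssh/config, the same information that readyToLogin.py reports can be passed directly on the ssh command line. A sketch using the example values from the sample output above (substitute your own username, key path, and hostname):

    ssh -p 22 -i /home/nriga/.ssh/geni_key pgenigpolabbbncom_testdeflect@ganel.gpolab.bbn.com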

3. Run your Experiment

The Mesoscale deployment can offer different topologies for communication between hosts. We have provisioned two different IP subnets, each using a different topology.

The two subnets that have been provisioned are 10.42.112.0/24 on VLAN 3715 and 10.42.113.0/24 on VLAN 3716.

First of all, let's see how to figure out the IP addresses of the hosts we reserved:

  1. List all the interfaces on your host. You will see many interfaces of the form eth1.XXXX:
    /sbin/ifconfig
    
    Part of the output will look like:
    eth1.1750:42147 Link encap:Ethernet  HWaddr 00:B0:D0:E1:6F:78  
           inet addr:10.42.147.90  Bcast:10.42.147.255  Mask:255.255.255.0
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
    
    eth1.1750:42148 Link encap:Ethernet  HWaddr 00:B0:D0:E1:6F:78  
           inet addr:10.42.148.90  Bcast:10.42.148.255  Mask:255.255.255.0
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
    
  2. Figure out the last octet of the IP address of your hosts. These MyPLC hosts are set up with multiple subinterfaces, each configured to be part of a different IP subnet. Almost all subnets on the hosts are of the form 10.42.Y.0/24, and for all of these subnets your host has the same last octet. In the example above, every subinterface of eth1 has an IP address ending in 90 (10.42.147.90, 10.42.148.90). A combined sketch of these steps appears after this list.
  3. Ping over 3715. After logging in to your hosts, ping from host1 to host2. Assuming that host2's last octet is YYY, run:
    ping 10.42.112.YYY
    
    Notice the RTT on the packets.
  4. Ping over 3716. This time ping from host2 to host1. Assuming that host1's last octet is XXX, run:
    ping 10.42.113.XXX
    
  5. Notice the RTT on the packets and compare it with the above ping.
  6. Look at the maps of 3715 and 3716 and locate your hosts. Is the result you are seeing above reasonable? Which topology has the longer path between your nodes?
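
Putting these steps together, here is a minimal sketch of the whole comparison. It assumes the example host above (last octet 90) and a hypothetical second host with last octet 91; substitute the octets you actually found:

    # On each host, list the 10.42.Y.Z addresses to find your last octet
    /sbin/ifconfig | grep 'inet addr:10.42.'
    # From the .90 host, ping the .91 host over each topology and compare RTTs
    ping -c 5 10.42.112.91    # VLAN 3715
    ping -c 5 10.42.113.91    # VLAN 3716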

4. Clean up

Congratulations! You're done with this exercise. Please release your resources before moving on so they'll be available to others. Make sure you delete the resources at all the AMs where you reserved resources in Step 1.

omni.py deletesliver -a <AM_nickname> <slicename>
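
A minimal sketch, assuming the same hypothetical slice name and AM nicknames as in the Step 1 sketch:

    # Delete the slivers at both AMs used in Step 1 (names are hypothetical)
    omni.py deletesliver -a pg-clemson mesoex1
    omni.py deletesliver -a pg-gpo mesoex1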