wiki:PlasticSlices/Tools

Version 1 (modified by Josh Smift, 13 years ago)

--

For the Plastic Slices project, we used a variety of simple tools to make it easier to manage slices and experiments.

Much of what we did here was to simplify the task of running ten slices simultaneously, but many of these tools are useful even if you're only running one slice.

FIXME: Much of this page is still under construction.

rspecs

For each slice, we kept a directory of rspecs for the slice in a Subversion repository. The MyPLC rspecs were the same for each slice, so we actually stored them in a 'misc' directory and then created symlinks pointing to them in the per-slice directories. The OpenFlow rspecs were different for each slice -- very similar to each other, but with per-slice differences such as the IP subnet and the URL of the controller.

Copies of the plastic-101 and plastic-102 directories are here. (FIXME: Figure out the best way to put them here. GENI wiki SVN? Need to strip out passwords, so we can't have them actually live here, this is just a snapshot. Sub-pages?)

We could then use these directories as input to omni commands to operate on "all the slivers in a slice", conceptually, e.g. with

for rspec in ~/rspecs/reservation/$slicename/* ; do <something> ; done

such as the omni commands below.

omni

We used the command-line tool 'omni' to manage all of the slices and slivers.

somni

Short for "setup omni", somni is a bash function we defined to set some variables for use by subsequent omni commands:

somni () { slicename=$1 ; rspec=$2 ; am=$(grep AM: $rspec | sed -e 's/^AM: //') ; }

See below for usage examples.
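As a self-contained illustration of what somni sets up: their rspec files begin with an "AM:" line recording the aggregate manager's URL, which somni extracts into $am. (The file path and URL below are invented for this example.)

```shell
# The somni function from above, repeated so this example is self-contained.
somni () { slicename=$1 ; rspec=$2 ; am=$(grep AM: $rspec | sed -e 's/^AM: //') ; }

# A stand-in rspec file; the AM: line (with a made-up URL) records which
# aggregate manager this rspec belongs to.
cat > /tmp/example.rspec <<'EOF'
AM: https://myplc.example.net:12346
EOF

somni plastic-101 /tmp/example.rspec
echo "$slicename"   # plastic-101
echo "$am"          # https://myplc.example.net:12346
```

After this, subsequent omni commands can use $am as their -a (aggregate) argument and $slicename as the slice name.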

Creating slices

We used these loops to create the ten plastic-* slices, and renew them until August 4th:

for slicename in plastic-{101..110} ; do omni createslice $slicename ; done
for slicename in plastic-{101..110} ; do omni renewslice $slicename $(date +%Y%m%dT%H:%M:%S -d "August 4 15:00") ; done
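The date invocation above relies on GNU date's -d option; a quick sketch of the timestamp it produces (the year depends on when you run it):

```shell
# GNU date: turn a human-readable date into the YYYYMMDDThh:mm:ss form
# passed to omni's renewslice (and renewsliver) as the expiration time.
renew_until=$(date +%Y%m%dT%H:%M:%S -d "August 4 15:00")
echo "$renew_until"   # e.g. 20110804T15:00:00
```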

Creating slivers

We then used the rspec directories to create all of the slivers in each slice:

for slicename in plastic-{101..110}
do
  for rspec in ~/rspecs/reservation/$slicename/*
  do
    somni $slicename $rspec
    omni -n -a $am createsliver $slicename $rspec
  done
done

Renewing slivers

We also used the rspec directories to renew all of the MyPLC slivers in each slice:

for slicename in plastic-{101..110}
do
  for rspec in ~/rspecs/reservation/$slicename/myplc-*rspec
  do
    somni $slicename $rspec
    omni -n -a $am renewsliver $slicename $(date +%Y%m%dT%H:%M:%S -d "August 4 15:00")
  done
done

Managing logins

Once we had our slivers, it was handy to have a way to run commands on all of the compute resources in parallel, or copy files to or from all of them. We did that by creating a file for each slice with the logins for the slivers in that slice, which we'd then feed as input to rsync or shmux.

Specify which logins to use

To do something with all the logins in a slice, we'd do:

logins=$(cat ~/plastic-slices/logins/logins-plastic-101.txt)

We'd sometimes use grep to select only a subset, e.g. all the logins at Clemson:

logins=$(grep -h clemson ~/plastic-slices/logins/logins-plastic-101.txt)

We often wanted to do something with the logins in multiple slices; for all logins on all slices, we'd do

logins=$(cat ~/plastic-slices/logins/logins-plastic-{101..110}.txt)

and for a subset (e.g. all the ones for Clemson), we'd do something like

logins=$(grep -h clemson ~/plastic-slices/logins/logins-plastic-{101..110}.txt)

You can also set $logins by hand, of course.
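A minimal sketch of what such a logins file might contain, and of the two ways of setting $logins shown above. The usernames and hostnames here are invented for illustration; a real file would list one user@host ssh login per line for each sliver in the slice.

```shell
# A stand-in logins file (contents made up for this example).
mkdir -p /tmp/logins
cat > /tmp/logins/logins-plastic-101.txt <<'EOF'
plastic101@planetlab1.clemson.example.edu
plastic101@planetlab2.gatech.example.edu
EOF

# All logins in the slice:
logins=$(cat /tmp/logins/logins-plastic-101.txt)

# Only the Clemson subset:
logins=$(grep -h clemson /tmp/logins/logins-plastic-101.txt)
echo "$logins"   # plastic101@planetlab1.clemson.example.edu
```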

Run a command on all of those logins

We'd sometimes use a 'for' loop to run a command on each login, one at a time. This example enables non-interactive sudo without a terminal (which is disabled by default):

for login in $logins ; do ssh -t $login sudo sed -i -e 's/!visiblepw/visiblepw/' /etc/sudoers ; done
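To see what that sed edit does without touching a real /etc/sudoers, here's the same substitution run on a scratch copy (the file contents are a made-up two-line fragment):

```shell
# A made-up sudoers fragment; "Defaults !visiblepw" is what blocks sudo
# from running without a terminal.
printf 'Defaults    !visiblepw\nDefaults    always_set_home\n' > /tmp/sudoers.test

# The same in-place substitution the loop above runs over ssh:
sed -i -e 's/!visiblepw/visiblepw/' /tmp/sudoers.test

grep visiblepw /tmp/sudoers.test   # Defaults    visiblepw
```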

More often, we used 'shmux' to run the same command on each login in parallel. We aliased 'shmux' to include some useful options:

alias shmux='shmux -Sall -m -B -T 15'

We then used it to enable cron (which we could do now that non-interactive sudo was enabled):

shmux -c 'sudo chkconfig crond on && sudo service crond start' $logins

And to install 'screen':

shmux -c 'sudo yum -y install screen' $logins

We installed a crontab for each login (after copying the file to each login, see below):

shmux -c 'crontab $HOME/.crontab' $logins

After we were done running experiments, we checked to make sure nothing unexpected had been left running:

shmux -c "ps -efwww | egrep -i -v '(grep|cron|PID|ping|ps)' || true" $logins

Transfer files to all of those logins

We copied up a common directory of dotfiles (including the crontab mentioned above):

for login in $logins ; do rsync -a ~/plastic-slices/dotfiles/ $login: && echo $login ; done

(We echo the login name after each one finishes just so we can tell that it's making progress.)

We also sometimes did this in parallel, by telling the for loop to run the commands in the background (replacing the final ; with an &):

for login in $logins ; do rsync -a ~/plastic-slices/dotfiles/ $login: && echo $login & done

If you had a login-specific directory of dotfiles, you could use that too:

for login in $logins ; do rsync -a ~/plastic-slices/dotfiles/$login $login: && echo $login & done

We also used this to copy files back from each login, such as to pull down the screen log from each login:

for login in $logins ; do echo "getting $login" ; rsync -a $login:screenlog.0 $login.log ; done

Running experiments

As of Baseline 5, we used two layers of 'screen' processes to run the experiments. First, we ran screen on a local machine for each slice, with a virtual terminal for each login. We had a .screenrc file for each slice to automate this. (FIXME: Include or link to these here.)

Then, on each login, we ran 'screen' with a single virtual terminal, so that (a) if the connection from the local system to the remote login got disconnected, we could log back in and reconnect to the screen process; (b) we could use screen's logging functionality to capture all the output from the experiment (and retrieve it later).

Launch them all

We used this to launch them, one at a time:

for slice in plastic-{101..110} ; do screen -S $slice -c ~/plastic-slices/screenrc/screenrc-$slice ; done

Detach from each (with C-a d) after it launches, and the next will launch.

Connect to one

We could then connect to them, plastic-101 in this example:

screen -r plastic-101

The '-S $slice' in the initial launch command above is what enables this handy trick -- it's otherwise pretty hard to keep track of which 'screen' process corresponds to which slice.

Start screen running and logging on a remote plnode

We had an alias for this:

experiment-start

This was an alias in .bashrc:

alias experiment-start='sudo rm -f screenlog.0 ; sudo script -c screen /dev/null'

That oddness is necessary because 'screen' on a MyPLC plnode can't handle long usernames, so we needed to become root in order to launch it. But if you just do 'sudo screen', it complains "Cannot access '/dev/pts/0': No such file or directory", because you're in a VM on the plnode; the 'script ... /dev/null' trick avoids that problem, since 'script' allocates a pseudo-terminal for 'screen' and sends its own typescript output to /dev/null.

We then had a .screenrc file containing

screen -L su - $SUDO_USER

on the remote plnode; since we didn't actually want to run the experiments as root, this opens a window running as the original user ('-L' turns on screen's logging).

Reconnect to a screen session on a remote plnode

reconnect

This is an alias in .bashrc:

alias reconnect='sudo script -c "screen -dr" /dev/null'

which reconnects to one of the screens created earlier.

Dump hardcopy logs

In Baseline 4, we just ran a local 'screen' process, and used somewhat different .screenrc files, to create long scrollback buffers and then dump them to disk with screen's 'hardcopy' function.

This dumps the logs, removes zero-length logs (from screen windows that didn't exist), and removes the blank lines from the top and bottom of each file.

cd ~/plastic-slices/baseline-logs/baseline-4
for slice in plastic-{101..110} ; do for i in {0..13} ; do screen -S $slice -p $i -X hardcopy -h $slice-hardcopy-$i.log ; sleep 1 ; done ; done
for i in * ; do test -s $i || rm $i ; done
sed -i -e '/./,$!d' *
sed -i -e :a -e '/^\n*$/{$d;N;ba' -e '}' *
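As a sanity check, the two sed scripts above can be tried on a throwaway file (GNU sed is assumed, as in the originals):

```shell
# A throwaway file with blank lines at both ends, like a raw hardcopy dump.
printf '\n\nline one\nline two\n\n\n' > /tmp/hardcopy.test

# Drop the blank lines from the top of the file...
sed -i -e '/./,$!d' /tmp/hardcopy.test
# ...and then from the bottom.
sed -i -e :a -e '/^\n*$/{$d;N;ba' -e '}' /tmp/hardcopy.test

cat /tmp/hardcopy.test   # prints only "line one" and "line two"
```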