Version 1 (modified by Vic Thomas, 11 years ago)


Emulab differences from PlanetLab

  • Looks to be Xen-based instead of vserver-based
  • 100+ different types of OS images that can be instantiated - fedora, windows xp, ubuntu, planetlab, bsd, ...
  • A subclass of emulab nodes are nodes that are instantiated with planetlab images, so maybe we only support those nodes...
  • experiments can be composed of heterogeneous types of OS images.
  • no /etc/slicename file, but we might be able to derive a slicename from the node's hostname ( --> projectName_experimentName)
  • where to get user keys from? Login appears to be done by name/password instead of ssh keys. Why then do we upload our SSH key when registering for emulab?
  • idle-swap. if an experiment is idle for 2 hours, it will be swapped out. A node is considered non-idle if:
     Any network activity on the experimental network 
     Substantial activity on the control network 
     TTY/console activity on nodes 
     High CPU activity on nodes 
     Certain external events, like rebooting a node with node_reboot
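
The slicename derivation mentioned above can be sketched in Python. The hostname layout (node.experiment.project.emulab.net) and the helper name are assumptions that would need verifying against real Emulab nodes:

```python
def slicename_from_hostname(hostname):
    """Derive a PlanetLab-style slicename (projectName_experimentName) from
    an Emulab node hostname, assuming the hypothetical layout
    <node>.<experiment>.<project>.emulab.net."""
    parts = hostname.split(".")
    if len(parts) < 4:
        raise ValueError("unexpected hostname format: %s" % hostname)
    experiment, project = parts[1], parts[2]
    return "%s_%s" % (project, experiment)
```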

We have two problems -- 1) idle swap will prevent pacmand from running, and 2) running pacmand regularly will prevent idle swap. Problem 2 would solve problem 1 by preventing the idle swap, but probably put us at odds with acceptable use policy.

Emulab experiments do not seem to be long running by design, so maybe 1 isn't an issue. We'd then need to solve 2, to prevent pacmand from keeping an idle slice alive when it should be swapped. A possible solution would be a pacmand that never polls for updates. It could only be updated by pushing to it.
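A push-only pacmand could be sketched as a small listener that applies an update only when one is pushed to it, so an idle slice generates no periodic polling traffic. The message format and handler below are hypothetical, not part of any existing pacmand interface:

```python
import json
import socketserver

def apply_update(payload):
    """Validate a pushed update request (hypothetical JSON format) and
    return the list of packages to act on."""
    msg = json.loads(payload)
    if msg.get("action") != "update":
        raise ValueError("unsupported action: %r" % msg.get("action"))
    return msg.get("packages", [])

class UpdateHandler(socketserver.StreamRequestHandler):
    """One pushed update per connection; no polling is ever initiated."""
    def handle(self):
        line = self.rfile.readline()
        try:
            packages = apply_update(line.decode())
            self.wfile.write(b"OK %d\n" % len(packages))
        except (ValueError, UnicodeDecodeError):
            self.wfile.write(b"ERR\n")
```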


  • start by creating a certificate at, select profile and "generate ssl certificate".
  • log into using your name and password

  • verify your cert is in /users/<username>/encrypted.pem and /users/<username>/emulab.pem
  • put your passphrase in /users/<username>/passphrase
  • run some test scripts
      cd /usr/testbed/protogeni/test
      python ./ -p /users/smbaker/passphrase
      python ./ -n exampleslice -p /users/smbaker/passphrase
      python ./ -n exampleslice -p /users/smbaker/passphrase
      python ./ -n exampleslice -p /users/smbaker/passphrase

To-Do list

  • Start off by only supporting planetlab images under emulab. That would let us test protogeni without having to support an exhaustive list of different os types.
  • Support for different architectures in rpm packages. This is needed to support heterogeneous OS types. Look at how RPM and YUM handle this. Maybe we can leverage our tag support.
  • Get the slicename from somewhere, probably from the node's hostname
  • Get user keys from somewhere
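
The RPM architecture item above could start from a compatibility map like the one RPM/YUM effectively apply. The table below is a simplified assumption for illustration, not RPM's actual logic:

```python
# Simplified sketch of per-machine package-architecture compatibility.
# Real RPM/YUM arch handling is richer; this map is an assumption.
COMPAT_ARCHES = {
    "x86_64": ["x86_64", "i686", "i386", "noarch"],
    "i686":   ["i686", "i386", "noarch"],
    "i386":   ["i386", "noarch"],
}

def package_compatible(package_arch, machine_arch):
    """Return True if a package built for package_arch can be installed
    on a machine of machine_arch; unknown machines accept only noarch."""
    return package_arch in COMPAT_ARCHES.get(machine_arch, ["noarch"])
```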

Two types of ProtoGENI nodes

  • Exclusive node
    • Whole machine is reserved to one user
    • Seems to reboot a lot
    • Login using SSH key and emulab username
    • Currently runs RH9
    • mktemp problem in initscript, -t option is not understood (FIXABLE)
    • PYXML dependency issues (FIXABLE)
    • missing optparse (REALLY BAD)
    • python 2.2 (REALLY BAD)
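
One way around the missing optparse on the Python 2.2 exclusive nodes is a getopt fallback. The option names mirror the test scripts above, but the wrapper itself is hypothetical:

```python
import getopt

try:
    from optparse import OptionParser  # absent on Python 2.2 nodes
except ImportError:
    OptionParser = None

def parse_args(argv):
    """Parse -n <slice> -p <passphrase>, falling back to getopt when
    optparse is unavailable (e.g. on Python 2.2)."""
    if OptionParser is not None:
        parser = OptionParser()
        parser.add_option("-n", dest="slice")
        parser.add_option("-p", dest="passphrase")
        opts, _ = parser.parse_args(argv)
        return opts.slice, opts.passphrase
    opts, _ = getopt.getopt(argv, "n:p:")
    found = dict(opts)
    return found.get("-n"), found.get("-p")
```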

Feasibility Writeup

ProtoGENI nodes can be classified as either exclusive nodes or shared nodes. Exclusive nodes currently run Red Hat 9, an old operating system that lacks a sufficiently recent version of Python for the Raven tools. These nodes will have to be upgraded to a more modern Linux distribution, such as Fedora 8, before Raven can be used with them.

Shared ProtoGENI nodes are already running Fedora 8 and Raven is compatible with these nodes, although several features need to be provided for Raven to be fully supported.

  • First, Raven needs a way to identify the name (or HRN or URN) of the node and of the slice in which it is operating. For example, on PlanetLab/SFA nodes, this is done by providing /etc/node.gid and /etc/slice.gid files containing the GID of the node/slice (which in turn contains the URN and/or HRN). Similar files would need to be made available on ProtoGENI nodes.
  • Second, Raven needs a set of public keys for users that are authorized to use the slice. On PlanetLab/SFA nodes, this is done by making a local XMLRPC call to the nodemanager that runs on the node, asking it for the set of user public keys.
  • Third, a method needs to be provided for automatically installing the Raven tools on the nodes. On PlanetLab/SFA nodes, this is done by the user requesting that the Stork Initscript be used for their slice.
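
If ProtoGENI nodes did provide GID files as suggested above, the URN could be pulled from the dumped certificate text (e.g. the output of `openssl x509 -noout -text`). The regex below assumes the standard GENI urn:publicid:IDN+ form and that whitespace or commas terminate the URN in the dump:

```python
import re

# GENI URNs follow the "urn:publicid:IDN+authority+type+name" convention;
# treating commas and whitespace as terminators is an assumption about
# the surrounding certificate dump.
URN_RE = re.compile(r"urn:publicid:IDN\+[^\s,]+")

def extract_urn(cert_text):
    """Return the first GENI URN found in dumped certificate text, or None."""
    match = URN_RE.search(cert_text)
    return match.group(0) if match else None
```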