wiki:GpoLab/MyplcReferenceImplementation

Version 3 (modified by chaos@bbn.com, 9 years ago) (diff)


GPO Lab Reference Implementation for Installation of MyPLC

Purpose

The purpose of this document is to provide a reference implementation for the installation of a MyPLC-based PlanetLab deployment at GENI sites. The contents of this document are based on experience setting up MyPLC-based PlanetLab deployments at the GPO. Following this document exactly should give you a working MyPLC-based PlanetLab deployment that supports the GENI AM API using the SFA software; however, this document is intended only as an example configuration.

Scope

This document is intended for GENI site operators who would like to perform a fresh install of MyPLC with SFA, or for those who have MyPLC deployed and would like to upgrade SFA.

Variables

A few variables will be set according to your specific situation. GPO lab values for these variables are listed below for reference.

Variable | GPO Values | Description | Important Notes
<base_os> | Fedora 8 | The OS on which you are installing MyPLC |
<myplc_distribution> | planetlab-f8-i386-5.0 | The MyPLC distribution you are using, comprising base OS, architecture, and PLC version |
<myplc_baseurl> | http://build.planet-lab.org/planetlab/f8/planetlab-f8-i386-5.0-rc14/RPMS | URL of the MyPLC repository you will be using |
<myplc_name> | | Name for your MyPLC instance and default site |
<myplc_shortname> | | Abbreviated name for your MyPLC instance |
<mgmt_login_base> | | The prefix for usernames associated with all slices at a site | Do not use underscores here, or you'll get "PLC: Bootstrapping the database: [FAILED]" at PLC startup
<myplc_root_user> | root@gpolab.bbn.com | Initial administrative user of the MyPLC instance | Do not use a plus character here, or sfa_import.py will fail later
<myplc_root_password> | | Password for <myplc_root_user> |
<myplc_support_email> | plc-admin@myplc.gpolab.bbn.com | Email address for MyPLC-generated support emails |
<myplc_www_host> | myplc.gpolab.bbn.com | Hostname or IP address of the MyPLC web server |
<myplc_api_host> | myplc.gpolab.bbn.com | Hostname or IP address of the MyPLC API server |
<myplc_db_host> | localhost.localdomain | Hostname or IP address of the MyPLC database server |
<myplc_boot_host> | myplc.gpolab.bbn.com | Hostname or IP address of the MyPLC boot server |
<myplc_dns1> | GPO lab DNS server 1 IP address | IP address of DNS server 1 |
<myplc_dns2> | GPO lab DNS server 2 IP address | IP address of DNS server 2 |
<public_site_name> | myplc.gpolab.bbn.com | Full name of the public site |
<public_site_shortname> | myplc | Abbreviated name of the public site |
<public_login_base> | gpolab | Prefix for usernames of PlanetLab slices |
<myplc_deployment> | planetlab-f8-i386 | Deployment string containing base OS and architecture information |
<sfa_git_tag> | sfa-geni-gec9 | Latest recommended stable tag of the SFA software in the git repository |

Installing MyPLC from Scratch

Step 1: Install the Base OS

The PlanetLab team currently supports Fedora 8 and Fedora 12 as the base OS for MyPLC. If you choose a Fedora distribution other than the one the GPO lab uses, some of the steps in this section may not apply.

SELinux

Edit /etc/selinux/config (as root) to set SELINUX=disabled. You will need to reboot the machine for this change to take effect.

Firewall

Below are important firewall considerations:

  • TCP port 80 access is needed for the MyPLC web interface and the PLC API
  • TCP port 443 access is needed for the MyPLC web interface and the PLC API
    • Needed by PlanetLab nodes
    • Useful for MyPLC administrators
    • Can be used by experimenters
  • TCP port 22 access is important for MyPLC administrators
  • TCP port 12346 access is needed for slice creation through the SFA server
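
As a sketch, the ports above could be opened with iptables rules like the following. This is not a complete firewall policy; integrate it with whatever firewall management your site already uses:

```shell
# Open MyPLC-related TCP ports (sketch only -- adapt to your site's firewall policy)
for port in 22 80 443 12346; do
    sudo iptables -A INPUT -p tcp --dport "$port" -j ACCEPT
done
# Persist the rules across reboots on Fedora:
sudo service iptables save
```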

Step 2: Install MyPLC

At this point, you need to choose your MyPLC distribution. Choose one that matches the base OS you selected in Step 1: Install the Base OS and your architecture. It is important to choose a release candidate as a repository. More information on choosing a repository can be found at https://svn.planet-lab.org/wiki/MyPLCUserGuide#NoteonversionsLocatingabuild.

  1. Add your MyPLC repository:
    sudo sh -c 'echo "[myplc]" > /etc/yum.repos.d/myplc.repo'
    sudo sh -c 'echo "name=MyPLC" >> /etc/yum.repos.d/myplc.repo'
    sudo sh -c 'echo "baseurl=<myplc_baseurl>" >> /etc/yum.repos.d/myplc.repo'
    sudo sh -c 'echo "enabled=1" >> /etc/yum.repos.d/myplc.repo'
    sudo sh -c 'echo "gpgcheck=0" >> /etc/yum.repos.d/myplc.repo'
    
  2. Install MyPLC:
    sudo yum install myplc
    
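
For reference, the repository stanza written by those echo commands can also be created in a single step with a heredoc; this is equivalent (substitute your actual <myplc_baseurl>):

```shell
# Write /etc/yum.repos.d/myplc.repo in one command (same content as the echo lines above)
sudo tee /etc/yum.repos.d/myplc.repo > /dev/null <<'EOF'
[myplc]
name=MyPLC
baseurl=<myplc_baseurl>
enabled=1
gpgcheck=0
EOF
```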

Step 3: Configure the MyPLC Default Site

There are two paths you can take in terms of setting up your MyPLC sites.

  • Path A: set up one single site for both management of MyPLC and management of PlanetLab nodes.
  • Path B:
    • Let PLC create a default site for administrators to manage PLC
    • Manually create another site for managing PlanetLab nodes.

A full explanation of these two choices can be found at https://svn.planet-lab.org/wiki/MyPLCUserGuide#CreatingasiteandPIaccount. The GPO currently follows Path A and uses only one site on its MyPLC machines; note that this increases the amount of work required to maintain the site. Both paths are outlined below.

Path A: One Single Site

A Part 1: Configuring the MyPLC Site

  1. Run the plc-config-tty program to configure PLC:
    sudo plc-config-tty
    

In plc-config-tty:

  • Enter 'u' to make "usual" changes. Change the following settings (leave the others as they are):
    • PLC_NAME : <public_site_name>
    • PLC_SHORTNAME : <public_site_shortname>
    • PLC_SLICE_PREFIX : <public_login_base>
    • PLC_ROOT_USER : <myplc_root_user>
    • PLC_ROOT_PASSWORD : <myplc_root_password>
    • PLC_MAIL_ENABLED : [false] true
    • PLC_MAIL_SUPPORT_ADDRESS : <myplc_support_email>
    • PLC_DB_HOST : <myplc_db_host>
    • PLC_API_HOST : <myplc_api_host>
    • PLC_WWW_HOST : <myplc_www_host>
    • PLC_BOOT_HOST : <myplc_boot_host>
    • PLC_NET_DNS1 : <myplc_dns1>
    • PLC_NET_DNS2 : <myplc_dns2>
  • Enter command (u for usual changes, w to save, ? for help) w
  • Enter command (u for usual changes, w to save, ? for help) q
  2. Start plc:
    sudo service plc start
    
  3. Obtain the database password generated by PLC:
    sudo plc-config-tty
    

In plc-config-tty:

  • Enter 's PLC_DB_PASSWORD' to display the PLC database password, and note it down (SFA will need this later).

A Part 2: Setting the site as public

Every time the plc service gets restarted (e.g. on boot), the site will be set as private. The site that controls the PlanetLab nodes must be public for experimenters to use it. Contact gpo-infra@geni.net if you'd like the workaround we use to automate this.

Set the default site as public:

$ sudo plcsh
>>> UpdateSite('<public_login_base> Central', {'is_public': True})
>>> exit
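
As a hypothetical sketch only (not necessarily the workaround the GPO uses), and assuming plcsh accepts commands on standard input, since it behaves like a Python interpreter, the update could be scripted and run periodically; the script path and cron schedule below are assumptions:

```shell
# Re-mark the default site public after plc restarts (hypothetical workaround).
# Could be run from cron, e.g.: */15 * * * * root /usr/local/sbin/make-site-public
echo "UpdateSite('<public_login_base> Central', {'is_public': True})" | sudo plcsh
```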

Path B: Two Sites

B Part 1: Configuring MyPLC Default Site

  1. Run the plc-config-tty program to configure PLC:
    sudo plc-config-tty
    

In plc-config-tty:

  • Enter 'u' to make "usual" changes. Change the following settings (leave the others as they are):
    • PLC_NAME : <myplc_name>
    • PLC_SHORTNAME : <myplc_shortname>
    • PLC_SLICE_PREFIX : <mgmt_login_base>
    • PLC_ROOT_USER : <myplc_root_user>
    • PLC_ROOT_PASSWORD : <myplc_root_password>
    • PLC_MAIL_ENABLED : [false] true
    • PLC_MAIL_SUPPORT_ADDRESS : <myplc_support_email>
    • PLC_DB_HOST : <myplc_db_host>
    • PLC_API_HOST : <myplc_api_host>
    • PLC_WWW_HOST : <myplc_www_host>
    • PLC_BOOT_HOST : <myplc_boot_host>
    • PLC_NET_DNS1 : <myplc_dns1>
    • PLC_NET_DNS2 : <myplc_dns2>
  • Enter command (u for usual changes, w to save, ? for help) w
  • Enter command (u for usual changes, w to save, ? for help) q
  2. Start plc:
    sudo service plc start
    
  3. Obtain the database password generated by PLC:
    sudo plc-config-tty
    

In plc-config-tty:

  • Enter 's PLC_DB_PASSWORD' to display the PLC database password, and note it down (SFA will need this later).

B Part 2: Create and Configure MyPLC Public Site

You now need to create a site for this MyPLC instance where your nodes and slices are managed. Instructions on how to do this through the web interface can be found at https://svn.planet-lab.org/wiki/MyPLCUserGuide#CreatingasiteandPIaccount.

When filling out the web form, you should use the following information:

  • Site name: <public_site_name>
  • Login base: <public_login_base>
  • Abbreviated name: <public_site_shortname>
  • URL: Doesn't matter
  • Latitude: Doesn't matter
  • Longitude: Doesn't matter
  • PI First Name: <Admin's first name>
  • PI Last Name: <Admin's last name>
  • PI Title: Doesn't matter
  • PI Phone: Doesn't matter
  • PI Email: <Admin's email address> (this will be used for username)
  • PI Password: <Admin password>

Again, once you submit the registration, the next steps can be found at https://svn.planet-lab.org/wiki/MyPLCUserGuide#CreatingasiteandPIaccount. Don't forget to upload the public keys for these new users.

Step 4: Create Nodes

Configure the Nodes

Add the node's primary interface and configure the node through the MyPLC web interface or using plcsh. For information on creating nodes through the web interface, see https://svn.planet-lab.org/wiki/MyPLCUserGuide#Installingnodes.

Below is an example of how to configure a node with static interfaces:

Variables

Variable | Description | Important Notes
<node_fqdn> | Fully qualified domain name of the node |
<node_dns1> | IP address of the primary DNS server for this interface |
<node_dns2> | IP address of the secondary DNS server for this interface |
<node_network> | Network address for this interface |
<node_netmask> | Netmask for this interface |
<node_gateway> | Gateway for this interface |
<node_broadcast> | Broadcast address for this interface |
<node_ipaddr> | IP address for this interface |

Steps

  1. Determine your <myplc_deployment>:
    ls /var/www/html/boot/ | grep 'bootstrapfs-.*\.tar\.bz2'
    

The output will include:

bootstrapfs-<myplc_deployment>.tar.bz2
  2. Open plcsh:
    sudo plcsh
    
  3. Type the following commands in plcsh to configure the node:
    newnode={}
    newnode["boot_state"]="reinstall"
    newnode["model"]="Custom"
    newnode["deployment"]="<myplc_deployment>"
    newnode["hostname"]="<node_fqdn>"
    AddNode("<public_login_base>",newnode) 
    
  4. Type the following commands in plcsh to configure the interface:
    newinterface={}
    newinterface["network"]="<node_network>"
    newinterface["is_primary"]=True
    newinterface["dns1"]="<node_dns1>"
    newinterface["dns2"]="<node_dns2>"
    newinterface["mac"]=""
    newinterface["netmask"]="<node_netmask>"
    newinterface["gateway"]="<node_gateway>"
    newinterface["broadcast"]="<node_broadcast>"
    newinterface["ip"]="<node_ipaddr>"
    newinterface["method"]="static"
    newinterface["type"]="ipv4"
    AddInterface("<node_fqdn>",newinterface)
    
  5. If desired, add other interfaces:
    newinterface={}
    newinterface["network"]="<node_network>"
    newinterface["is_primary"]=False
    newinterface["dns1"]="<node_dns1>"
    newinterface["dns2"]="<node_dns2>"
    newinterface["mac"]=""
    newinterface["netmask"]="<node_netmask>"
    newinterface["gateway"]="<node_gateway>"
    newinterface["broadcast"]="<node_broadcast>"
    newinterface["ip"]="<node_ipaddr>"
    newinterface["method"]="static"
    newinterface["type"]="ipv4"
    AddInterface("<node_fqdn>",newinterface)
    
  6. Exit from plcsh:
    exit
    
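
To sanity-check the node you just created, you can re-open plcsh (sudo plcsh) and query it. GetNodes and GetInterfaces are standard PLCAPI methods, though the exact output shape varies by PLC version:

```
GetNodes("<node_fqdn>")
GetInterfaces({"ip": "<node_ipaddr>"})
```

The first call should return a record for the node with boot_state "reinstall"; the second should show the interface you added.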

Obtain the Node's Boot Image

  1. From the node page, change the Download pulldown menu to "Download ISO image for <node_fqdn>". This will take you to a download screen.
  2. Click "Download ISO image".

Boot the Node

Boot the node from the boot media you just downloaded, and verify that the MyPLC web interface shows that the node is in boot state.

Important Notes on PlanetLab Node Interfaces

If you used <myplc_baseurl>=http://build.planet-lab.org/planetlab/f8/planetlab-f8-i386-5.0-rc14/RPMS, then you will need to downgrade your util-vserver-pl package:

  • Version packaged with this repository: util-vserver-pl-0.3.31.planetlab
  • Target version: util-vserver-pl-0.3-17.planetlab
  • If you cannot find this RPM, please contact gpo-infra@geni.net
  • Install:
    rpm -Uv --force util-vserver-pl-0.3-17.planetlab
    
  • Reboot the node for the change to take effect

Adding SFA to MyPLC

Step 0: Preparing to Upgrade SFA

This step is only for those who were already running an older version of SFA, including RPM-based versions sfa-0.9-14 or earlier, and want to update their SFA versions.

Prepare SFA for an upgrade:

sudo /etc/init.d/sfa stop
sudo sfa-nuke-plc.py
sudo rm /etc/sfa/trusted_roots/*.gid
sudo rm -rf /var/lib/sfa/

Step 1: Install SFA

From some machine that has git installed, do the following:

  1. Get a tarball of the <sfa_git_tag> tag of SFA:
    git clone git://git.planet-lab.org/sfa.git
    gittag=<sfa_git_tag>
    cd sfa
    git archive --format=tar --prefix=${gittag}/ ${gittag} | gzip > ${gittag}.tar.gz
    

Copy the tarball over to the MyPLC machine, and from there do the following:

  1. Install SFA prerequisites:
    sudo yum update fedora-release
    sudo yum install m2crypto python-dateutil python-psycopg2 myplc-config pyOpenSSL python-ZSI libxslt-python xmlsec1-openssl-devel python-lxml
    sudo yum upgrade pyOpenSSL python-lxml
    
  2. Compile SFA code on the MyPLC machine:
    mkdir ~/src
    cd ~/src
    tar xvzf ~/<sfa_git_tag>.tar.gz
    cd <sfa_git_tag>
    make
    

Expect about 6 lines of output and no obvious errors.

  3. Install SFA:
    sudo make install
    

Step 2: Configure SFA

  1. Configure SFA using the sfa-config-tty command:
    $ sudo sfa-config-tty
    
    • Enter command (u for usual changes, w to save, ? for help) u
      • SFA_INTERFACE_HRN: plc.<public_login_base>
      • SFA_REGISTRY_ROOT_AUTH: plc
      • SFA_REGISTRY_HOST : <myplc_api_host>
      • SFA_AGGREGATE_HOST : <myplc_api_host>
      • SFA_SM_HOST : <myplc_api_host>
      • SFA_PLC_USER: <myplc_root_user>
      • SFA_PLC_PASSWORD: <myplc_root_password>
      • SFA_PLC_DB_HOST : <myplc_db_host>
      • SFA_PLC_DB_USER : postgres
      • SFA_PLC_DB_PASSWORD: <myplc_db_password>
      • SFA_PLC_URL : https://localhost:443/PLCAPI/
    • Enter command (u for usual changes, w to save, ? for help) w
    • Enter command (u for usual changes, w to save, ? for help) q
  2. Start up SFA once, to create the initial /etc/sfa/sfa_config.py, and stop it again:
    sudo service sfa reload
    
  3. Import the PLC database into SFA:
    sudo sfa-import-plc.py 
    

  4. Start up SFA again:
    sudo service sfa restart
    

Additional Features Used in the GPO Lab

Trust a Remote Slice Authority

Variables

Variable | GPO Values | Description | Important Notes
<cert_base> | pgeni.gpolab.bbn.com | Filename for the certificate (without file extension) |
<cert_url> | http://www.pgeni.gpolab.bbn.com/ca-cert | Base URL for the certificate |

Get a copy of the certificate:

wget <cert_url>/<cert_base>.pem

Copy that certificate into a .crt file under /etc/sfa/trusted_roots:

sudo cp <cert_base>.pem /etc/sfa/trusted_roots/<cert_base>.crt

Restart sfa:

sudo service sfa restart

Reference

As an example, below is the process to configure your SFA instance to allow sliver creation for slices created at the GPO Lab slice authority, pgeni.gpolab.bbn.com.

wget http://www.pgeni.gpolab.bbn.com/ca-cert/pgeni.gpolab.bbn.com.pem
sudo cp pgeni.gpolab.bbn.com.pem /etc/sfa/trusted_roots/pgeni.gpolab.bbn.com.crt
sudo service sfa restart

Set Up Database Vacuum

Description

Postgresql databases are supposed to be vacuumed on a regular basis; however, MyPLC does not set this up for you. On GPO lab MyPLC machines, we currently vacuum the database daily and run a full vacuum monthly.

Variables

Variable | GPO Values | Description | Important Notes
<username> | postgres | Username of the owner of the postgresql database | It is best to use the owner of the postgresql database rather than the owner of the planetlab5 database
<database_name> | planetlab5 | Name of the database that needs vacuuming |

Reference

For reference, below are the commands we use for this.

Vacuum:

/usr/bin/vacuumdb --username <username> --analyze <database_name>

Full Vacuum:

/usr/bin/vacuumdb --username <username> --analyze --full <database_name>
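
For reference, a sketch of how these commands could be scheduled; the specific times and the use of /etc/cron.d are our assumptions, not a GPO-published configuration:

```
# Hypothetical /etc/cron.d/myplc-vacuum -- adjust times and paths for your site.
# Daily analyze vacuum at 03:10; full vacuum on the 1st of each month at 04:20.
10 3 * * * root /usr/bin/vacuumdb --username <username> --analyze <database_name>
20 4 1 * * root /usr/bin/vacuumdb --username <username> --analyze --full <database_name>
```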

Report Slice Expiration to Experimenters

Description

MyPLC comes with a script that notifies users of expiring slices via email, but this script is not run by default. The script is located on MyPLC machines at /etc/support-scripts/renew_reminder.py. On GPO MyPLC machines, we use a cron job to run this script once a day. The script has a companion renew_reminder_logrotate configuration which you may want to add to /etc/logrotate.d.

Variables

Variable | GPO Values | Description | Important Notes
<expires> | 4 | Number of days before slice expiration at which to start notifying slice owners |

Reference

For reference, below are the commands we use for this.

python /etc/support-scripts/renew_reminder.py --expires <expires>

Set Up Fast Sliver Creation

Description

By default, newly requested slivers are created on each node within roughly 15 minutes. You can shorten this time significantly by modifying the options passed to the node manager daemon, by putting options in /etc/sysconfig/nm on each node.

Variables

Variable | GPO Values | Description | Important Notes
<period> | 30 | The base value of the frequency at which node manager runs (in seconds) |
<random> | 15 | Upper bound of the randomly generated splay range (in seconds) |

Reference

For reference, below is the contents of our /etc/sysconfig/nm:

OPTIONS="-p <period> -r <random> -d"
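
As we understand the -p and -r options (an assumption worth verifying against your node manager version), each cycle fires after the base period plus a random splay, so the GPO values yield a delay somewhere between <period> and <period>+<random> seconds:

```shell
# With the GPO values, node manager cycles fire every 30-45 seconds
# (assumes -r adds a random 0..15 s splay on top of the 30 s base period).
period=30
random_splay=15
echo "between ${period} and $((period + random_splay)) seconds"
```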

Increasing the frequency at which node manager creates new slices will cause the MyPLC httpd SSL logs to grow at a much faster rate. Make sure to administer log rotation accordingly.

For reference, below is the GPO lab httpd logrotate configuration for MyPLC machines:

/var/log/httpd/*log {
    compresscmd /usr/bin/bzip2
    compressext .bz2
    compress
    daily
    rotate 35
    missingok
    notifempty
    sharedscripts
    postrotate
        /bin/kill -HUP `cat /var/run/httpd.pid 2>/dev/null` 2> /dev/null || true
    endscript
}