[[PageOutline]]

= GPO Lab Reference Implementation for Installation of MyPLC =

== Purpose ==

The purpose of this document is to provide a reference implementation for the installation of a MyPLC-based !PlanetLab deployment at GENI sites. The contents of this document are based on the GPO's experience setting up MyPLC-based !PlanetLab deployments. Following this document exactly should give you a working MyPLC-based !PlanetLab deployment that supports the GENI AM API using the SFA software; however, this document is intended only to be an example configuration.

== Scope ==

This document is intended for GENI site operators who would like to perform a fresh install of MyPLC with SFA, or for those who already have MyPLC deployed and would like to upgrade SFA.

== Variables ==

A few variables will be set according to your specific situation. GPO lab values for these variables are listed below for reference.[[BR]][[BR]]
'''Important note:''' You must choose the values of the login_base variables carefully. After setting these values, it is strongly recommended not to change them. Furthermore, it is known that including a dash or an underscore in the values of the login_base variables can cause problems. For these reasons, the GPO recommends that you choose an ''alphanumeric'' value for the login_base variables.[[BR]][[BR]]
'''Important note:''' When setting values for the PLC variable PLC_DB_HOST and the SFA variable SFA_PLC_DB_HOST, unless you ''explicitly'' modify the PLC configuration to not manage DNS, you must set PLC_DB_HOST to localhost.localdomain. This tutorial assumes that you will use localhost.localdomain as the value of PLC_DB_HOST.

|| '''Variable''' || '''GPO Values''' || '''Description''' || '''Important Notes''' ||
|| || Fedora 8 || The OS on which you are installing MyPLC || ||
|| || planetlab-f8-i386-5.0 || The MyPLC distribution you are using, comprising base OS, architecture, and PLC version || ||
|| || http://build.planet-lab.org/planetlab/f8/planetlab-f8-i386-5.0-rc14/RPMS || URL of the MyPLC repository you will be using || ||
|| || root@gpolab.bbn.com || The MyPLC application's initial administrative user || Do not use a plus character here, or sfa_import.py will fail later ||
|| || || The MyPLC application's password for the administrative user || ||
|| || plc-admin@myplc.gpolab.bbn.com || Email address for MyPLC-generated support emails || ||
|| || myplc.gpolab.bbn.com || Hostname or IP address of the MyPLC web server, API server, and boot server || ||
|| || GPO lab DNS server 1 IP address || IP address of DNS server 1 || ||
|| || GPO lab DNS server 2 IP address || IP address of DNS server 2 || ||
|| || myplc.gpolab.bbn.com || Full name of the public site || ||
|| || myplc || Abbreviated name of the public site || ||
|| || gpolab || Prefix for usernames of !PlanetLab slices || This variable should '''not''' be changed after you set it.[[BR]]We recommend using only alphanumeric strings as values for the login base. ||
|| || planetlab-f8-i386 || Deployment string containing base OS and architecture information || ||
|| || 42.3905 || Latitude of the machine hosting the MyPLC site || This should be a double, with no surrounding quotes ||
|| || -71.1474 || Longitude of the machine hosting the MyPLC site || This should be a double, with no surrounding quotes ||
|| || sfa-geni-gec9 || Latest recommended stable tag of the SFA software in the git repository || ||

If you choose to use the [#PathB:TwoSites Two Site configuration], then you will need to define the following variables in addition to the variables above. More information on the single site vs. two site configuration can be found [#Step3:ConfiguringMyPLCDefaultSite here].

|| '''Variable''' || '''Description''' || '''Important Notes''' ||
|| || Name for your MyPLC instance and default site || ||
|| || Abbreviated name for your MyPLC instance || ||
|| || The prefix for usernames associated with all slices at a site || This variable should '''not''' be changed after you set it.[[BR]]We recommend using only alphanumeric strings as values for the login base. ||

= Installing MyPLC from Scratch =

== Step 1: Install the Base OS ==

The !PlanetLab team currently supports Fedora 8 and Fedora 12 as the base OS for MyPLC. If you choose a different Fedora distribution than the one the GPO lab uses, some of the steps in this section may not apply.

=== SELinux ===

Edit {{{/etc/selinux/config}}} (as root) to set {{{SELINUX=disabled}}}. You will need to reboot the machine for this change to take effect.

=== Firewall ===

Below are important firewall considerations; a sketch of matching iptables rules follows this list:
 * TCP port 80 access is needed for access to the MyPLC web interface and the PLC API
 * TCP port 443 access is needed for access to the MyPLC web interface and the PLC API
   * Needed by !PlanetLab nodes
   * Useful for MyPLC administrators
   * Can be used by experimenters
 * TCP port 22 access is important for MyPLC administrators
 * TCP port 12346 access is needed for slice creation through the SFA server
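As an illustration only (not part of the original GPO configuration), the rules below show one way to open these ports with iptables on a Fedora machine; adapt them to your local firewall policy:

{{{
# Sketch: allow the ports MyPLC and SFA need (adjust to local policy)
iptables -A INPUT -p tcp --dport 80 -j ACCEPT     # MyPLC web interface and PLC API
iptables -A INPUT -p tcp --dport 443 -j ACCEPT    # MyPLC web interface and PLC API (SSL)
iptables -A INPUT -p tcp --dport 22 -j ACCEPT     # SSH for MyPLC administrators
iptables -A INPUT -p tcp --dport 12346 -j ACCEPT  # SFA server (slice creation)
service iptables save                             # persist the rules across reboots on Fedora
}}}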
== Step 2: Install MyPLC ==

At this point, you need to choose your MyPLC distribution. You should choose one that matches the base OS chosen in [#Step1:InstalltheBaseOS Step 1] and your architecture. It is important to choose a release candidate as a repository. More information on choosing a repository can be found at [https://svn.planet-lab.org/wiki/MyPLCUserGuide#NoteonversionsLocatingabuild].

1. Add your MyPLC repository:
{{{
sudo sh -c 'echo "[myplc]" > /etc/yum.repos.d/myplc.repo'
sudo sh -c 'echo "name= MyPLC" >> /etc/yum.repos.d/myplc.repo'
sudo sh -c 'echo "baseurl=" >> /etc/yum.repos.d/myplc.repo'
sudo sh -c 'echo "enabled=1" >> /etc/yum.repos.d/myplc.repo'
sudo sh -c 'echo "gpgcheck=0" >> /etc/yum.repos.d/myplc.repo'
}}}

2. Install MyPLC:
{{{
sudo yum install myplc
}}}
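For reference, with the GPO lab repository URL from the variables table filled into the {{{baseurl}}} line, the resulting {{{/etc/yum.repos.d/myplc.repo}}} would look like this (the {{{name}}} value is illustrative, since part of it is elided above):

{{{
[myplc]
name=MyPLC
baseurl=http://build.planet-lab.org/planetlab/f8/planetlab-f8-i386-5.0-rc14/RPMS
enabled=1
gpgcheck=0
}}}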
== Step 3: Configuring MyPLC Default Site ==

There are two paths you can take when setting up your MyPLC sites:
 * Path A: set up one single site for both management of MyPLC and management of !PlanetLab nodes.
 * Path B:
   * Let PLC create a default site for administrators to manage PLC
   * Manually create another site for managing !PlanetLab nodes

A full explanation of these two choices can be found at [https://svn.planet-lab.org/wiki/MyPLCUserGuide#CreatingasiteandPIaccount]. The GPO currently follows Path A and uses only one site on its MyPLC machines; however, this increases the amount of work required to maintain the site. Both paths are outlined below.

=== Path A: One Single Site ===

==== A Part 1: Configuring the MyPLC Site ====

1. Run the plc-config-tty program to configure PLC:
{{{
sudo plc-config-tty
}}}
In plc-config-tty:
 * Enter 'u' to make "usual" changes. Change the following settings (leave the others as they are):
   * PLC_NAME :
   * PLC_SHORTNAME :
   * PLC_SLICE_PREFIX :
   * PLC_ROOT_USER :
   * PLC_ROOT_PASSWORD :
   * PLC_MAIL_ENABLED : [false] true
   * PLC_MAIL_SUPPORT_ADDRESS :
   * PLC_DB_HOST : localhost.localdomain
   * PLC_API_HOST :
   * PLC_WWW_HOST :
   * PLC_BOOT_HOST :
   * PLC_NET_DNS1 :
   * PLC_NET_DNS2 :
 * Enter command (u for usual changes, w to save, ? for help) w
 * Enter command (u for usual changes, w to save, ? for help) q

2. Start plc:
{{{
sudo service plc start
}}}

3. Obtain the database password generated by PLC:
{{{
sudo plc-config-tty
}}}
In plc-config-tty:
 * Enter 's PLC_DB_PASSWORD' to display the PLC DB password, and note it down (SFA will need this later).

==== A Part 2: Setting the Site as Public ====

Every time the plc service is restarted (e.g. on boot), the site is set as private. The site that controls the !PlanetLab nodes must be public for experimenters to use it. ''Contact gpo-infra@geni.net if you'd like the workaround we use to automate this.''

Set the default site as public:
{{{
$ sudo plcsh
>>> UpdateSite(' Central', {'is_public': True})
>>> exit
}}}

You also need to set the correct latitude and longitude for this site, as they will be used by monitoring services for map overlays and visualization:
{{{
$ sudo plcsh
>>> UpdateSite(' Central', {'latitude': , 'longitude': })
>>> exit
}}}

=== Path B: Two Sites ===

==== B Part 1: Configuring the MyPLC Default Site ====

1. Run the plc-config-tty program to configure PLC:
{{{
sudo plc-config-tty
}}}
In plc-config-tty:
 * Enter 'u' to make "usual" changes. Change the following settings (leave the others as they are):
   * PLC_NAME :
   * PLC_SHORTNAME :
   * PLC_SLICE_PREFIX :
   * PLC_ROOT_USER :
   * PLC_ROOT_PASSWORD :
   * PLC_MAIL_ENABLED : [false] true
   * PLC_MAIL_SUPPORT_ADDRESS :
   * PLC_DB_HOST : localhost.localdomain
   * PLC_API_HOST :
   * PLC_WWW_HOST :
   * PLC_BOOT_HOST :
   * PLC_NET_DNS1 :
   * PLC_NET_DNS2 :
 * Enter command (u for usual changes, w to save, ? for help) w
 * Enter command (u for usual changes, w to save, ? for help) q

2. Start plc:
{{{
sudo service plc start
}}}

3. Obtain the database password generated by PLC:
{{{
sudo plc-config-tty
}}}
In plc-config-tty:
 * Enter 's PLC_DB_PASSWORD' to display the PLC DB password, and note it down (SFA will need this later).

==== B Part 2: Create and Configure MyPLC Public Site ====

You now need to create a site for this MyPLC instance where your nodes and slices are managed. Instructions on how to do this through the web interface can be found at [https://svn.planet-lab.org/wiki/MyPLCUserGuide#CreatingasiteandPIaccount]. Please specify the actual latitude and longitude of the site, because existing monitoring services use this information for map overlay and visualization purposes.

When filling out the web form, you should use the following information (a plcsh alternative is sketched below):
 * Site name:
 * Login base:
 * Abbreviated name:
 * URL: Doesn't matter
 * Latitude:
 * Longitude:
 * PI First Name:
 * PI Last Name:
 * PI Title: Doesn't matter
 * PI Phone: Doesn't matter
 * PI Email: (this will be used as the username)
 * PI Password:

Once you submit the registration, the next steps can be found at [https://svn.planet-lab.org/wiki/MyPLCUserGuide#CreatingasiteandPIaccount]. Don't forget to upload your public keys for these new users.
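If you prefer the command line, the public site can also be created from plcsh. The sketch below uses the standard PLCAPI {{{AddSite}}} call with placeholder values (substitute your own site name, login base, and coordinates; the latitude and longitude shown are the GPO values from the variables table, and the PI account can still be registered through the web interface):

{{{
$ sudo plcsh
>>> # Placeholder values for illustration only
>>> AddSite({'name': 'Example Public Site',
...          'abbreviated_name': 'example',
...          'login_base': 'example',
...          'latitude': 42.3905,
...          'longitude': -71.1474,
...          'is_public': True})
>>> exit
}}}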
== Step 4: Create Nodes ==

=== Configure the Nodes ===

Add the node's primary interface and configure the node through the MyPLC web interface or using plcsh. For information on creating nodes through the web interface, see [https://svn.planet-lab.org/wiki/MyPLCUserGuide#Installingnodes]. Below is an example of how to configure a node with static interfaces.

==== Variables ====

|| '''Variable''' || '''Description''' || '''Important Notes''' ||
|| || Fully qualified domain name of the node || ||
|| || IP address of the primary DNS server for this interface || ||
|| || IP address of the secondary DNS server for this interface || ||
|| || Subnet ID for this interface || ||
|| || Netmask for this interface || ||
|| || Gateway for this interface || ||
|| || IP address for this interface || ||

==== Steps ====

1. Determine your deployment string:
{{{
ls /var/www/html/boot/ | grep bootstrapfs-*.tar.bz2
}}}
The output will include:
{{{
bootstrapfs-.tar.bz2
}}}

2. Open plcsh:
{{{
sudo plcsh
}}}

3. Type the following commands in plcsh to configure the node (the empty strings correspond to the node values from the variables table above):
{{{
newnode={}
newnode["boot_state"]="reinstall"
newnode["model"]="Custom"
newnode["deployment"]=""
newnode["hostname"]=""
AddNode("",newnode)
}}}

4. Type the following commands in plcsh to configure the interface:
{{{
newinterface={}
newinterface["network"]=""
newinterface["is_primary"]=True
newinterface["dns1"]=""
newinterface["dns2"]=""
newinterface["mac"]=""
newinterface["netmask"]=""
newinterface["gateway"]=""
newinterface["broadcast"]=""
newinterface["ip"]=""
newinterface["method"]="static"
newinterface["type"]="ipv4"
AddInterface("",newinterface)
}}}

5. If desired, add other interfaces:
{{{
newinterface={}
newinterface["network"]=""
newinterface["is_primary"]=False
newinterface["dns1"]=""
newinterface["dns2"]=""
newinterface["mac"]=""
newinterface["netmask"]=""
newinterface["gateway"]=""
newinterface["broadcast"]=""
newinterface["ip"]=""
newinterface["method"]="static"
newinterface["type"]="ipv4"
AddInterface("",newinterface)
}}}

6. Exit from plcsh:
{{{
exit
}}}

=== Obtain the Node's Boot Image ===

1. From the node page, change the Download pulldown menu to "Download ISO image for ". This will take you to a download screen.

2. Click "Download ISO image".

=== Boot the Node ===

Boot the node from the boot media you just downloaded, and verify that the MyPLC web interface shows that the node is in boot state (a plcsh check is sketched at the end of this step).

=== Important Notes on !PlanetLab Node Interfaces ===

If you have used the repository http://build.planet-lab.org/planetlab/f8/planetlab-f8-i386-5.0-rc14/RPMS, then you will need to downgrade your `util-vserver-pl` package:
 * Version packaged with this repository: util-vserver-pl-0.3.31.planetlab
 * Target version: util-vserver-pl-0.3-17.planetlab
   * If you cannot find this RPM, please contact gpo-infra@geni.net
 * Install:
{{{
rpm -Uv --force util-vserver-pl-0.3-17.planetlab.i386.rpm
}}}
 * Reboot the node to cause the changes to take effect
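After a node boots (or reboots), you can confirm its state without the web interface by querying PLC from plcsh. This is a sketch using the standard PLCAPI {{{GetNodes}}} call; the hostname and the output shown are placeholders:

{{{
$ sudo plcsh
>>> # 'node1.example.net' is a placeholder; use your node's FQDN
>>> GetNodes(['node1.example.net'], ['hostname', 'boot_state'])
[{'hostname': 'node1.example.net', 'boot_state': 'boot'}]
>>> exit
}}}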
= Adding SFA to MyPLC =

== Step 0: Preparing to Upgrade SFA ==

This step is only for those who are already running an older version of SFA, including RPM-based versions sfa-0.9-14 or earlier, and want to update their SFA installation.

Prepare SFA for an upgrade:
{{{
sudo /etc/init.d/sfa stop
sudo sfa-nuke-plc.py
sudo rm /etc/sfa/trusted_roots/*.gid
sudo rm -rf /var/lib/sfa/
}}}

== Step 1: Install SFA ==

From some machine that has git installed, do the following:

1. Get a tarball of the chosen tag of SFA:
{{{
git clone git://git.planet-lab.org/sfa.git
gittag=   # e.g. sfa-geni-gec9, the latest recommended stable tag (see the variables table)
cd sfa
git archive --format=tar --prefix=${gittag}/ ${gittag} | gzip > ${gittag}.tar.gz
}}}

Copy the tarball over to the MyPLC machine, and from there do the following:

2. Install SFA prerequisites:
{{{
sudo yum update fedora-release
sudo yum install m2crypto python-dateutil python-psycopg2 myplc-config pyOpenSSL python-ZSI libxslt-python xmlsec1-openssl-devel python-lxml
sudo yum upgrade pyOpenSSL python-lxml
}}}

3. Compile the SFA code on the MyPLC machine:
{{{
mkdir ~/src
cd ~/src
tar xvzf ~/.tar.gz
cd 
make
}}}
Expect about 6 lines of output and no obvious errors.

4. Install SFA:
{{{
sudo make install
}}}

== Step 2: Configure SFA ==

1. Configure SFA using the {{{sfa-config-tty}}} command:
{{{
$ sudo sfa-config-tty
}}}
 * Enter command (u for usual changes, w to save, ? for help) u
   * SFA_INTERFACE_HRN: plc.
   * SFA_REGISTRY_ROOT_AUTH: plc
   * SFA_REGISTRY_HOST :
   * SFA_AGGREGATE_HOST :
   * SFA_SM_HOST :
   * SFA_PLC_USER:
   * SFA_PLC_PASSWORD:
   * SFA_PLC_DB_HOST : localhost.localdomain
   * SFA_PLC_DB_USER : postgres
   * SFA_PLC_DB_PASSWORD:
   * SFA_PLC_URL : [https://localhost:443/PLCAPI/]
 * Enter command (u for usual changes, w to save, ? for help) w
 * Enter command (u for usual changes, w to save, ? for help) q

2. Start up SFA once, to create the initial {{{/etc/sfa/sfa_config.py}}}, and stop it again:
{{{
sudo service sfa reload
}}}

3. Import the PLC database into SFA:
{{{
sudo sfa-import-plc.py
}}}

4. Start up SFA again:
{{{
sudo service sfa restart
}}}

= Additional Features Used in the GPO Lab =

== Trust a Remote Slice Authority ==

=== Variables ===

|| '''Variable''' || '''GPO Values''' || '''Description''' || '''Important Notes''' ||
|| || pgeni.gpolab.bbn.com || FQDN of the site whose certificate you are adding || ||
|| || http://www.pgeni.gpolab.bbn.com/ca-cert/pgeni.gpolab.bbn.com.pem || Source URL to download the certificate || ||

Get a copy of the certificate, and store it as a `.crt` file:
{{{
wget -O .crt 
}}}

Copy that `.crt` file into `/etc/sfa/trusted_roots`:
{{{
sudo cp .crt /etc/sfa/trusted_roots/
}}}

Restart sfa:
{{{
sudo service sfa restart
}}}

=== Reference ===

As an example, below is the process to configure your SFA instance to allow slivers for slices created at the GPO Lab slice authority, `pgeni.gpolab.bbn.com`:
{{{
wget http://www.pgeni.gpolab.bbn.com/ca-cert/pgeni.gpolab.bbn.com.pem
sudo cp pgeni.gpolab.bbn.com.pem /etc/sfa/trusted_roots/pgeni.gpolab.bbn.com.crt
sudo service sfa restart
}}}

== Set Up Database Vacuum ==

=== Description ===

PostgreSQL databases should be vacuumed on a regular basis; however, MyPLC does not set this up for you. On GPO lab MyPLC machines, we currently vacuum the database daily and run a full vacuum monthly.

=== Variables ===

|| '''Variable''' || '''GPO Values''' || '''Description''' || '''Important Notes''' ||
|| || postgres || Username of the owner of the postgresql database || It is best to use the owner of the postgresql database instead of the owner of the planetlab5 database ||
|| || planetlab5 || Name of the database that needs vacuuming || ||

=== Reference ===

For reference, below are the commands we use for this.

Vacuum:
{{{
/usr/bin/vacuumdb --username --analyze 
}}}

Full Vacuum:
{{{
/usr/bin/vacuumdb --username --analyze --full 
}}}
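We run these commands from cron. The crontab entries below are a sketch: the schedule times are assumptions, while the username and database name are the GPO values from the table above:

{{{
# Sketch of /etc/crontab entries: daily vacuum at 04:00, full vacuum on the 1st of each month
0 4 * * * postgres /usr/bin/vacuumdb --username postgres --analyze planetlab5
30 4 1 * * postgres /usr/bin/vacuumdb --username postgres --analyze --full planetlab5
}}}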
== Report Slice Expiration to Experimenters ==

=== Description ===

MyPLC comes with a script that notifies users of expiring slices via email, but this script does not run by default. The script is located on MyPLC machines at `/etc/support-scripts/renew_reminder.py`. On GPO MyPLC machines, we use a cron job to run this script once a day. The script has a companion `renew_reminder_logrotate` configuration, which you may want to add to `/etc/logrotate.d`.

=== Variables ===

|| '''Variable''' || '''GPO Values''' || '''Description''' || '''Important Notes''' ||
|| || 4 || When to start notifying slice owners, in number of days before the slice expires || ||

=== Reference ===

For reference, below is the command we use for this:
{{{
python /etc/support-scripts/renew_reminder.py --expires 
}}}

== Set Up Fast Sliver Creation ==

=== Description ===

By default, newly requested slivers are created on each node within roughly 15 minutes. You can shorten this time significantly by modifying the options passed to the node manager daemon, by putting options in `/etc/sysconfig/nodemanager` on each node.

=== Variables ===

|| '''Variable''' || '''GPO Values''' || '''Description''' || '''Important Notes''' ||
|| || 30 || The base value of the frequency at which node manager runs (in seconds) || ||
|| || 15 || Upper bound of the randomly generated splay range (in seconds) || ||

=== Reference ===

For reference, below are the contents of our `/etc/sysconfig/nodemanager`:
{{{
OPTIONS="-p -r -d"
}}}

Increasing the frequency at which node manager creates new slices will cause the MyPLC httpd SSL logs to grow at a much faster rate. Make sure to administer log rotation accordingly; the configuration file can be found at `/etc/logrotate.d/httpd`. For reference, the GPO lab's `/etc/logrotate.d/httpd` configuration for MyPLC machines is listed below:
{{{
/var/log/httpd/*log {
    compresscmd /usr/bin/bzip2
    compressext .bz2
    compress
    daily
    rotate 35
    missingok
    notifempty
    sharedscripts
    postrotate
        /bin/kill -HUP `cat /var/run/httpd.pid 2>/dev/null` 2> /dev/null || true
    endscript
}
}}}

== Install Static ARP Tables for Experimental Topology ==

=== Description ===

OpenFlow-based experimental topologies (see [wiki:OpenFlow/CampusTopology] for example topologies) often need hardcoded ARP entries on participating hosts. If your MyPLC !PlanetLab hosts participate in such a topology, you may need to add static ARP entries.

=== Variables ===

|| '''Variable''' || '''GPO Values''' || '''Description''' || '''Important Notes''' ||
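As an illustration only (the IP and MAC addresses below are placeholders), a static ARP entry can be added on a participating host with the standard {{{arp}}} tool:

{{{
# Placeholder addresses; use the IP/MAC pairs from your experimental topology
sudo arp -s 10.42.1.2 00:11:22:33:44:55
}}}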