- GPO Lab Reference Implementation for Installation of MyPLC
- Installing MyPLC from Scratch
- Adding SFA to MyPLC
- Additional Features Used in the GPO Lab
GPO Lab Reference Implementation for Installation of MyPLC
Purpose
The purpose of this document is to provide a reference implementation for the installation of a MyPLC-based PlanetLab deployment at GENI sites. The contents of this document are based on the GPO's experience setting up MyPLC-based PlanetLab deployments. Following this document exactly should give you a working MyPLC-based PlanetLab deployment that supports the GENI AM API using the SFA software; however, this document is intended only as an example configuration.
Scope
This document is intended for GENI site operators who would like to perform a fresh install of MyPLC with SFA, or for those who have MyPLC deployed and would like to upgrade SFA.
Variables
A few variables will be set according to your specific situation. GPO lab values for these variables are listed below for reference.
Important note: When choosing values for login_base variables (<mgmt_login_base> and <public_login_base>), you must choose the value carefully. After setting these values, it is strongly recommended not to change them. Furthermore, it is known that including a dash or an underscore in the values of the login_base variables can cause problems. For these reasons, the GPO recommends that you choose an alphanumeric value for the login_base variables.
Important note: When setting values for the PLC variable PLC_DB_HOST and the SFA variable SFA_PLC_DB_HOST, you must set PLC_DB_HOST to localhost.localdomain unless you explicitly modify the PLC configuration to not manage DNS. This tutorial assumes that you will use localhost.localdomain for the value of PLC_DB_HOST.
Variable | GPO Values | Description | Important Notes |
<base_os> | Fedora 8 | The OS which you are installing MyPLC on | |
<myplc_distribution> | planetlab-f8-i386-5.0 | The MyPLC distribution which you are using, comprised of base OS, architecture, and PLC version | |
<myplc_baseurl> | http://build.planet-lab.org/planetlab/f8/pl-f8-i386-5.0-k32-rc-20/RPMS/ | URL of the MyPLC repository which you will be using | |
<myplc_root_user> | root@gpolab.bbn.com | MyPLC application's initial administrative user of a MyPLC instance | Do not use a plus character here, or sfa_import.py will fail later |
<myplc_root_password> | | MyPLC application's password for <myplc_root_user> | |
<myplc_support_email> | plc-admin@myplc.gpolab.bbn.com | Email address for MyPLC-generated support emails | |
<myplc_host> | myplc.gpolab.bbn.com | URL or IP address of MyPLC web server, API server, and boot server | |
<myplc_dns1> | GPO lab DNS server 1 IP address | IP address of DNS server 1 | |
<myplc_dns2> | GPO lab DNS server 2 IP address | IP address of DNS server 2 | |
<public_site_name> | myplc.gpolab.bbn.com | Full name of public site | |
<public_site_shortname> | myplc | Abbreviated name of public site | |
<public_login_base> | gpolab | Prefix for usernames of PlanetLab slices | This variable should not be changed after you set it. We recommend using only alphanumeric strings as values for the login base. |
<myplc_deployment> | planetlab-f8-i386 | Deployment string containing base OS and architecture information | |
<site_latitude> | 42.3905 | Latitude of machine hosting MyPLC site | This should be in double format with no quotes surrounding it |
<site_longitude> | -71.1474 | Longitude of machine hosting MyPLC site | This should be in double format with no quotes surrounding it |
<sfa_git_tag> | | Latest recommended stable tag of SFA software in the git repository | We currently install sfa-1.0-35-PATCHED from a tarball that is hosted on the planet-lab svn server, not from git |
If you choose to use the two-site configuration (Path B below), then you will need to define the following variables in addition to those above. More information on the single-site vs. two-site configuration can be seen here.
Variable | Description | Important Notes |
<myplc_name> | Name for your MyPLC instance and default site | |
<myplc_shortname> | Abbreviated name for your MyPLC instance | |
<mgmt_login_base> | The prefix for usernames associated with all slices at a site | This variable should not be changed after you set it. We recommend using only alphanumeric strings as values for the login base. |
Installing MyPLC from Scratch
Step 1: Install the Base OS
The PlanetLab team currently supports Fedora 8 and Fedora 12 as base OSes for MyPLC. If you choose a different Fedora distribution than the one the GPO lab uses, some of the steps in this section may not apply.
SELinux
Edit /etc/selinux/config (as root) to set SELINUX=disabled. You will need to reboot the machine for this change to take effect.
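If you prefer to make this change from the command line, a one-liner along these lines should work (assuming the stock Fedora /etc/selinux/config layout); after the reboot, getenforce should report Disabled:
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
getenforce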
Firewall
Below are important firewall considerations (an illustrative iptables sketch follows this list):
- TCP port 80 access is needed for access to the MyPLC web interface and the PLC API
- TCP port 443 access is needed for access to the MyPLC web interface and the PLC API
  - Needed by PlanetLab nodes
  - Useful for MyPLC administrators
  - Can be used by experimenters
- TCP port 22 access is important for MyPLC administrators
- TCP port 12346 access is needed for slice creation through the SFA server
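As an illustrative sketch only (not the GPO lab's actual firewall configuration), iptables rules allowing these ports might look like the following; adapt them to however you manage your firewall:
# Allow the MyPLC web interface and PLC API (HTTP and HTTPS)
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT
# Allow SSH for MyPLC administrators
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# Allow slice creation through the SFA server
sudo iptables -A INPUT -p tcp --dport 12346 -j ACCEPT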
Step 2: Install MyPLC
At this point, you need to choose your MyPLC distribution. You should choose one that matches your base OS chosen in #Step1:InstalltheBaseOS and your architecture. It is important to choose a release candidate as a repository. More information on choosing a repository can be found at https://svn.planet-lab.org/wiki/MyPLCUserGuide#NoteonversionsLocatingabuild.
- Add your MyPLC repository
sudo sh -c 'echo "[myplc]" > /etc/yum.repos.d/myplc.repo'
sudo sh -c 'echo "name= MyPLC" >> /etc/yum.repos.d/myplc.repo'
sudo sh -c 'echo "baseurl=<myplc_baseurl>" >> /etc/yum.repos.d/myplc.repo'
sudo sh -c 'echo "enabled=1" >> /etc/yum.repos.d/myplc.repo'
sudo sh -c 'echo "gpgcheck=0" >> /etc/yum.repos.d/myplc.repo'
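After running these commands, /etc/yum.repos.d/myplc.repo should contain something like the following (with <myplc_baseurl> substituted with your repository URL):
[myplc]
name= MyPLC
baseurl=<myplc_baseurl>
enabled=1
gpgcheck=0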
- Update fedora-release:
sudo yum update fedora-release
- Install MyPLC:
sudo yum install myplc
Step 3: Configuring MyPLC Default Site
There are two paths you can take in terms of setting up your MyPLC sites.
- Path A: set up one single site for both management of MyPLC and management of PlanetLab nodes.
- Path B:
- Let PLC create a default site for administrators to manage PLC
- Manually create another site for managing PlanetLab nodes.
A full explanation of these two choices can be found at https://svn.planet-lab.org/wiki/MyPLCUserGuide#CreatingasiteandPIaccount. The GPO currently follows Path A and only uses one site on its MyPLC machines; however, this increases the amount of work required to maintain the site. Both paths are outlined below.
Path A: One Single Site
A Part 1: Configuring the MyPLC Site
Run the plc-config-tty program to configure PLC:
sudo plc-config-tty
In plc-config-tty:
- Enter 'u' to make "usual" changes. Change the following settings (leave the others as they are):
- PLC_NAME : <public_site_name>
- PLC_SHORTNAME : <public_site_shortname>
- PLC_SLICE_PREFIX : <public_login_base>
- PLC_ROOT_USER : <myplc_root_user>
- PLC_ROOT_PASSWORD : <myplc_root_password>
- PLC_MAIL_ENABLED : [false] true
- PLC_MAIL_SUPPORT_ADDRESS : <myplc_support_email>
- PLC_DB_HOST : localhost.localdomain
- PLC_API_HOST : <myplc_host>
- PLC_WWW_HOST : <myplc_host>
- PLC_BOOT_HOST : <myplc_host>
- PLC_NET_DNS1 : <myplc_dns1>
- PLC_NET_DNS2 : <myplc_dns2>
- Enter command (u for usual changes, w to save, ? for help) w
- Enter command (u for usual changes, w to save, ? for help) q
- Start plc:
sudo service plc start
- Obtain the database password generated by PLC:
sudo plc-config-tty
In plc-config-tty:
- Enter 's PLC_DB_PASSWORD' to display the PLC DB password, and note it down (SFA will need this later).
A Part 2: Setting the site as public
Every time the plc service gets restarted (e.g. on boot), the site will be set as private. The site that controls the PlanetLab nodes must be public for experimenters to use it. Contact gpo-infra@geni.net if you'd like the workaround we use to automate this.
Set the default site as public:
$ sudo plcsh
>>> UpdateSite('<public_login_base> Central', {'is_public': True})
>>> exit
You also need to set up the correct latitude and longitude for this site, as it will be used by monitoring services for map overlays and visualization:
$ sudo plcsh
>>> UpdateSite('<public_login_base> Central', {'latitude': <site_latitude>, 'longitude': <site_longitude>})
>>> exit
Path B: Two Sites
B Part 1: Configuring MyPLC Default Site
- Run the plc-config-tty program to configure PLC:
sudo plc-config-tty
In plc-config-tty:
- Enter 'u' to make "usual" changes. Change the following settings (leave the others as they are):
- PLC_NAME : <myplc_name>
- PLC_SHORTNAME : <myplc_shortname>
- PLC_SLICE_PREFIX : <mgmt_login_base>
- PLC_ROOT_USER : <myplc_root_user>
- PLC_ROOT_PASSWORD : <myplc_root_password>
- PLC_MAIL_ENABLED : [false] true
- PLC_MAIL_SUPPORT_ADDRESS : <myplc_support_email>
- PLC_DB_HOST : localhost.localdomain
- PLC_API_HOST : <myplc_host>
- PLC_WWW_HOST : <myplc_host>
- PLC_BOOT_HOST : <myplc_host>
- PLC_NET_DNS1 : <myplc_dns1>
- PLC_NET_DNS2 : <myplc_dns2>
- Enter command (u for usual changes, w to save, ? for help) w
- Enter command (u for usual changes, w to save, ? for help) q
- Start plc:
sudo service plc start
- Obtain the database password generated by PLC:
sudo plc-config-tty
In plc-config-tty:
- Enter 's PLC_DB_PASSWORD' to display the PLC DB password, and note it down (SFA will need this later).
B Part 2: Create and Configure MyPLC Public Site
You now need to create a site for this MyPLC instance where your nodes and slices are managed. Instructions on how to do this through the web interface can be found at https://svn.planet-lab.org/wiki/MyPLCUserGuide#CreatingasiteandPIaccount. Please specify the actual latitude and longitude of the site, because existing monitoring services use this information for map overlay and visualization purposes.
When filling out the web form, you should use the following information:
- Site name: <public_site_name>
- Login base: <public_login_base>
- Abbreviated name: <public_site_shortname>
- URL: Doesn't matter
- Latitude: <site_latitude>
- Longitude: <site_longitude>
- PI First Name: <Admin's first name>
- PI Last Name: <Admin's last name>
- PI Title: Doesn't matter
- PI Phone: Doesn't matter
- PI Email: <Admin's email address> (this will be used for username)
- PI Password: <Admin password>
Again, once you file the registration, next steps can be found at https://svn.planet-lab.org/wiki/MyPLCUserGuide#CreatingasiteandPIaccount. Don't forget to upload your public keys for these new users.
Step 4: Create Nodes
Configure the Nodes
Add the node's primary interface and configure the node through the MyPLC web interface or using plcsh. For information on creating nodes through the web interface, see https://svn.planet-lab.org/wiki/MyPLCUserGuide#Installingnodes.
Below is an example of how to configure a node with static interfaces:
Variables
Variable | Description | Important Notes |
<node_fqdn> | Fully qualified domain name of the node | |
<node_dns1> | IP address of primary DNS server for this interface | |
<node_dns2> | IP address of secondary DNS server for this interface | |
<node_network> | Network (subnet) address for this interface | |
<node_netmask> | Netmask for this interface | |
<node_gateway> | Gateway for this interface | |
<node_broadcast> | Broadcast address for this interface | |
<node_ipaddr> | IP address for this interface | |
Steps
- Determine your <myplc_deployment>:
ls /var/www/html/boot/ | grep bootstrapfs
The output will include:
bootstrapfs-<myplc_deployment>.tar.bz2
- Open plcsh:
sudo plcsh
- Type in the following commands in plcsh to configure the node:
newnode={}
newnode["boot_state"]="reinstall"
newnode["model"]="Custom"
newnode["deployment"]="<myplc_deployment>"
newnode["hostname"]="<node_fqdn>"
AddNode("<public_login_base>",newnode)
- Type the following commands in plcsh to configure the interface:
newinterface={}
newinterface["network"]="<node_network>"
newinterface["is_primary"]=True
newinterface["dns1"]="<node_dns1>"
newinterface["dns2"]="<node_dns2>"
newinterface["mac"]=""
newinterface["netmask"]="<node_netmask>"
newinterface["gateway"]="<node_gateway>"
newinterface["broadcast"]="<node_broadcast>"
newinterface["ip"]="<node_ipaddr>"
newinterface["method"]="static"
newinterface["type"]="ipv4"
AddInterface("<node_fqdn>",newinterface)
- If desired, add other interfaces:
newinterface={}
newinterface["network"]="<node_network>"
newinterface["is_primary"]=False
newinterface["dns1"]="<node_dns1>"
newinterface["dns2"]="<node_dns2>"
newinterface["mac"]=""
newinterface["netmask"]="<node_netmask>"
newinterface["gateway"]="<node_gateway>"
newinterface["broadcast"]="<node_broadcast>"
newinterface["ip"]="<node_ipaddr>"
newinterface["method"]="static"
newinterface["type"]="ipv4"
AddInterface("<node_fqdn>",newinterface)
- Exit from plcsh:
exit
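For illustration only, here is the same node and primary-interface configuration with hypothetical values filled in (the hostname and addresses below are made up; "gpolab" and "planetlab-f8-i386" are the GPO lab values from the tables above):
newnode={}
newnode["boot_state"]="reinstall"
newnode["model"]="Custom"
newnode["deployment"]="planetlab-f8-i386"
newnode["hostname"]="plnode1.example.edu"
AddNode("gpolab",newnode)
newinterface={}
newinterface["network"]="192.0.2.0"
newinterface["is_primary"]=True
newinterface["dns1"]="192.0.2.2"
newinterface["dns2"]="192.0.2.3"
newinterface["mac"]=""
newinterface["netmask"]="255.255.255.0"
newinterface["gateway"]="192.0.2.1"
newinterface["broadcast"]="192.0.2.255"
newinterface["ip"]="192.0.2.10"
newinterface["method"]="static"
newinterface["type"]="ipv4"
AddInterface("plnode1.example.edu",newinterface)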
Obtain the Node's Boot image
- From the node page, change the Download pulldown menu to "Download ISO image for <node_fqdn>". This will take you to a download screen.
- Click "Download ISO image"
Boot the Node
Boot the node from the boot media you just downloaded, and verify that the MyPLC web interface shows that the node is in boot state.
Important Notes on PlanetLab Node Interfaces
If you have not used <myplc_baseurl>=http://build.planet-lab.org/planetlab/f8/planetlab-f8-i386-5.0-rc14/RPMS, then you can skip to the next section.
If you have used <myplc_baseurl>=http://build.planet-lab.org/planetlab/f8/planetlab-f8-i386-5.0-rc14/RPMS, then you will need to downgrade your util-vserver-pl package:
- Version packaged with this repository: util-vserver-pl-0.3-31.planetlab
- Target version: util-vserver-pl-0.3-17.planetlab
- If you cannot find this RPM, please contact gpo-infra@geni.net
- Install:
rpm -Uv --force util-vserver-pl-0.3-17.planetlab.i386.rpm
- Reboot the node to cause the changes to take effect
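To confirm that the downgrade took hold, you can query the installed package version (it should report the target version above):
rpm -q util-vserver-pl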
Adding SFA to MyPLC
Step 0: Preparing to Upgrade SFA
This step is only for those who were already running an older version of SFA, including RPM-based versions sfa-0.9-14 or earlier, and want to update their SFA versions.
Prepare SFA for an upgrade:
sudo /etc/init.d/sfa stop
sudo sfa-nuke-plc.py
sudo rm /etc/sfa/trusted_roots/*.gid
sudo rm -rf /var/lib/sfa/
Step 1: Obtain SFA Source
Option A: Install from git
This is the preferred option for installing SFA for any version other than sfa-1.0-35-PATCHED. From some machine that has git installed, do the following:
- Get a tarball of the <sfa_git_tag> tag of SFA:
git clone git://git.planet-lab.org/sfa.git
gittag=<sfa_git_tag>
cd sfa
git archive --format=tar --prefix=${gittag}/ ${gittag} | gzip > ${gittag}.tar.gz
Option B: Install from tarball
This is the preferred option for installing SFA for sfa-1.0-35-PATCHED.
wget -O ~/sfa-1.0-35-PATCHED.tar.gz https://svn.planet-lab.org/raw-attachment/wiki/SFA-1.0-35-PATCHED/sfa-1.0-35-PATCHED.tar.gz
Step 2: Install SFA
- Copy the tarball over to the MyPLC machine if it is not there already, and from there do the following:
- Install SFA prerequisites:
sudo yum update fedora-release
sudo yum install m2crypto python-dateutil python-psycopg2 myplc-config pyOpenSSL python-ZSI libxslt-python xmlsec1-openssl-devel python-lxml
sudo yum upgrade pyOpenSSL python-lxml
- Compile SFA code on the MyPLC machine:
mkdir ~/src
cd ~/src
tar xvzf ~/<sfa_git_tag>.tar.gz
cd <sfa_git_tag>
make
Expect about 6 lines of output and no obvious errors.
- Install SFA:
sudo make install
Step 3: Configure SFA
- Configure SFA using the sfa-config-tty command:
$ sudo sfa-config-tty
- Enter command (u for usual changes, w to save, ? for help) u
- SFA_INTERFACE_HRN: plc.<public_login_base>
- SFA_REGISTRY_ROOT_AUTH: plc
- SFA_REGISTRY_HOST : <myplc_host>
- SFA_AGGREGATE_HOST : <myplc_host>
- SFA_SM_HOST : <myplc_host>
- SFA_PLC_USER: <myplc_root_user>
- SFA_PLC_PASSWORD: <myplc_root_password>
- SFA_PLC_DB_HOST : localhost.localdomain
- SFA_PLC_DB_USER : postgres
- SFA_PLC_DB_PASSWORD: <myplc_db_password>
- SFA_PLC_URL : https://localhost:443/PLCAPI/
- Enter command (u for usual changes, w to save, ? for help) w
- Enter command (u for usual changes, w to save, ? for help) q
- Start up SFA once, to create the initial /etc/sfa/sfa_config.py, and stop it again:
sudo service sfa reload
- Import the PLC database into SFA:
sudo sfa-import-plc.py
- Start up SFA again:
sudo service sfa restart
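As an optional sanity check (our suggestion, not part of the upstream instructions), you can confirm that the SFA server is listening on the slice-creation port mentioned in the firewall section above:
sudo netstat -lntp | grep 12346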
Additional Features Used in the GPO Lab
Trust a Remote Slice Authority
Variables
Variable | GPO Values | Description | Important Notes |
<cert_fqdn> | pgeni.gpolab.bbn.com | FQDN of the site whose certificate you are adding | |
<cert_url> | http://www.pgeni.gpolab.bbn.com/ca-cert/pgeni.gpolab.bbn.com.pem | Source URL to download the certificate |
Get a copy of the certificate, and store it as a .gid file:
wget -O <cert_fqdn>.gid <cert_url>
Copy that .gid file into /etc/sfa/trusted_roots:
sudo cp <cert_fqdn>.gid /etc/sfa/trusted_roots/
Restart sfa:
sudo service sfa restart
Reference
As an example, below is the process to configure your SFA instance to allow slivers for slices created at the GPO Lab slice authority, pgeni.gpolab.bbn.com.
wget http://www.pgeni.gpolab.bbn.com/ca-cert/pgeni.gpolab.bbn.com.pem
sudo cp pgeni.gpolab.bbn.com.pem /etc/sfa/trusted_roots/pgeni.gpolab.bbn.com.gid
sudo service sfa restart
Set Up Database Vacuum
Description
Postgresql databases are supposed to be vacuumed on a regular basis; however, MyPLC does not set this up for you. On GPO lab MyPLC machines, we currently vacuum the database daily and run a full vacuum monthly.
Variables
Variable | GPO Values | Description | Important Notes |
<username> | postgres | Username of the owner of the postgresql database | It is best to use the owner of the postgresql database instead of the owner of the planetlab5 database |
<database_name> | planetlab5 | Name of the database that needs vacuuming |
Reference
For reference, below are the commands we use for this.
Vacuum:
/usr/bin/vacuumdb --username <username> --analyze <database_name>
Full Vacuum:
/usr/bin/vacuumdb --username <username> --analyze --full <database_name>
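A minimal sketch of how this schedule could be put in cron (illustrative; the times and file name are assumptions, not necessarily the GPO lab's exact crontab):
# /etc/cron.d/myplc-vacuum (illustrative)
# Daily vacuum
30 3 * * * <username> /usr/bin/vacuumdb --username <username> --analyze <database_name>
# Monthly full vacuum on the first of the month
30 4 1 * * <username> /usr/bin/vacuumdb --username <username> --analyze --full <database_name>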
Report Slice Expiration to Experimenters
Description
MyPLC comes with a script that will notify users of expiring slices via email, but this script is not run by default. The script is located on MyPLC machines at /etc/support-scripts/renew_reminder.py. On GPO MyPLC machines, we use a cron job to run this script once a day. This script has a companion renew_reminder_logrotate configuration which you may want to add to /etc/logrotate.d.
Variables
Variable | GPO Values | Description | Important Notes |
<expires> | 4 | Number of days before slice expiration at which to start notifying slice owners |
Reference
For reference, below are the commands we use for this.
python /etc/support-scripts/renew_reminder.py --expires <expires>
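A minimal sketch of the daily cron entry described above (illustrative; the run time and file name are assumptions):
# /etc/cron.d/renew-reminder (illustrative)
0 6 * * * root python /etc/support-scripts/renew_reminder.py --expires <expires>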
Setup Fast Sliver Creation
Description
For each node, newly requested slivers are created within roughly 15 minutes by default. You can shorten this time significantly by modifying the options passed to the node manager daemon, by putting options in /etc/sysconfig/nodemanager on each node.
Variables
Variable | GPO Values | Description | Important Notes |
<period> | 30 | The base value of the frequency at which node manager runs (in seconds) | |
<random> | 15 | Upper bound to randomly generated splay range (in seconds) |
Reference
For reference, below are the contents of our /etc/sysconfig/nodemanager:
OPTIONS="-p <period> -r <random> -d"
Increasing the frequency at which node manager creates new slices will cause the MyPLC httpd SSL logs to grow at a much faster rate. Make sure to administer log rotation accordingly; the configuration file can be found at /etc/logrotate.d/httpd.
For reference, below is the GPO lab's /etc/logrotate.d/httpd configuration for MyPLC machines:
/var/log/httpd/*log {
    compresscmd /usr/bin/bzip2
    compressext .bz2
    compress
    daily
    rotate 35
    missingok
    notifempty
    sharedscripts
    postrotate
        /bin/kill -HUP `cat /var/run/httpd.pid 2>/dev/null` 2> /dev/null || true
    endscript
}
Configure Secondary Interfaces for Experimental Topology
Description
In order for experimenters to use a multisite MyPLC topology, e.g. one which is connected to the OpenFlow network core, the MyPLC PlanetLab nodes must be configured with secondary interfaces whose IPs can communicate with each other. This section documents a particular configuration for the HelloGENI experimental setup.
These steps will have limited relevance if you are not participating in a setup such as Hello GENI and are not implementing the campus topology for experimenters described in OpenFlow/CampusTopology.
Variables
Variables whose names start with plnode_ are different for each participating planetlab node on your MyPLC.
Variable | GPO Values | Description | Important Notes |
<of_vlan> | 1750 | An OpenFlow-controlled VLAN used to implement Port-based physical VLAN translation in your environment | |
<plnode_fqdn> | ganel.gpolab.bbn.com | the FQDN of each planetlab node on your MyPLC | |
<plnode_eth1_macaddr> | 00:1b:21:4b:3f:ad | the MAC address of the eth1 dataplane interface on each planetlab node on your MyPLC | |
<plnode_exp_lastoct> | 51 | the last octet of the dataplane IP addresses assigned to this node, according to HelloGENI |
Reference
Follow the steps in this reference to install and configure the plifconfig tool for PLC configuration of secondary interfaces, then use this tool to install VLAN-based subinterfaces on each node.
- Follow the steps in GpoLab/MyplcNodeDataplaneInterfaces on the myplc and on each node to configure your environment to allow PLC to manage VLAN-based subinterfaces using the plifconfig utility.
- For each planetlab node in your environment, use plifconfig (on myplc) to configure the eth1 subinterface IPs:
- Configure the base eth1 interface:
sudo plifconfig add -n <plnode_fqdn> -i 0.0.0.0 -m 255.255.255.255 -d eth1 --mac <plnode_eth1_macaddr>
- Configure the VLAN subinterface on eth1:
sudo plifconfig add -n <plnode_fqdn> -i 0.0.0.0 -m 255.255.255.255 -d eth1 --vlan <of_vlan>
- Configure each of the 10 subinterface IPs which the Hello GENI topology uses:
for subnet in 101 102 103 104 105 106 107 108 109 110; do
    sudo plifconfig add -n <plnode_fqdn> -i 10.42.$subnet.<plnode_exp_lastoct> -m 255.255.255.0 -d eth1 --vlan <of_vlan> --alias 42$subnet
done
- Verify that the configuration looks correct on the myplc. Here is sample output from a GPO lab node; yours should look similar, with the appropriate IPs and MAC addresses:
$ sudo plifconfig list -n ganel.gpolab.bbn.com
Node: ganel.gpolab.bbn.com
IP: 0.0.0.0       HWaddr:                    Device: eth1.1750
IP: 10.42.101.51  HWaddr:                    Device: eth1.1750:42101
IP: 0.0.0.0       HWaddr: 00:1b:21:4b:3f:ad  Device: eth1
IP: 10.42.102.51  HWaddr:                    Device: eth1.1750:42102
IP: 10.17.10.51   HWaddr:                    Device: eth1.1710
IP: 10.42.105.51  HWaddr:                    Device: eth1.1750:42105
IP: 10.42.110.51  HWaddr:                    Device: eth1.1750:42110
IP: 10.42.106.51  HWaddr:                    Device: eth1.1750:42106
IP: 10.42.104.51  HWaddr:                    Device: eth1.1750:42104
IP: 10.42.109.51  HWaddr:                    Device: eth1.1750:42109
IP: 10.42.108.51  HWaddr:                    Device: eth1.1750:42108
IP: 10.42.107.51  HWaddr:                    Device: eth1.1750:42107
IP: 10.42.103.51  HWaddr:                    Device: eth1.1750:42103
- Verify that ifconfig on the node itself reports that all of these interfaces are up, and have IPs assigned.
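For example, on the node itself you might spot-check the subinterfaces with something like the following (assuming <of_vlan>=1750 and the 10.42.x.x Hello GENI addressing above):
/sbin/ifconfig eth1.<of_vlan>
/sbin/ifconfig | grep "inet addr:10.42."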
Install Static ARP Tables for Experimental Topology
Description
OpenFlow-based experimental topologies (see OpenFlow/CampusTopology for example topologies) often need hardcoded ARP entries on participating hosts. If your MyPLC PlanetLab hosts participate in such a topology, you may need to add static ARP entries.
Variables
Variable | GPO Values | Description | Important Notes |
<script> | /root/install_geni_core_arp_entries.sh | The script which is used to install the ARP entries | |
<arp_file> | /root/geni-core-arp.txt | Config file containing hostnames, MAC addresses, and IPs of hosts | |
<arp_url> | http://www.gpolab.bbn.com/arp/geni-core-arp.txt | Source URL containing the most recent version of <arp_file> |
Reference
To configure static ARP installation on your system, and to ensure ARP entries are installed after each system boot, follow these steps.
On each PlanetLab node in your MyPLC installation:
- Get the ARP installation shell script (e-mail gpo-infra@geni.net if you do not have this script), install it as <script>, and make sure it is executable:
sudo chown root <script>
sudo chmod a+x <script>
- Download the most recent version of the ARP entries config file into /root, where <script> expects it to be:
sudo wget -O <arp_file> <arp_url>
- Run the script right away to seed the ARP entries:
sudo <script>
You can verify that ARP entries have been added by looking at the ARP table, which should now contain many entries:
arp -a
- Configure the local system startup script to add ARP entries every time the system boots. GPO infra has an /etc/rc.local script which can be used to configure ARP entries after nodemanager successfully runs for the first time (and brings up secondary interfaces configured by PLC). Install this script as /etc/rc.d/rc.local, and ensure it is executable:
sudo chown root /etc/rc.d/rc.local
sudo chmod a+x /etc/rc.d/rc.local
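The GPO's rc.local script is available on request as noted above. Purely as a sketch of the idea (not the GPO's actual script; the interface name, polling interval, and timeout are assumptions), the startup logic might look something like this:
#!/bin/sh
# Illustrative sketch: wait for nodemanager to bring up the PLC-managed
# VLAN subinterface, then install the static ARP entries.
for i in $(seq 1 180); do
    /sbin/ifconfig eth1.<of_vlan> >/dev/null 2>&1 && break
    sleep 10
done
<script>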
Set Max Duration for Sliver Renewal
Description
Aggregate operators can set a maximum time in SFA for which a sliver can be renewed into the future. This value is not an absolute limit on a sliver's lifetime, but a relative limit on how far into the future a sliver can be renewed at a given time. For example, if this value is set to 30 days, you could renew your sliver for 30 days into the future today, and then renew it for another 30 days next week.
Variables
Variable | GPO Values | Description | Important Notes |
<max_slice_renew> | 180 | The maximum number of days in the future that a sliver can be renewed at a given time |
Reference
- On your MyPLC machine, set the value using the sfa-config-tty utility:
utility:[user@myplc:~]$sudo sfa-config-tty Enter command (u for usual changes, w to save, ? for help) e SFA_MAX_SLICE_RENEW == SFA_MAX_SLICE_RENEW : [60] 180 Enter command (u for usual changes, w to save, ? for help) w Wrote /etc/sfa/configs/site.xml Merged /etc/sfa/default_config.xml and /etc/sfa/configs/site.xml into /etc/sfa/sfa_config.xml You might want to type 'r' (restart sfa), 'R' (reload sfa) or 'q' (quit) Enter command (u for usual changes, w to save, ? for help) q
- Restart sfa manually:
sudo service sfa restart