Mid-Atlantic Crossroads Project Status Report
Period: 4Q08
I. Major accomplishments
A. Milestones achieved
None, although a proof of concept was achieved as noted below.
B. Deliverables made
- Responded to several requests for information from John Jacob, including a list of network and server resources, and helped populate the GENI Trac wiki pages for our project
- Agreed to Aaron Falk’s request to co-chair the Substrate WG
- After some learning-curve issues with the distribution, CentOS 5.2 was successfully installed and is now running as a vserver on host planetlab1.dragon.maxgigapop.net
- Several tweaks were necessary to sshd, namely the following (a config sketch appears after the list):
- Edit /etc/ssh/sshd_config and change ListenAddress to the address of eth0 (otherwise the host's sshd binds to all addresses and blocks any guest's attempt to run its own sshd)
- Change Port 22 to Port 52108 (not required, but we like to run sshd on non-standard ports)
- Edit /etc/sysconfig/iptables, change port 22 to 52108 to allow incoming SSH on port 52108
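Taken together, the configuration amounts to the following. This is a sketch: 10.0.0.1 stands in for eth0's real address, and the iptables rule assumes the stock CentOS 5 RH-Firewall-1-INPUT chain.

    # /etc/ssh/sshd_config -- bind only to eth0's address, on the non-standard port
    ListenAddress 10.0.0.1
    Port 52108

    # /etc/sysconfig/iptables -- the default SSH accept rule, with the port changed
    -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 52108 -j ACCEPT

Both services need a restart afterward (service sshd restart; service iptables restart).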
- The entire Fedora and CentOS distributions are mirrored so that vserver instantiation is particularly fast.
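A mirror of this sort is typically kept current with a periodic rsync job; a minimal sketch, where both the upstream mirror host and the local path are illustrative assumptions (this report does not specify how the mirror is maintained):

    # refresh the local CentOS 5.2 mirror so vserver builds never leave the LAN
    rsync -avz --delete rsync://mirror.example.org/centos/5.2/os/i386/ \
          /var/mirror/centos/5.2/os/i386/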
- The myplc-native software package, which includes everything needed to manage and slice PlanetLab nodes, was installed to run inside a vserver on planetlab1.dragon.maxgigapop.net. We call that vserver max-myplc.dragon.maxgigapop.net.
- Initially, the only way to access the max-myplc vserver was to SSH to planetlab1.dragon.maxgigapop.net and run vserver max-myplc enter. An SSH server has since been added inside the max-myplc vserver so we can SSH directly to it
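Concretely, the two access paths look like this (the port used by the guest's sshd is not specified in this report, so the direct command assumes the default port 22):

    # original path: SSH to the host on its non-standard port, then enter the guest
    ssh -p 52108 root@planetlab1.dragon.maxgigapop.net
    vserver max-myplc enter

    # direct path, now that an sshd runs inside the guest
    ssh root@max-myplc.dragon.maxgigapop.net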
- The 'is_public' variable in the 'planetlab4' PostgreSQL database was set to 't' (true) to allow new users to be created.
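For the record, the change reduces to a one-line SQL update; a sketch, assuming the flag lives on the sites table of the standard PlanetLab schema and that the database is reachable as the postgres user (verify both against your MyPLC installation):

    # run inside the max-myplc vserver, which hosts the planetlab4 database
    psql -U postgres planetlab4 -c "UPDATE sites SET is_public = 't';"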
- After logging into the MyPLC web interface, clicking "Add Nodes", and filling out the form, we now have two nodes up and running: planetlab2.dragon.maxgigapop.net and planetlab3.dragon.maxgigapop.net.
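The same step can also be scripted against the PLCAPI using plcsh (the MyPLC shell) rather than the web form; a sketch, where the site login_base 'max' is an assumption and the accepted node fields may vary by MyPLC version:

    # at the plcsh prompt inside the max-myplc vserver (runs as the API admin)
    AddNode('max', {'hostname': 'planetlab2.dragon.maxgigapop.net'})
    AddNode('max', {'hostname': 'planetlab3.dragon.maxgigapop.net'})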
- With this environment set up, it is now possible to create "slices" for end users, each with an allocation of some percentage of CPU time and a chunk of disk space, by logging into the MyPLC web interface, clicking "Create Slice", and filling out the form with meaningful values for name/URL/description. Nodes and users are then associated with the slice, and users' SSH keys are copied to the vserver.
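Slice creation can likewise be scripted through the PLCAPI; a sketch, where the slice name 'max_demo' and the user e-mail are illustrative placeholders:

    # at the plcsh prompt: create the slice, attach nodes, then add a user
    AddSlice({'name': 'max_demo', 'url': 'http://www.maxgigapop.net/',
              'description': 'DRAGON/PlanetLab integration testing'})
    AddSliceToNodes('max_demo', ['planetlab2.dragon.maxgigapop.net',
                                 'planetlab3.dragon.maxgigapop.net'])
    AddPersonToSlice('user@example.net', 'max_demo')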
- We have found two ways to add a tagged 802.1Q VLAN interface to a vserver. Normally this would be done with something like vconfig add eth1 3000, with the new interface then given an IP address via ifconfig eth1.3000 inet A.B.C.D; however, that method does not work for vservers.
1. during vserver instantiation
- With the latest releases (sometime around 12/15/2008), you can pass --interface ethX.YYYY directly to the vserver build command. For example, this could be passed into the vtest-init-vserver.sh script:

    ./vtest-init-vserver.sh [...] -- --interface eth0:10.1.1.1/24 --interface eth1.3000:10.2.2.1/24 --hostname [...]

2. modifying an existing vserver
- Within the root context:

    [root@planetlab1 interfaces]# vconfig add eth1 1234
    Added VLAN with VID == 1234 to IF -:eth1:-
    [root@planetlab1 interfaces]# mkdir /etc/vservers/test-vlan/interfaces/2
    [root@planetlab1 interfaces]# echo eth1.1234 > /etc/vservers/test-vlan/interfaces/2/dev
    [root@planetlab1 interfaces]# echo 10.30.30.1 > /etc/vservers/test-vlan/interfaces/2/ip
    [root@planetlab1 interfaces]# echo 24 > /etc/vservers/test-vlan/interfaces/2/prefix
    [root@planetlab1 interfaces]# vserver test-vlan restart
    [...]
Now the vserver has a tagged VLAN interface:
    [root@planetlab1 interfaces]# vserver test-vlan enter
    bash-3.2# ifconfig eth1.1234
    eth1.1234 Link encap:Ethernet  HWaddr 00:30:48:9A:D3:89
              inet addr:10.30.30.1  Bcast:10.30.30.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:35 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 b)  TX bytes:8286 (8.0 KiB)
We believe this is a significant and necessary step toward our deliverable goals. Discussions with the vserver developer indicate that VLAN support works by way of IP isolation. It is unclear at this point whether these interfaces can be added or removed while a vserver is running; that will be explored further during the next quarter.
II. Description of work performed during last quarter
A. Activities and findings
Primary efforts this quarter focused on the following MAX deliverable:
- Extend DRAGON’s open-source GMPLS-based control plane implementation to include edge compute resources and support network virtualization;
To summarize the particulars listed in the section above, we now believe it is possible to demonstrate DRAGON functionality linked with PlanetLab by the following steps (sketched in shell terms after the list):
- set up a circuit using DRAGON between two edge ports (connected to PlanetLab nodes)
- set up PlanetLab slices using the MyPLC web interface to provision compute resources
- once both the network and compute resources are available, log in to the individual PlanetLab nodes and activate the tagged VLAN by hand
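In shell terms the demonstration reduces to roughly the following; the DRAGON step is tool-specific and only indicated by a comment, and the slice name max_demo, the VLAN ID, and the addresses are illustrative assumptions:

    # 1. set up the DRAGON circuit between the two edge ports
    #    (via the DRAGON control plane; exact commands are outside this report)

    # 2. provision the slice via the MyPLC web interface or plcsh (see I.B)

    # 3. on each PlanetLab node's root context, attach the tagged VLAN
    #    to the slice's vserver using the method documented above
    vconfig add eth1 3000
    mkdir -p /etc/vservers/max_demo/interfaces/2
    echo eth1.3000  > /etc/vservers/max_demo/interfaces/2/dev
    echo 10.30.30.1 > /etc/vservers/max_demo/interfaces/2/ip
    echo 24         > /etc/vservers/max_demo/interfaces/2/prefix
    vserver max_demo restart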
B. Project participants
Chris Tracy, Jarda Flidr, and Peter O’Neil
C. Publications (individual and organizational)
None
D. Outreach activities
E. Collaborations
- Attended GEC3 in Palo Alto, presented project overview, and met individually and collectively with other Cluster B participants
- Engaged with USC ISI-East in Arlington, VA to bring up a PlanetLab node and begin debugging early efforts to reserve gigabit slices of bandwidth between the ISI and MAX PlanetLab nodes.
- Participated in coordination calls with Cluster B participants
- Pursuing discussions with John Turner about possibly locating one of his SPP nodes at the McLean, VA Level 3 PoP, where Internet2, MAX, and NLR all have suites and tie-fibers. Explained the differences between the I2 Infinera and Ciena equipment, and that I2 is offering a 10G wave off the Infinera, which will require a switch with multiple 10G ports to enable multi-point communication
- Began discussions with IU's Meta-Operations Center on a sensible set of operational data they could start collecting for initial services, such as emergency shutdown of misbehaving slices on GENI and some kind of interface offering a GENI-wide view of available systems.
- Met with Bob Kahn at CNRI to explore use of the Handle System
F. Other Contributions
- Presented an overview of GENI and MAX's role in Cluster B at the early-December MAX All-Hands Meeting, attended by ~50 people, and at a mid-December meeting of the Mid-Atlantic Terascale Partnership (MATP) with 15 CIOs from Virginia.
- Created a diagram to summarize our intentions and goals: https://geni.maxgigapop.net/twiki/pub/GENI/Publications/planetlab_dragon_integration.pdf