R5) Add OMF/OML to WiMAX sites
Fraida Fund (NYU Poly), Manu Gosain (GPO), Derek Meyer (Wisconsin)
- Equip a WiMAX site with full OMF/OML capabilities, with the services installed on VMs
- Include Login Service for Remote Users
- NYU Poly configuration:
- 1) physical host acting as base station controller; runs the asn-gw and the wimaxrf AM.
- It has Ethernet connections to the NEC IDU, the NYU-Poly network (and Internet), and very soon the GENI backbone (campus IS just informed us that they will be testing that tomorrow).
- We have firewall rules set up so that this host only accepts traffic from host (2) on the wimaxrf AM port. This lets us make sure that only users with a current reservation can configure the BS (see the iptables sketch after this list).
- 2) physical host running the OMF 5.3 AM, the EC, and some other services that are useful for the testbed.
- It has Ethernet connections to the NYU-Poly network (Internet) and the OMF control network.
- This is the host that testbed users log in to. We have firewall rules set up on this host so that only users with a current reservation can configure the BS or communicate with the testbed nodes.
- 3) physical host that serves our group website and the reservation system for the testbed. It is connected to the NYU-Poly network (and Internet).
- Hosts (1) and (2) run Ubuntu; host (3) runs Windows.
- Our testbed nodes are scattered throughout several CS labs and research areas (not part of our group).
- They are connected to the pre-existing Ethernet jacks in the walls and floors of those rooms, which all go back to a CS server room, where we patch them through to our own (non-OpenFlow) switch for the control network.
- We do have an OpenFlow switch that we plan to deploy somewhere at some point.
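The firewall rules on host (1) might look something like the following iptables sketch. The specifics here are placeholders, not the actual NYU-Poly rules: 10.0.0.2 stands in for host (2)'s address on the control network, and 5052 for the wimaxrf AM port (a common omf-aggmgr default; substitute whatever the deployment actually uses).

{{{
# Placeholder address and port -- not the actual NYU-Poly values.
# Allow only host (2) to reach the wimaxrf AM port on host (1) ...
iptables -A INPUT -p tcp -s 10.0.0.2 --dport 5052 -j ACCEPT
# ... and drop traffic to that port from everyone else.
iptables -A INPUT -p tcp --dport 5052 -j DROP
}}}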
- BBN experience (need summary)
- All services are running on the same physical machine: the HP server that came with the base station.
- Services run in VirtualBox virtual machines on the base OS (a setup sketch follows the guest list below).
- Physical machine network connections:
- Eth0 (128.105.22.xxx): University network (internet)
- Eth1 (10.3.8.126): Network to IDU of base station
- Eth2 (10.0.0.1): Control network for OMF
- Eth3: Connected to OpenFlow (physically, but not enabled until we can get a VLAN tag to the GENI backbone)
- Base operating system: Debian
- VirtualBox guests:
- 1. Ubuntu 9.04 -> aggmgr 5.2 -> wimaxrf service
- Eth0 (128.105.22.xxx): bridged interface to university network
- Eth1 (10.3.8.254): bridged interface to base station network
- 2. Ubuntu 10.04 -> aggmgr 5.3 -> cmcStub, Frisbee, inventory, pxe, result, saveimage
- Also expctl, resctl, and xmpp on same machine
- Eth0 (128.105.22.xxx): bridged interface to university network
- Eth1 (10.0.0.200): bridged interface to control network
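As a sketch of how the guest wiring above can be set up: each guest NIC is bridged to one of the host's physical interfaces with VBoxManage, and the aggmgr services are enabled individually in the OMF configuration directory. The VM name, host interface names, and the available/enabled yaml layout below are assumptions (based on stock VirtualBox and OMF 5.3 packaging), not a record of the actual commands used here.

{{{
# Hypothetical VM name; host eth0 = university network, eth2 = OMF control network.
VBoxManage modifyvm "omf-aggmgr-vm" --nic1 bridged --bridgedadapter1 eth0
VBoxManage modifyvm "omf-aggmgr-vm" --nic2 bridged --bridgedadapter2 eth2

# Inside the guest: enable AM services by linking their yaml configs
# (assumes the stock OMF 5.3 available/enabled layout; paths may differ per install).
ln -s /etc/omf-aggmgr-5.3/available/frisbee.yaml /etc/omf-aggmgr-5.3/enabled/
ln -s /etc/omf-aggmgr-5.3/available/pxe.yaml /etc/omf-aggmgr-5.3/enabled/
/etc/init.d/omf-aggmgr-5.3 restart
}}}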
- Coming: plan to federate with ProtoGENI cluster
- Reference: TridentCom 2012: "Federating wired and wireless test facilities through Emulab and OMF: the iLab.t use case", Stefan Bouckaert (IBBT - Ghent University)
- BBN is exploring how to use this approach to better integrate WiMAX sites into GENI, so that the GENI AM API can be used to assign resources (see the omni sketch below).
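As a rough illustration of what AM-API-driven resource assignment could look like, here is a sketch using the omni client from the GENI gcf tools; the aggregate URL, slice name, and RSpec file are hypothetical.

{{{
# Hypothetical aggregate URL, slice name, and RSpec -- for illustration only.
omni.py -a https://wimax-am.example.org/xmlrpc/am listresources
omni.py -a https://wimax-am.example.org/xmlrpc/am createsliver myslice wimax-rspec.xml
}}}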