

Cluster D conference call on Friday, June 18, 2010


Ilia Baldine (RENCI)
Anirban Mandal
Chris Heermann

Harry Mussman (GPO)

Mike Zink (UMass)
Emmanuel Cecchet
Brian Lynn
David Irwin

Denise Gurkan (Univ Houston)

Michael Wang (Columbia)

Onjin (Oklahoma)

Wenjie (Ohio State)

Hongwei (Wayne State)

Rudra Dutta (NCState)


1) Taking stock of known unresolved problems:
+ Brian Lynn: Still has issues with premature close
+ Ilia: Will study further.
+ David: Eucalyptus cluster at UMass; needs to connect via new clearinghouse before GEC8
+ Ilia: Will have clearinghouse up by next week.
+ David: Needs to check new broker, explain issues with policy and make request.

2) New networking driver structure
+ Denise: Shade working on driver for 3400
+ Ilia: New document on drivers, quite detailed.

3) GEC8 demos
+ DICLOUD: yes; ticket
+ ViSE: yes; no ticket
+ DOME: no
+ IMF: yes
+ ORCA/BEN:

  • OpenFlow demo RENCI and GIST Korea
  • Programmable router, on multiple site hosts, practice for GEC9

+ Kansei: yes; a cross-site demo, sensor nodes and latency through
+ OKGEM: yes, possibly controllable robot
+ LEARN: Integration of 3400 into LEARN, at Univ Houston and Rice

4) GENI Solicitation 3:
+ Ilia: Considering GENI racks

  • Many Cluster D sites
  • Each site with separate network connectivity
  • Spaces in lab or university data center?
  • Connectivity via many (10+) VLANs?
  • Need good connectivity into sites

+ David: Space, power, cooling requirements for each rack?

  • Ilia: 19" full height rack
  • GENI enabled open-flow switches

+ Ilia: Purchased by project, sent out to each site

  • Agreement with manufacturer, who sends it out.
  • Harry: See current WiMAX mesoscale project

+ Ilia: Two CFs: ORCA and protoGENI; racks can be accessed via either

  • Local interface, but also managed by central site (e.g., RENCI)

+ Ilia: How many sites? 40 racks too many for level of funding

  • Perhaps 15 sites, each with one or two racks
  • Need sites on the west coast

+ Ilia: Connectivity to backbone costs

  • Campus?
  • Regional networks?
  • Perhaps: FrameNet wherever possible, otherwise Starlight

Last call A) Connectivity to Wayne State and Ohio State
+ Joe: Ready to go from Starlight to Merit racks
+ Hongwei: Merit to OARnet to Ohio State; also Merit to Wayne State
+ Ilia: Working with MAX on interoperability
+ Ilia: working toward translation from protoGENI to NDL

  • Harry: Srikanth also working on topic

Last call B) GEC9 demo proposals, accepted:
a) Joe (Northwestern): iGENI, HP and Ericsson: video transcoding on clouds over dynamically provisioned networks. Prototype at GEC8. HP wants to include OpenFlow.
b) David (UMass): Now-casting (short-term weather forecasting), GEC7 demo plus more computation
c) Jeff Chase (Duke): Dynamically stand-up ASs with custom click routers