NetworkRspecMiniWorkshopNotes

(Notes by Christopher Small, 06/25/09.)

See the slides.

9:15 intro slides (Aaron)

High level motivation and goals for spiral 1. See Aaron's slides. Demonstrate end-to-end slices across representative samples of the major substrates and technologies envisioned in GENI. Goal for each cluster is to demonstrate end-to-end via your control framework.

This is what the GPO is paying you to do, this is what we want demonstrated at the end of spiral 1.

(John Turner) What is an "end"?

Loosely defined, but think from the perspective of the experimenter.

(Larry Peterson) Key bar is two or more aggregates sharing a packet?

No, in terms of aggregates, not in terms of packets.

(James Sterbenz) Not a single slice across ''all'' of them in year 1, pairwise is OK?

Only pairwise would be a disappointment. Really want to show that it's possible to use multiple aggregates, more than two. Minimal is two end nodes and two aggregates, but that's really the absolute minimum.

For each cluster,

- How does a network device or aggregate reserve resources?
- How do network slivers join to form an end-to-end slice?

Nobody has articulated yet how they are going to do this.

What are the plans to support this, in each cluster, in spiral 1?

(John Turner) When you talk about an end-to-end slice, do you have a prescription for what the data path looks like?

You're in cluster B; a candidate slice would include Internet2 VLANs connecting SPP nodes and !GpENI and Stanford.

(John Turner) Sounds like Internet2 VLANs on an SPP somehow ending up at Stanford.

GENI will always have a hodge-podge of connectivity. For now we'll have a set of preconfigured fixed VLANs in Internet2.

(John Turner) VLANs in I2 are being provided by Rob Ricci, not I2. I don't want to pick on Stanford per se, but it's not clear how we're going to make any connection happen.

(James Sterbenz) Also, what sort of experiments will tie all of these things together, given the disparate technologies?

(Larry Peterson) This is what will come out of discussions today.

(4) is raising two questions -- what capabilities will we have from backbones in spiral 1?

(Camilo Viecco) Are any clusters also using NLR?

(Guido) There is potentially more than one I2 backbone.

(Rob Ricci) The ones we're handing out today are between individual sites. I2 is providing a 10Gb wavelength and we're putting Ethernet switches on top of that wave. We'll be making VLANs on that wave without any I2 involvement.

(Ivan Seskar) Also an issue of when particular locations will be connected.

There is a ''general'' problem here. Let's not get lost in the weeds.

(Larry Peterson) Simple observation. As diverse as these technologies are, we have IP addresses for all of them. We can fall back to using IP (tunnels).

End users can access GENI experimentation this way. But we have set as a goal non-IP, not-layered-over-IP connectivity for GENI.

(Rob Ricci) Maybe we should do this with !GpENI -- get a fiber from the local I2 POP to our campus.

(John Turner) From my experience the I2 folks will push back, not make this easy.

Yes, we've got some things to work out here. However, to be concrete, those who have direct connectivity in spiral 1 will be expected to demonstrate the ability to stitch together VLANs.

(Ivan Seskar) Can the GPO be more involved with these discussions with I2 / NLR?

Yes, we have full-time staff who would be happy to help work things out with I2, NLR, etc.

(Ilya) Can we organize and get some shared stimulus money to wire up campuses?

(Chip) My understanding is that every campus is supposed to make a single proposal to NSF.

9:35 Network Configuration use case slides (Aaron)

Sliver creation. First makes reservations of stuff around the edge, but now needs to interconnect aggregates. (Assumption is that physical connectivity between these aggregates exists.) Then researcher passes an rspec requesting VLANs between aggregates, then asks for the topology to be set up.

- do we need a standard method to describe these network coordinates, or are they just blobs?
- does it go into the rspec?
- are there now constraints on the order in which networks can be added to a slice?
- how does it work with multiple networks in a series?
- how are ordering constraints handled in the control framework?
- how will tunnels work?

This is the discussion for this afternoon. How we describe resources is this morning's discussion.

(John Turner) Let's put this in terms as concrete as possible; I have a very difficult time connecting your abstract diagrams with my cluster or any other cluster.

We need to figure out what people need to do to support this by the fall. This is an engineering, not research, discussion. Different answers from different clusters are OK; different answers from a single cluster smell funny; you need to explain how it'll work. Throwing code over the transom probably won't cut it. There needs to be collective cluster ownership of this goal. The entire cluster is going to be evaluated on getting this to work.

9:45 Enterprise GENI view of the world (Rob Sherwood)

!OpenFlow overview.

!FlowVisor mostly feature complete, publicly released.

Aggregate manager: resource discovery, reports to CH as rspec, accepts reservations, converts rspec to !FlowVisor config.

Clearinghouse: implemented toy version for testing.

E-GENI rspec: switches, interfaces, "flowspace", opt-in, inter-aggregate.

(Chip) Have you thought about measurement yet?

Built into !OpenFlow -- byte and packet counters. With a controller you can redirect flows through a measurement box.

(Guido) We haven't thought it through in full detail, but you get a fair amount of control from !OpenFlow, can look deep into a packet.

We don't really have nodes in a traditional sense; have a datapath ID (i.e. MAC addr of the switch), list of interfaces. We don't "log into" a switch.

(Guido) As soon as you reserve a switch, the switch connects back to the URL of your controller and the switch starts asking your controller for instructions.

(Camilo Viecco) Do you have one user at a time, or multiple users?

You have one user at a time. Default rule is if we don't have a rule for a packet, a message gets sent to the controller.

(Guido) If you connect to something that is not part of your aggregate, it's represented differently. This describes internal references.

Can think of !FlowSpace as header "field=value" pairs plus actions. Packet classifier built in. Header fields (ip_src, ip_dst, ethertype, etc.). Actions are allow, deny, listen-only.

Ex: all web traffic, except to main server:

    ip_src=<main server> tcp_dport=80 :: DENY

    ip_src=1.2.3/24 tcp_dport=80 :: ALLOW

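(For illustration only: a minimal Python sketch of how rules like the two above might be matched against a packet. This is not FlowVisor's actual API or rule format; the field names follow the example, while the main-server address and the matching logic are made up.)

    # Toy flowspace matcher. Rules are (fields, action) pairs in priority order;
    # a packet that matches no rule is sent to the slice's controller, as
    # described above. All names and values here are illustrative.
    RULES = [
        ({"ip_src": "10.0.0.1", "tcp_dport": "80"}, "DENY"),     # 10.0.0.1 stands in for the main server
        ({"ip_src": "1.2.3.0/24", "tcp_dport": "80"}, "ALLOW"),  # all other web traffic
    ]

    def field_matches(rule_val, pkt_val):
        # Exact match, or a crude /24 prefix match for values written like "1.2.3.0/24".
        if rule_val.endswith("/24"):
            prefix = rule_val.split("/")[0].rsplit(".", 1)[0] + "."
            return pkt_val.startswith(prefix)
        return rule_val == pkt_val

    def classify(packet):
        for fields, action in RULES:
            if all(field_matches(v, packet.get(k, "")) for k, v in fields.items()):
                return action
        return "SEND_TO_CONTROLLER"

    print(classify({"ip_src": "1.2.3.7", "tcp_dport": "80"}))   # ALLOW
    print(classify({"ip_src": "9.9.9.9", "tcp_dport": "22"}))   # SEND_TO_CONTROLLER
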
(Guido) Can say "ipv6 goes to this controller, ipv4 goes to that controller."

Rspec - opt-in. How do we express which users experimenters want to allow in? "All", "first 10", "only port 80 on switch 3".

How do we do this between slivers?

Use case: "give me our planetlab nodes and the E-GENI network that connects them." Need to know how to communicate that off of this switch, off of this node, is a point of attachment.

(Aaron Falk) If you've got multiple slivers on a single planetlab node, how do you assign them to an E-GENI node? What does planetlab demultiplex on?

(Larry Peterson) TCP ports. We've been lazy in how you lock down ports; you claim a port on a wiki.

(Aaron Falk) There is a bootstrapping problem with planetlab and E-GENI. We need to figure this out.

(Chip) Do you have both !OpenFlow and planetlab nodes in the same room?

I do not.

(Ivan Seskar), (Nick Feamster) have both planetlab nodes and !OpenFlow switches, but they are not connected.

(Larry Peterson) We could have a global allocation of ports, tunnel numbers, etc., if we just have a global list.

(Guido) We want a dynamic mapping to slices.

(Ted Faber) If you're going to define slices in the rspec, you have to use globally understood parts of the flowspec; internally to the aggregate, switches may need to be topology aware.

10:30 Robert Ricci: Where We Are

Working prototype rspec.

Supports nodes, interfaces, links.

Used to allocate slivers -- raw PCs, VMs, VLANs, tunnels. Expressed in XML. Tunnels are cross-aggregate. Slice Embedding Service that understands it.

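(For illustration: a sketch, using Python's ElementTree, of the node/interface/link structure such a request might carry. The element and attribute names below are invented for readability; they are not the actual ProtoGENI rspec schema.)

    # Build a toy request "rspec": two nodes, one link between their interfaces.
    # Names are illustrative only, not the real schema.
    import xml.etree.ElementTree as ET

    rspec = ET.Element("rspec", type="request")

    left = ET.SubElement(rspec, "node", virtual_id="left", node_type="raw-pc")
    ET.SubElement(left, "interface", virtual_id="left:if0")

    right = ET.SubElement(rspec, "node", virtual_id="right", node_type="vm")
    ET.SubElement(right, "interface", virtual_id="right:if0")

    link = ET.SubElement(rspec, "link", virtual_id="link0", link_type="tunnel")
    ET.SubElement(link, "interface_ref", virtual_id="left:if0")
    ET.SubElement(link, "interface_ref", virtual_id="right:if0")

    print(ET.tostring(rspec, encoding="unicode"))
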
Under development: extensions using NVDL, cross-aggregate RSpecs.

In our view of the lifecycle of an rspec, we view it as progressive annotation. User creates a ''request'' (bound or unbound), passes it to a Slice Embedding Service, which annotates it with the physical resources selected, maybe more than one.

Gives to CM, CM signs (generates ticket), '''Manifest''' returned by CM, adds details like access method, MACs, etc.

Four types, similar but not identical.

Advertisement, catalog (published by component manager)

Request, constructed by user (purchase order)

Ticket, receipt (signed, type of credential)

Manifest, packing slip (returned by !CreateSliver())

Model we have now is that an individual component manager will accept or reject your request. This needs to be expanded for the case where it can only handle some of your request (e.g. 99 out of 100 requests).

We could make it more complicated; not sure what the right thing to do is.

Discussion of how to do what is essentially the travel agent problem.

[John Duerig]

Looking at the rspec as a mapping between the requested sliver and the physical resources.

(Aaron Falk) What does Nick need to do with the bgpmux to use this?

We're always adding information, never removing information. Advertisements have component IDs, requests have virtual IDs, a ''bound'' request has both, creating a mapping. Identifiers are URNs (GMOC proposal).

A sliver is uniquely identified by (slice ID, virtual ID, CM ID).

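(For illustration: the identity triple as a small Python structure. The URN strings below only mimic the general URN shape; the actual values and naming authorities shown are hypothetical.)

    # A sliver is named by the combination of slice, virtual ID, and component
    # manager. The URNs here are made-up examples, not real identifiers.
    from typing import NamedTuple

    class SliverID(NamedTuple):
        slice_id: str    # slice the sliver belongs to
        virtual_id: str  # requester-chosen name from the request rspec
        cm_id: str       # component manager that instantiated the sliver

    sliver = SliverID(
        slice_id="urn:publicid:IDN+example.net+slice+myexpt",
        virtual_id="left",
        cm_id="urn:publicid:IDN+example.net+authority+cm",
    )
    print(sliver)
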
(Aaron Falk) If what I'm advertising is a collection of stuff, what do I advertise?

If you don't want to show me the details of your network, it's not our design center.

(Aaron Falk) But I2 won't run an AM, won't identify all the optical switches along the path.

We'll advertise "here's an enet switch, here's another enet switch", and won't say anything about the topology beneath it, since it's dynamic and out of our control.

If you care whether you go across shared trunk links, etc., you can ask for that. The slice embedding service can do this to minimize cross-trunk latency, etc. To connect to Rob's talk, if !OpenFlow gives me an identifier that we need to pass back, it goes into the Manifest; any virtual identifier is ''my'' identifier (well, it has to be a URN).

Coordination problems: both ends may need to share information, e.g. tunnel endpoints. Ordering/timing may be important. Negotiation may be necessary (e.g. session key establishment). Some are transitive problems (e.g. VLAN #s, unless translation is possible).

(Nick Feamster) VINI has an rspec to create tunnels between virtual nodes, but need one to connect VINI to the mux; neither VINI nor !ProtoGeni has one.

(Nick Feamster) Is there one rspec that's going to say "I need a virtual node that is a tunnel to this mux"?

This is all typed; types are well-known device classes (e.g. openflow-enabled ethernet).

This grows out of stuff we do for emulab. We have a node type "!PlanetLab", its links are type "ipv4".

(Guido) You're assuming these connections are always layer 2? What if it's something else?

(Ted Faber) Type might be "runs a routing protocol".

We describe at the lowest level - e.g. ethernet, not ipv4 or tcp. You need to know you can run ipv4 over ethernet.

Links can cross aggregate boundaries -- nodes may not.

(Ivan Seskar) Said "node cannot cross aggregate" -- but this is common in wireless, e.g. wifi and wimax.

(Aaron Falk) Ah, the ''node'' will be in one aggregate, and will have two different kinds of links out (via different carriers, etc.)

Disconnect -- some people think in terms of nodes, some think in terms of networks.

Coordination across aggregates: design space.

a. Client negotiates with each CM, rspec is the medium.

b. CMs coordinate among themselves, using a ''new'' standardized control plane API. Rspec ''could'' be the medium.

c. Untrusted intermediary negotiates for client, intermediary has no privs that client doesn't have. Rspec ''could'' be the medium.

d. Trusted intermediary negotiates for client, pre-established trust between intermediary and CMs. Rspec ''could'' be the medium.

(Aaron Falk) Would Nick's dynamic tunnel server be an example of (b)?

Doing (a) and (b); going for a hybrid of (b) and (d). Plan, not done yet. CMs negotiate two-party arrangements directly, e.g. tunnels. Trusted intermediary negotiates multi-party (VLANs; trusted authority picks VLAN number, client is oblivious, only CMs talk to intermediary, negotiation information held by CM).

(Aaron Falk) Is this consistent with the DRAGON approach?

(Chris Tracy) Yes, generally -- I've got info in my slides.

Larry Peterson: Resource Specifications and End-to-End Slices

This is kind of high level.

I'm going to argue we have a bunch of nodes. It might be the case that some of these nodes are special -- e.g., underneath them they have a layer 2 technology they want to take advantage of.

Some of these nodes are going to be part of other aggregates that have special capabilities, e.g. !OpenFlow aggregate nodes, VINI aggregate nodes.

(Christopher Small GPO) So a node is a member of an aggregate for each kind of network it is on?

A node is controlled by only one aggregate at a time.

My definition of a node is something I can dump code into. "Clouds" export an aggregate manager interface. I can say "set up a circuit between node A and node B".

VINI is a cloud of nodes. Enterprise GENI is a cloud.

The reason I want to look at the world this way is that a bunch of nodes already have a functioning interconnect, the internet. The assumption that I need across two aggregate boundaries is that I have shared some demux key across the boundary, so there's a global allocation of these keys.

The world is a ''whole'' lot simpler if everyone is reachable via a shared id space -- it can be IP addrs or something else.

We already assumed that everything was reachable via some mechanism in the control plane. I think we should do the same thing for the data plane to make this all work more easily.

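(For illustration: a toy sketch of the shared demultiplexing-key idea -- one global list that every aggregate consults, so the same slice gets the same key everywhere. This registry is entirely hypothetical; nothing like it is specified here.)

    # Toy global registry handing out demux keys (e.g. port numbers or tunnel
    # keys) from one pool, so a slice gets the same key at every aggregate.
    class DemuxKeyRegistry:
        def __init__(self, low=32768, high=61000):
            self._pool = iter(range(low, high))
            self._assigned = {}          # slice name -> key

        def allocate(self, slice_name):
            if slice_name not in self._assigned:
                self._assigned[slice_name] = next(self._pool)
            return self._assigned[slice_name]

    registry = DemuxKeyRegistry()
    print(registry.allocate("myexpt"))   # same value wherever "myexpt" asks
    print(registry.allocate("myexpt"))   # idempotent for a given slice
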
(Aaron Falk) You're assuming that there is one of these things between each pair -- what if you have to go across three links?

It's complicating life a lot (for the researcher) to have to deal with all pairwise layer 2 possibilities. Giving me some guarantees about latency, failure, and bandwidth, independent of what layers of encapsulation I do, is the key to what the researcher wants to do.

I'm not removing the capability of working with different kinds of links, just abstracting it away.

(Ted Faber) Can you view this without having IP tunneling?

I view connecting via something lower than IP as enough more difficult that it's an optimization. I can run GRE tunneling over layer 2 as easily as at IP.

I'm questioning the value to the research community of connecting at layer 2. Jennifer (who is interested in IP networks) is happy with this.

We have built VINI and we have built !PlanetLab. Nobody's coming to VINI to use layer 2 circuits. They use VINI for the guarantees, not layer 2-level hacking.

(Rob Ricci) I have a theory that the reason people aren't using VINI for this is that they're using emulab. A significant minority do experiments ''below'' IP. Playing around with ethernet bridging, alternatives to IP, ... Still a minority, but we have them.

There are a couple of reasons people aren't using VINI in large numbers, but there are an awful lot who are most interested in predictable link behavior.

(Ivan Seskar) Most of the ORBIT experimenters don't care about IP at all.

That's a good point. I'd view ORBIT as another cloud (another aggregate).

(Ted Faber) If you try to tunnel a MAC over IP, well, it doesn't work very well. Once you go up to layer 3, you've disrupted layer 2 sufficiently that you can't necessarily run the experiments you want to run.

As a consequence of this, there may be aggregate-specific experiments. (Strong implication that this is not the common case.)

Two separable issues: interface negotiation (what kind of resources are available between aggregates), resource negotiation (which resources can I get).

Have WSDL version of the interface (program-heavy), also XSD version (data-heavy).

Backing off of pushing a massive nested rspec on you.

Adopted simple model:

        RSpec = GetResources()

returns list of all resources available

        SetResources(RSpec)

acquire all resources

Only way today is

     while (true)
       if SetResources(RSpec)
         break

Doesn't necessarily terminate.

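(For illustration: the loop above rewritten in Python with a retry cap so it cannot spin forever. GetResources/SetResources are the two calls from the model; the stub CM and the return conventions are assumptions made for the sketch.)

    # Bounded version of the while(true) loop. The stand-in CM below grants the
    # request on the third attempt, purely to make the example runnable.
    class StubCM:
        def __init__(self):
            self.calls = 0
        def GetResources(self):
            return "<rspec .../>"            # advertisement of available resources
        def SetResources(self, rspec):
            self.calls += 1
            return self.calls >= 3           # assumed: True when the request is granted

    def acquire(cm, rspec, max_tries=5):
        for _ in range(max_tries):
            if cm.SetResources(rspec):
                return True
        return False

    print(acquire(StubCM(), "<rspec .../>"))   # True (after three tries)
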
Aggregate returns capacity (what it will say yes to, in XSD) and policy (how to interpret the capacity, in XSLT). P(Request, Capacity) -> True means the request will be honored. P(Request, Capacity) -> False means the request will not be honored.

VINI today

     P(R, C) -> true if R and C are the same graph

VINI tomorrow

     P(R, C) -> true if R is a subset of C

!PlanetLab today

     P(R, C) -> true if R is a subset of C and site sliver count OK

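(For illustration: the three predicates above written in Python, with requests and capacities modeled as plain sets of resource names. A real implementation would compare rspec graphs; the set model and the per-site limit are simplifications.)

    # P(R, C) examples as set predicates. "Same graph" and "subset" collapse to
    # set equality and set inclusion in this toy model.
    def vini_today(request, capacity):
        return request == capacity

    def vini_tomorrow(request, capacity):
        return request <= capacity            # subset of the advertised capacity

    def planetlab_today(request, capacity, site_sliver_count, site_limit=10):
        # site_limit is a made-up number standing in for "site sliver count OK".
        return request <= capacity and site_sliver_count <= site_limit

    cap = {"node1", "node2", "node3"}
    print(vini_today({"node1"}, cap))                              # False
    print(vini_tomorrow({"node1", "node2"}, cap))                  # True
    print(planetlab_today({"node1"}, cap, site_sliver_count=4))    # True
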
(Nick Feamster) Is there a notion of time in an rspec?

Discussion of using Python vs XSLT for this.

(Aaron Falk) We're off track, you've gotten us off into rspec reconciliation.

Ilya: experimenting with ontologies for multi-layer network slicing

Need a way to describe what we have (substrate), what we want (request), what we are given (slice spec). Need to map resources, configure resources, and know what to measure.

Problem is that we have many organically grown solutions that kind of work. Need a functional model utilizing formalized techniques that fully describe the context of an experiment.

Multi-layered networks: not a single graph, but embedding of graphs of higher-level networks into graphs of lower-level networks.

We aren't the first to face this: Network Markup Language working group (NML-WG). Participants include I2, ESnet (!PerfSONAR model), Dante/GN2, University of Amsterdam (NDL).

NDL. Based on OWL/RDF, in use within GLIF. Can be used with RDF frameworks. SPARQL supported. Based on G.805 (generic functional architecture of transport networks).

Needs to be a computer-readable network description. Human-readable is good, but computer-readable is critical. Describe state of multi-layer network.

What else do we need? Ability to describe requests (fuzzy), ability to describe specifications (precise).

Looked at some other options; this one seemed like the best option. It's a large search space.

NDL-OWL extends NDL into OWL. Richer semantics. BEN RDF describes the BEN substrate. Developed a number of modules to assist in using it.

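(For illustration: a tiny RDF description in the spirit of NDL/NDL-OWL, using the Python rdflib package. The namespaces and property names are placeholders, not the real NDL vocabulary or the actual BEN substrate description.)

    # Describe one device and one interface as RDF triples, then query with SPARQL.
    # Requires the third-party rdflib package; all URIs below are placeholders.
    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF

    NDL = Namespace("http://example.org/ndl#")   # stand-in for an NDL-like vocabulary
    BEN = Namespace("http://example.org/ben#")   # stand-in for a substrate namespace

    g = Graph()
    g.add((BEN.switch1, RDF.type, NDL.Device))
    g.add((BEN.switch1, NDL.hasInterface, BEN["switch1_eth0"]))

    q = """SELECT ?dev ?iface WHERE {
             ?dev a <http://example.org/ndl#Device> ;
                  <http://example.org/ndl#hasInterface> ?iface . }"""
    for dev, iface in g.query(q):
        print(dev, iface)
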
Have forked from the original University of Amsterdam NDL; OWL has evolved, wanted to use better technology, better tools.

Goals -- more description languages, measurement, cloud, wireless, etc.

(Ted Faber) My concern is that it seems very detailed. More detail than we need?

(Ivan Seskar) Example: give me a linear topology of nodes.

(Aaron Falk) Assume there are tools that translate high-level descriptions into this.

I don't think people will point and click their way to this.

(Chip) GLIF could be federated into GENI and vice versa.

We're working on it.

MAX/DRAGON view, Chris Tracy

SOAP-based GENI aggregate manager.

End-to-end slices.

Over the last few months have built an aggregate manager in Java, runs in Tomcat as an Apache service, uses WSDL (web services API).

(Larry Peterson) We have a SOAP interface now, too.

On the back side it talks to a DRAGON-specific controller via SOAP. Or can go to a !PlanetLab controller.

(Chip) Is !OpenFlow currently using the same or a different SOAP interface?

(Larry Peterson) It's a subset, we need to have the discussion and get them in sync.

We've mostly tried to stick with what was in the slice facility architecture document. Been thinking of standing up a clearinghouse, but haven't done it yet.

We're using this to control any component at MAX: planetlab nodes, DRAGON, Eucalyptus, PASTA wireless, !NetFPGA-based !OpenFlow switches.

Putting !NetFPGA cards in a machine, putting them out on the net somewhere.

We want this aggregate manager to be able to manage anything on the net.

(Aaron Falk) This aggregate manager box isn't just a bunch of functions; it's doing some work to make sure things are allocated in a coherent manner.

You can go to this AM and run "list capabilities" or "get nodes" and pass in a filter spec (give me all the nodes that can do both DRAGON and planetlab). Returns a controller URL so you can go talk to the controller for more info.

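(For illustration: what a "get nodes" call might look like from a generic SOAP client, here Python's zeep library. The WSDL URL, the operation name, and its arguments are hypothetical stand-ins; the real interface is whatever the MAX aggregate manager's WSDL actually defines.)

    # Hypothetical SOAP call against an aggregate manager WSDL using zeep.
    from zeep import Client

    client = Client("https://am.example.org/geni/aggregate?wsdl")     # made-up endpoint
    nodes = client.service.GetNodes(filter="dragon AND planetlab")    # made-up operation/args
    for node in nodes:
        print(node)
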
Code is published on the website; instances will be site-specific.

Wrote the WSDL file by hand based on SFA; wsdl2java generated Java skeleton code.

(Demo using a generic SOAP client.)

(Chip) Nick, is this the AM you should be using?

(Rob Ricci) In our case we haven't described our interface as a WSDL.

(Rob Sherwood) There are a lot of WSDL tools you can use.

The code for this (svn repo) is pointed to in the slides (p. 11?).

We think the clearinghouse will handle ticketing. (Open issue of which things are in the AM, which are in the CH.)

We believe end-to-end slices will look like what we're already doing for interdomain circuit reservations for DRAGON, Internet2 DCN, ESnet, etc. We think it will look like our Path Computation Engine (PCE), but will be more like a Resource Computation Engine.

We use NM-WG control plane schema. Domains, nodes, ports, links.

Domain -- group of nodes.

Nodes: end systems, switches.

Ports: on each node.

Links: this is where we describe the switching capabilities of a link (VLAN ranges, etc.).

It's point-to-point only -- not broadcast. No support for multipoint VLANs.

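(For illustration: the domain / node / port / link hierarchy as nested Python data. Key names and values are invented for readability; they are not the actual NM-WG control plane schema elements.)

    # Toy rendering of a domain with one node, one port, and one point-to-point
    # link whose switching capability carries a VLAN range.
    domain = {
        "id": "example-domain.net",
        "nodes": [{
            "id": "switch1",
            "ports": [{
                "id": "switch1:ge-1/0/0",
                "links": [{
                    "remote_port": "switch2:ge-2/0/0",
                    "switching": "ethernet",       # layer-2 switching capability
                    "vlan_range": "100-200",       # VLANs usable on this link
                }],
            }],
        }],
    }

    # e.g. the VLAN range available on the first link out of switch1:
    print(domain["nodes"][0]["ports"][0]["links"][0]["vlan_range"])
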
<< switched slide decks >>

Assumption that at a domain boundary we only support VLANs. Restricted to layer 2, not cross-layer allocation.

(Chip) The GPO architecture would have a central clearinghouse, messages going up and down; in this the messages go across.

(OGF26 presentation, NDL working group, Multilayer NDL presentation by Freek Dijkstra -- great explanation of NDL.)

This is GMPLS-inspired, done with signaling via web services.

(Yufeng Xin) Who issues cross-boundary configurations?

Once there's agreement over which VLAN we're terminating on, each aggregate will do it.

(Chip) How baked is this? Used 10 times a day?

Hundreds of times a month. Solid. Pretty much always works.

What will each cluster be doing to reach the goal of cross-aggregate slices by the end of spiral 1? What are the inter-project dependencies?

(Larry Peterson) You're forcing us into realtime project negotiation here. I think we ought to do as much as we can assuming IP as the interconnect. It works for some, maybe not all -- !GpENI? SPP boxes?

(John Turner) Users can log into each and allocate by hand. No more explicit coordination is required to make it work.

What's needed to get there from here?

(James Sterbenz) From our perspective stitching together with IP works for now, but long term for GENI to succeed we need to support more.

Is doing this by hand workable? Does this work for everyone?

(Chris Tracy) We can provision nodes via DRAGON.

(John Turner) For both !GpENI and DRAGON we can terminate a connection that we have VLANs defined on. There is a non-trivial amount of control software that we need to write, but we have other things to do first, like getting systems up and running.

Goal is constructing end-to-end slices, not having it done automagically.

Guido, it sounds like you've got a little more work to do to connect the Stanford campus.

(Guido) I think IP is a good common denominator for connecting aggregates for now. If we want to scale this to hundreds of aggregates this won't work.

Proposal to Cluster B: draw a picture of this, show where things interconnect and at what layer, where there are tunnels, where there are lower layer connections.

(John Turner) Did a version of this for GEC3.

Yes, but we need this for the cluster. Nobody has put onto a single sheet of paper all the things that need to be done to do this.

(John Turner) We all connect to the outside world via Internet2.

We have our own wave on I2. There are no routers on it. Want to draw a distinction between our access-to-the-outside-world and the GENI backbone.

Goal is to demonstrate end-to-end slices across a range of technologies.

(Chris Tracy) Are you in DC yet?

(Robert Ricci) We will be within a small number of weeks.

Action: Chris Tracy will get into the Internet2 cage, and will pull a cable between DRAGON and SPP (?).

Internet2 has told us we may be able to get access to get people from DCN to the GENI wave.

(Rob Sherwood) What are the right interfaces? We are in LA, Houston, and NY, and are adding DC.

(Chris Tracy) How is !GpENI going to connect to Internet2 DCN or the GENI wave?

(James Sterbenz) To maxgigapop; our equipment is in the Internet2 POP in Kansas City.

(Chip) How much will this cost?

(James Sterbenz) Will take an action item to find out how much this will cost.

(Robert Ricci) You need to determine this quickly, we have a switch going in in the next couple of weeks. We need to talk.

Action: Rob, James, Heidi coordinate on Kansas City Internet2 connection.

Action: Aaron will send email to Rob and James to make sure they can contact each other via email.

MAX connects to NLR.

Action: Guido, John Turner will write a one-page high-level list of the actions one needs to take to configure a slice.

(Robert Ricci) We're already doing this, more or less. It's done with tunnels, once we get set up in Internet2 POPs. The plan outlined on his last slide shows how to get VLANs from campus to campus.

(Robert Ricci) Kentucky, CMU are already in. UML PEN shouldn't be too hard.

The picture will be very helpful.

(Robert Ricci) It was on my poster at the last GEC.

Action: Robert Ricci will pull together the picture.

Cluster D: the impression that I've got is that you're all pretty integrated with a common control framework. Do you understand how to connect UMass down to BEN?

(Ilya) Technically we know what some of the problems are. GPO committed us to being an NLR-based cluster. UMass is working to get some VLANs, but they are a limited resource. We are trying to figure out how to get to Charlotte (Internet2 terminates there) from RTP. Maybe MPLS or VLAN from the RENCI BEN POP to Internet2.

Kansei is not an Internet2 campus.

It's important to make sure that we don't overwhelm Internet2 with requests; go through the GPO (Heidi).

Let's get this picture so we can figure out where the gaps are.

(Ilya) My main problem is that there will be costs associated with connecting us to Internet2. We don't know how much.

(Harry) Does it make sense to draw a picture of BEN, NLR, and MAX?

(Ilya) I'd like to do this; let's talk about this.

Action: Harry and Ilya will talk about this.

Cluster E?

(Ivan Seskar) Hey, we're done. We're on the same campus. Except for the air gap; we need to get someone to pull a cable up six floors between where Internet2 terminates and where we are.

Cluster A?

(Ted Faber) We're trying to do some relevant end-to-end work. Hook up to the DCN, plugged into a DETERLab node on one end, ISI East on the other. Working on the expanded authorization work we have talked about at the last couple of GECs.