Changes between Initial Version and Version 1 of NetworkRspecMiniWorkshopNotes


Timestamp:
06/25/09 16:41:29
Author:
Christopher Small
Comment:

--

  • NetworkRspecMiniWorkshopNotes

     1See http://groups.geni.net/geni/wiki/NetworkRSpecMiniWorkshop for slides
     2
     39:15 intro slides (Aaron)
     4
     5High-level motivation and goals for spiral 1. See Aaron's slides. Demonstrate
     6end-to-end slices across representative samples of the major substrates and
     7technologies envisioned in GENI. Goal for each cluster is to demonstrate
     8end-to-end via your control framework.
     9
     10This is what the GPO is paying you to do, this is what we want demonstrated
     11at the end of spiral 1
     12
     13(John Turner) What is an "end"?
     14
     15Loosely defined, but think from perspective of experimenter.
     16
     17(Larry Peterson) Key bar is two or more aggregates sharing a packet?
     18
     19No, aggregates, not in terms of packets
     20
     21(James Sterbenz) Not single slice across ''all'' of them in year 1, pairwise is
     22sufficient
     23
     24Only pairwise would be a disappointment. Really want to show that it's
     25possible to use multiple aggregates, more than two. Minimal is two end nodes
     26and two aggregates, but that's really the absolute minimum.
     27
     28For each cluster,
     29
     30- How does a network device or aggregate reserve resources?
     31
     32- How do network slivers join to form an end-to-end slice?
     33
     34Nobody has articulated yet how they are going to do this.
     35
     36In each cluster, what are the plans to support this, in each cluster, in
     37spiral 1?
     38
     39(John Turner) When you talk about an end-to-end slice, do you have a prescription for
     40what the data path looks like?
     41
     42You're in cluster B, a candidate slice would include internet2 vlans
     43connecting spp nodes and !GpENI and Stanford.
     44
     45(John Turner) Sounds like internet2 vlans on an spp somehow ending up at stanford
     46
     47GENI will always have a hodge-podge of connectivity. For now we'll have a set
     48of preconfigured fixed vlans in internet 2.
     49
     50(John Turner) VLANS in I2 are being provided by Rob Ricci, not I2. I don't
     51want to pick on Stanford per se, but it's not clear how we're going to make
     52any connection happen.
     53
     54(James Sterbenz) Also what sort of experiments will tie all of these things
     55together, given the disparate technologies
     56
     57(Larry Peterson) this is what will come out of discussions today
     58
     59(4) is raising two questions -- what capabilities will we have from backbones
     60in spiral 1
     61
     62(Camilo Viecco) are any clusters also using NLR?
     63
     64(Guido) there is potentially more than one I2 backbone
     65
     66(Rob Ricci) the ones we're handing out today are between individual sites. I2
     67is providing a 10Gb wavelength and we're putting ethernet switches on top of
     68that wave. We'll be making VLANs on that wave without any I2 involvement.
     69
     70(Ivan Seskar) also an issue when particular locations will be connected to the
     71wave
     72
     73There is a ''general'' problem here. Let's not get lost in the weeds.
     74
     75(Larry Peterson) Simple observation. As diverse as these technologies are, we
     76have IP addresses for all of them. We can fall back to using IP (tunnels) for
     77everything.
     78
     79End users can access GENI experimentation this way. But we have set as a goal
     80non-IP connectivity, not layered over IP, for GENI.
     81
     82(Rob Ricci) Maybe we should do this with !GpENI -- get a fiber from the local I2 POP
     83to our campus.
     84
     85(John Turner) In my experience the I2 folks will push back, not make this easy
     86
     87Yes, we've got some things to work out here. However, to be concrete, those
     88who have direct connectivity in spiral 1 will be expected to demonstrate the
     89ability to stitch together VLANs.
     90
     91(Ivan Seskar) Can GPO be more involved with these discussions with I2 / NLR?
     92
     93Yes, we have full time staff who would be happy to help work things out with
     94I2, NLR, etc.
     95
     96(Ilya) can we organize and get some shared stimulus money to wire up campuses?
     97
     98(Chip) My understanding is that every campus is supposed to make a single
     99proposal to NSF.
     100
     1019:35 Network Configuration use case slides (Aaron)
     102
     103Sliver creation. First makes reservations of stuff around the edge, but now
     104needs to interconnect aggregates. (Assumption is that physical connectivity
     105between these aggregates exists.) Then researcher passes rspec requesting
     106VLANs between aggregates, then asks for the topology to be set up.
     107
     108- do we need a standard method to describe these network coordinates, or are
     109  they just blobs?
     110
     111- does it go into the rspec?
     112
     113- are there now constraints on the order in which networks can be added to a
     114  slice?
     115
     116- how does it work with multiple networks in a series?
     117
     118- how are ordering constraints handled in the control framework?
     119
     120- how will tunnels work?
     121
     122This is the discussion for this afternoon. How we describe resources is
     123this morning's topic.
     124
     125(John Turner) let's put this in as concrete terms as possible, I have a very difficult
     126time connecting your abstract diagrams with my cluster or any other cluster.
     127
     128We need to figure out what people need to do to support this by the fall.
     129This is an engineering, not research, discussion. Different answers from
     130different clusters is OK, different answers from single cluster smells
     131funny; you need to explain how it'll work. Throwing code over the transom
     132probably won't cut it. Needs to be collective cluster ownership of this
     133goal. Entire cluster is going to be evaluated on getting this to work.
     134
     1359:45 Enterprise GENI view of the world, (Rob Sherwood)
     136
     137!OpenFlow overview
     138
     139!FlowVisor mostly feature complete, publicly released.
     140
     141Aggregate manager: resource discovery, reports to CH as rspec, accepts
     142reservations, converts rspec to flowvisor config.
     143
     144Clearinghouse: implemented toy version for testing.
     145
     146E-GENI rspec: switches, interfaces, "flowspace", opt-in, inter-aggregate
     147connectivity
     148
     149(Chip) have you thought about measurement yet?
     150
     151Built into openflow -- byte and packet counters. With a controller you can
     152redirect flows through a measurement box.
     153
     154(Guido) We haven't thought it through in full detail, but you get a fair amount
     155of control from !OpenFlow, can look deep into a packet.
     156
     157We don't really have nodes in a traditional sense, have a datapath ID (i.e.
     158MAC addr of the switch), list of interfaces. We don't "log into" a switch.
     159
     160(Guido) as soon as you reserve a switch, the switch connects back to the URL of
     161your controller and the switch starts asking your controller for
     162instructions.
     163
     164(Camilo Viecco) Do you have one user at a time, or multiple users?
     165
     166You have one user at a time. Default rule is if we don't have a rule for a
     167packet, a message gets sent to the controller.
     168
     169(Guido) If you connect to something that is not part of your aggregate, it's
     170represented differently. This describes internal references.
     171
     172Can think of !FlowSpace as header, "field=value" pairs plus actions. Packet
     173classifier built-in. Header fields (ip_src, ip_dst, ethertype, etc.).
     174Actions are allow, deny, listen-only.
     175
     176Ex: all web traffic, except to main server:
     177
     178    ip_src = 1.2.3.4 tcp_dport=80 :: DENY
     179
     180    ip_src=1.2.3/24 tcp_dport=80 :: ALLOW
     181
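As a sketch of the !FlowSpace model just described, the rules above can be read as an ordered packet classifier. The following Python is illustrative only: the field names, the first-match-wins policy, and the default send-to-controller action are assumptions based on these notes, not the E-GENI API (and `1.2.3/24` is written out as `1.2.3.0/24`).

```python
import ipaddress

# Hypothetical FlowSpace: ordered (fields, action) rules; first match wins.
RULES = [
    # "all web traffic, except to main server"
    ({"ip_src": "1.2.3.4", "tcp_dport": 80}, "DENY"),
    ({"ip_src": "1.2.3.0/24", "tcp_dport": 80}, "ALLOW"),
]

def field_matches(rule_value, packet_value):
    """Exact match, or CIDR containment for prefix-valued IP fields."""
    if packet_value is None:
        return False
    if isinstance(rule_value, str) and "/" in rule_value:
        return ipaddress.ip_address(packet_value) in ipaddress.ip_network(rule_value)
    return rule_value == packet_value

def classify(packet, rules):
    """Return the action of the first rule all of whose fields match."""
    for fields, action in rules:
        if all(field_matches(v, packet.get(k)) for k, v in fields.items()):
            return action
    # Default rule from the discussion: no match -> ask the controller.
    return "SEND_TO_CONTROLLER"

assert classify({"ip_src": "1.2.3.4", "tcp_dport": 80}, RULES) == "DENY"
assert classify({"ip_src": "1.2.3.77", "tcp_dport": 80}, RULES) == "ALLOW"
```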
     182(Guido) can say "ipv6 goes to this controller, ipv4 goes to that controller."
     183
     184Rspec - opt-in. How do we express which users experimenters want to allow in?
     185"All", "first 10", "only port 80 on switch 3".
     186
     187How do we do this between slivers?
     188
     189Use case: "give me our planetlab nodes and the E-GENI network that connects
     190them." Need to know how to communicate that off of this switch, off of this
     191node, is a point of attachment.
     192
     193(Aaron Falk) if you've got multiple slivers on a single planetlab node, how do you
     194assign them to an egeni node? what does planetlab demultiplex on?
     195
     196(Larry Peterson) tcp ports. we've been lazy in how you lock down ports, you claim a port on
     197a wiki.
     198
     199(Aaron Falk) There is a bootstrapping problem with planetlab and E-GENI. We need to
     200figure this out.
     201
     202(Chip) do you have both openflow and planetlab nodes in the same room?
     203
     204I do not.
     205
     206(Ivan Seskar), (Nick Feamster) have both planetlab nodes and openflow
     207switches, but they are not connected
     208
     209(Larry Peterson) we could have a global allocation of ports, tunnel numbers,
     210etc., if we just have a global list.
     211
     212(Guido) we want a dynamic mapping to slices
     213
     214(Ted Faber) If you're going to define slices in the rspec, you have to use
     215globally understood parts of the flowspec; internally to the aggregate,
     216switches may need to be topology aware
     217
     21810:30 Robert Ricci:  Where We Are
     219
     220Working prototype rspec
     221
     222Supports nodes, interfaces, links.
     223
     224Used to allocate slivers -- raw PCs, vms, vlans, tunnels. Expressed in XML.
     225Tunnels are cross-aggregate. There is a Slice Embedding Service that understands it.
     226
     227Under development: extensions using NVDL, cross-aggregate RSpecs.
     228
     229We view the lifecycle of an rspec as progressive
     230annotation. User creates ''request'' (bound or unbound), passes to a Slice
     231Embedding Service, which annotates it with the physical resources selected,
     232maybe more than one.
     233
     234Gives to CM, CM signs (generates ticket), '''Manifest''' returned by CM, adds
     235details like access method, MACs, etc.
     236
     237Four types, similar but not identical.
     238
     239Advertisement, catalog (published by component manager)
     240
     241Request, constructed by user (purchase order)
     242
     243Ticket, receipt (signed, type of credential)
     244
     245Manifest, packing slip (returned by !CreateSliver())
     246
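The progressive-annotation model sketched above (request, then bound request, then manifest) can be illustrated with plain dictionaries. The field names and URNs below are hypothetical, not ProtoGENI's actual schema; the one property taken from the talk is that each stage only adds information, never removes it.

```python
def make_request(virtual_id, node_type):
    """Request: constructed by the user (the 'purchase order')."""
    return {"virtual_id": virtual_id, "type": node_type}

def embed(request, component_id):
    """Slice Embedding Service: binds the request to a physical resource."""
    bound = dict(request)                 # copy; never mutate earlier stages
    bound["component_id"] = component_id  # bound request has both IDs
    return bound

def manifest(bound, **details):
    """Manifest: returned by CreateSliver(), the 'packing slip' --
    adds details like access method, MACs, etc."""
    out = dict(bound)
    out.update(details)
    return out

req = make_request("node-a", "pc")
bound = embed(req, "urn:example:cm:pc42")   # hypothetical URN
man = manifest(bound, mac="00:11:22:33:44:55", access="ssh")

# Every field from an earlier stage survives into every later stage.
assert all(req[k] == man[k] for k in req)
```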
     247Model we have now is that an individual component manager will accept or
     248reject your request. This needs to be expanded, if it can only handle some
     249of your request (e.g. 99 out of 100 requests).
     250
     251We could make it more complicated, not sure what the right thing to do is.
     252
     253Discussion of how to do what is essentially the travel agent problem.
     254
     255[John Duerig]
     256
     257Looking at the rspec as a mapping between the requested sliver and the
     258physical resources.
     259
     260(Aaron Falk) What does nick need to do with the bgpmux to use this?
     261
     262We're always adding information, never removing information advertisements
     263have component IDs, requests have virtual IDs, a ''bound'' request has both,
     264creating a mapping. Identifiers are URNs (GMOC proposal)
     265
     266A sliver is uniquely identified by (slice ID, virtual ID, CM ID)
     267
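A minimal sketch of this uniqueness rule, with invented URNs: a registry keyed by the (slice ID, virtual ID, CM ID) triple rejects duplicates, while the same virtual ID under a different CM names a distinct sliver.

```python
# Hypothetical sliver registry keyed by the uniqueness triple above.
slivers = {}

def register(slice_id, virtual_id, cm_id, info):
    key = (slice_id, virtual_id, cm_id)
    if key in slivers:
        raise ValueError("sliver already exists: %r" % (key,))
    slivers[key] = info

register("urn:example:slice1", "node-a", "urn:example:cm:emulab", {"state": "ready"})
# Same virtual ID, different CM: a different sliver, so this is allowed.
register("urn:example:slice1", "node-a", "urn:example:cm:max", {"state": "ready"})
assert len(slivers) == 2
```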
     268(Aaron Falk) If what I'm advertising is a collection of stuff, what do I advertise?
     269
     270If you don't want to show me the details of your network, that's not our
     271design center.
     272
     273(Aaron Falk) but I2 won't run an AM, won't identify all the optical switches
     274along the path
     275
     276we'll advertise "here's an enet switch, here's another enet switch", and
     277won't say anything about the topology beneath it, since it's dynamic and out
     278of our control.
     279
     280If you care whether you go across shared trunk links, etc., you can ask for
     281that. The slice embedding service can do this to minimize cross-trunk
     282latency, etc. To connect  to Rob's talk, if openflow gives me an identifier
     283that we need to pass back, it goes into the Manifest, any virtual identifier
     284is ''my'' identifier (well, it has to be a URN).
     285
     286Coordination problems: both ends may need to share information, e.g. tunnel
     287endpoints. Ordering/timing may be important. Negotiation may be necessary
     288(e.g. session key establishment). Some are transitive problems (e.g. VLAN
     289#s, unless translation is possible). Assumption is that cross-aggregate
     290
     291(Nick Feamster) VINI has rspec to create tunnels between virtual nodes, but
     292need one to connect VINI to the mux; neither VINI nor !ProtoGeni has one
     293
     294(Nick Feamster) is there one rspec that's going to say "I need a virtual node
     295that is a tunnel to this mux"?
     296
     297(laughter)
     298
     299this is all typed, types are well-known device classes (e.g. openflow
     300enabled ethernet)
     301
     302this grows out of stuff we do for emulab. We have a node type "!PlanetLab",
     303its links are type "ipv4".
     304
     305(Guido) You're assuming these connections are always layer 2? What if it's
     306something else?
     307
     308(Ted Faber) type might be "runs a routing protocol"
     309
     310We describe at the lowest level - e.g. ethernet, not ipv4, or tcp. You need
     311to know you can run ipv4 over ethernet.
     312
     313Links can cross aggregate boundaries -- nodes may not.
     314
     315(Ivan Seskar) Said "node cannot cross aggregate" -- but this is common in wireless,
     316e.g. wifi and wimax.
     317
     318(Aaron Falk) Ah, the ''node'' will be in one aggregate, will have two different
     319kinds of links out (via different carriers, etc.)
     320
     321Disconnect -- some people think in terms of nodes, some think in terms of
     322links.
     323
     324Coordination across aggregates: design space
     325
     326a. client negotiates with each CM, rspec is the medium.
     327
     328b. cms coordinate among themselves, using a ''new'' standardized control plane
     329API. Rspec ''could'' be the medium.
     330
     331c. Untrusted intermediary negotiates for client, intermediary has no privs
     332that client doesn't have. Rspec ''could'' be the medium.
     333
     334d. Trusted intermediary negotiates for client, pre-established trust between
     335intermediary and CMs. Rspec ''could'' be the medium.
     336
     337(Aaron Falk) Would Nick's dynamic tunnel server be an example of (b)?
     338
     339Yes
     340
     341Doing (a) and (b), going for hybrid of (b) and (d). Plan, not done yet. CMs
     342negotiate two-party arrangements directly, e.g. tunnels. Trusted intermediary
     343negotiates multi-party (VLANs: trusted authority picks VLAN number, client is
     344oblivious, only CMs talk to intermediary); negotiation information held by CM.
     345
     346(Aaron Falk) is this consistent with DRAGON approach?
     347
     348(Chris Tracy) yes, generally -- I've got info in my slides.
     349
     350----------------------------------------------------------------
     351break
     352----------------------------------------------------------------
     353
     354Larry Peterson: Resource Specifications and End-to-End Slices
     355
     356This is kind of high level.
     357
     358I'm going to argue we have a bunch of nodes. It might be the case that some of
     359these nodes are special -- e.g., underneath them they have a layer 2 technology they
     360want to take advantage of.
     361
     362Some of these nodes are going to be part of other aggregates that have special
     363capabilities, e.g. !OpenFlow aggregate nodes, VINI aggregate nodes.
     364
     365(Christopher Small GPO) so a node is a member of an aggregate for each kind of
     366network it is on?
     367
     368A node is controlled by only one aggregate at a time.
     369
     370My definition of a node is something I can dump code into. "Clouds" export an
     371aggregate manager interface. I can say "set up a circuit between node A and
     372node B".
     373
     374VINI is a cloud of nodes. Enterprise GENI is a cloud.
     375
     376The reason I want to look at the world this way, is that a bunch of nodes
     377already have a functioning interconnect, the internet. The assumption I
     378need across an aggregate boundary is that I have shared some demux key
     379across the boundary, so there's a global allocation of these keys.
     380
     381The world is a ''whole'' lot simpler if everyone is reachable via a shared id
     382space -- it can be ip addrs or something else.
     383
     384We already assumed that everything was reachable via some mechanism in the
     385control plane. I think we should do the same thing for the data plane to make
     386this all work more easily.
     387
     388(Aaron Falk) You're assuming that there is one of these things between each pair -- what
     389if you have to go across three links?
     390
     391It's complicating life a lot (for the researcher) to have to deal with all
     392pairwise layer 2 possibilities. Guarantees about latency, failure, and
     393bandwidth, independent of what layers of encapsulation I use, are the
     394key to what the researcher wants to do.
     395
     396I'm not removing the capability of working with different kinds of links, just
     397abstracting it away.
     398
     399(Ted Faber) Can you view this without having IP tunneling?
     400
     401My view is that connecting via something lower than IP is enough more
     402difficult that it's an optimization. I can run GRE tunneling over layer 2 as
     403easily as at IP.
     404
     405I'm questioning the value to the research community to connect at layer 2.
     406Jennifer (who is interested in IP networks) is happy with this. 
     407
     408We have built VINI and we have built !PlanetLab. Nobody's coming to VINI to use
     409layer 2 circuits. They use VINI for the guarantees, not layer 2-level hacking.
     410
     411(Rob Ricci) I have a theory that people aren't using VINI for this because
     412they're using emulab. A significant minority do experiments ''below'' IP. Playing around
     413with ethernet bridging, alternatives to IP, ... Still a minority, but we have
     414them.
     415
     416There are a couple of reasons people aren't using VINI in large numbers, but
     417there are an awful lot who are most interested in predictable link behavior.
     418
     419(Ivan Seskar) Most of the orbit experimenters don't care about IP at all. But that's the
     420edge.
     421
     422That's a good point. I'd view ORBIT as another cloud (another aggregate).
     423
     424(Ted Faber) If you try to tunnel a MAC over IP, well, it doesn't work very well. Once
     425you go up to layer 3, you've disrupted layer 2 sufficiently that you can't
     426necessarily run the experiments you want to run.
     427
     428As a consequence of this, there may be aggregate-specific experiments.
     429(Strong implication that this is not the common case.)
     430
     431Two separable issues: interface negotiation (what kind of resources are
     432available between aggregates), resource negotiation (which resources can I
     433have)
     434
     435Have WSDL version of the interface (program-heavy), also XSD version (data-heavy)
     436
     437Backing off of pushing massive nested rspec on you.
     438
     439Adopted simple model:
     440
     441        RSpec = !GetResources()
     442
     443returns list of all resources available
     444
     445        !SetResources(RSpec)
     446
     447acquire all resources
     448
     449Only way today is
     450
     451     while (true)
     452       if !SetResources(RSpec)
     453         break
     454
     455Doesn't necessarily terminate.
     456
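One obvious way to make the loop above terminate is to bound the retries and back off between attempts. The sketch below uses hypothetical stand-ins for !GetResources()/!SetResources(), not the real interface:

```python
import time

def acquire(get_resources, set_resources, max_tries=5, delay=1.0):
    """Bounded version of the retry loop: re-fetch availability before
    each attempt and give up after max_tries, so the loop terminates
    even if the request is never granted."""
    for attempt in range(max_tries):
        rspec = get_resources()              # current availability
        if set_resources(rspec):             # try to acquire it all
            return rspec
        time.sleep(delay * (2 ** attempt))   # exponential backoff
    return None                              # caller must handle failure

# Toy stand-ins that succeed on the third attempt.
calls = {"n": 0}
def fake_get():
    return {"nodes": ["a", "b"]}
def fake_set(rspec):
    calls["n"] += 1
    return calls["n"] >= 3

result = acquire(fake_get, fake_set, max_tries=5, delay=0)
assert result == {"nodes": ["a", "b"]}
```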
     457Aggregate returns capacity (what it will say yes to in XSD) and policy (how to
     458interpret the capacity in XSLT). P(Request, Capacity) -> True means request
     459will be honored. P(Request, Capacity) -> False means request will not be
     460honored.
     461Examples:
     462
     463VINI today
     464     P(R, C) -> true if R and C are the same graph
     465
     466VINI tomorrow
     467     P(R, C) -> true if R is a subset of C
     468
     469!PlanetLab today
     470          P(R, C) -> true if R is a subset of C and site sliver count OK
     471
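The predicate examples above can be made concrete by modeling an RSpec as a set of edges; this graph encoding is an assumption (the talk did not fix one):

```python
# P(Request, Capacity) as simple set predicates over edge sets.

def p_vini_today(request, capacity):
    """True iff R and C are the same graph."""
    return request == capacity

def p_vini_tomorrow(request, capacity):
    """True iff R is a subgraph (subset) of C."""
    return request <= capacity

C = {("a", "b"), ("b", "c"), ("a", "c")}   # capacity: what the AM says yes to
R = {("a", "b"), ("b", "c")}               # request: a strict subgraph

assert not p_vini_today(R, C)   # not the same graph -> refused today
assert p_vini_tomorrow(R, C)    # subgraph -> honored tomorrow
assert p_vini_today(C, C)
```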
     472(Nick Feamster) Is there a notion of time in an rspec?
     473
     474Yes
     475
     476Discussion of using Python vs XSLT for this.
     477
     478(Aaron Falk) We're off track, you've gotten us off into rspec reconciliation.
     479
     480----------------------------------------------------------------
     481
     482Lunch
     483
     484----------------------------------------------------------------
     485
     486Ilya: experimenting with ontologies for multi-layer network slicing
     487
     488Need a way to describe what we have (substrate), what we want (request), what
     489we are given (slice spec). Need to map resources, configure resources, and
     490know what to measure.
     491
     492Problem is that we have many organically grown solutions that kind of
     493work. Need a functional model utilizing formalized techniques that fully
     494describe the context of an experiment.
     495
     496Multi-layered networks, not a single graph, embedding of graphs of
     497higher-level networks into graphs of lower-level networks.
     498
     499We aren't the first to face this: the Network Markup Language working group
     500(NML-WG). Participants include I2, ESnet (!PerfSONAR model), Dante/GN2,
     501University of Amsterdam (NDL).
     502
     503NDL. Based on OWL/RDF, in use within GLIF. Can be used for RDF frameworks.
     504SPARQL supported. Based on G.805 (Generic functional architecture of transport
     505networks).
     506
     507Needs to be a computer-readable network description. Human-readable is good, but
     508computer-readable is critical. Describe state of multi-layer network.
     509
     510What else do we need? Ability to describe requests (fuzzy), ability to
     511describe specifications (precise).
     512
     513Looked at some other options, this one seemed like the best option. It's a
     514large search space.
     515
     516NDL-OWL extends NDL into OWL. Richer semantics. BEN RDF describes BEN
     517substrate. Developed a number of modules to assist in using it.
     518
     519Have forked from original University of Amsterdam NDL; OWL has evolved, wanted
     520to use better technology, better tools.
     521
     522Goals -- more description languages, measurement, cloud, wireless, etc.
     523
     524(Ted Faber) My concern is that it seems very detailed. More detail than we need?
     525
     526(Ivan Seskar) Example: give me a linear topology of nodes
     527
     528(Aaron Falk) Assume there are tools that translate high-level descr into this.
     529
     530I don't think people will point and click their way to this.
     531
     532(Chip) GLIF could be federated into GENI and vice versa
     533
     534We're working on it.
     535
     536----------------------------------------------------------------
     5371:15pm
     538----------------------------------------------------------------
     539
     540MAX/DRAGON view, Chris Tracy
     541
     542SOAP-based GENI aggregate manager.
     543
     544End-to-end slices
     545
     546Over the last few months have built an aggregate manager in Java; runs in Tomcat
     547as an Apache service, uses WSDL (web services API).
     548
     549(Larry Peterson) We have a SOAP interface now, too, should be able to
     550interoperate.
     551
     552On the back side talks to DRAGON-specific controller via SOAP. Or can go to a
     553!PlanetLab controller.
     554
     555(Chip) is !OpenFlow currently using same or different SOAP interface?
     556
     557(Larry Peterson) It's a subset, we need to have the discussion and get them in sync
     558
     559We've mostly tried to stick with what was in the slice facility architecture
     560document. Been thinking of standing up a clearinghouse, but haven't done it
     561yet.
     562
     563We're using this to control any component at MAX: planetlab nodes, DRAGON,
     564Eucalyptus, PASTA wireless, !NetFPGA-based !OpenFlow switches.
     565
     566Putting !NetFPGA cards in a machine, putting them out on the net somewhere.
     567
     568We want this aggregate manager to be able to manage anything on the net.
     569
     570(Aaron Falk) This aggregate manager box isn't just a bunch of functions; it's
     571doing some work to make sure things are allocated in a coherent manner
     572
     573You can go to this AM and run "list capabilities" or "get nodes" and pass in a
     574filter spec (give me all the nodes that can do both dragon and planetlab).
     575Returns a controller URL so you can go talk to the controller for more info.
     576
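A toy illustration of the filter-spec query just described (names invented; the real AM is a SOAP/WSDL service, not a Python function):

```python
# Hypothetical inventory; "caps" models per-node capabilities.
NODES = [
    {"id": "node1", "caps": {"dragon", "planetlab"}, "controller": "controller-url-1"},
    {"id": "node2", "caps": {"planetlab"}, "controller": "controller-url-2"},
]

def get_nodes(filter_caps):
    """Filter spec: keep nodes offering *all* requested capabilities,
    e.g. 'give me all the nodes that can do both dragon and planetlab'."""
    want = set(filter_caps)
    return [n["id"] for n in NODES if want <= n["caps"]]

assert get_nodes(["dragon", "planetlab"]) == ["node1"]
assert get_nodes(["planetlab"]) == ["node1", "node2"]
```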
     577Code is published on the website, instances will be site-specific (aggregate
     578specific).
     579
     580Wrote WSDL file by hand based on SFA. wsdl2java generated Java skeleton code.
     581
     582(http://geni.dragon.maxgigapop.net:8080/axis2/services/AggregateGENI?wsdl demo
     583using a generic SOAP Client)
     584
     585(Chip) Nick is this the AM you should be using?
     586
     587(Rob Ricci) in our case we haven't described our interface as a WSDL
     588
     589(Rob Sherwood) there are a lot of WSDL tools you can use
     590
     591The code for this (svn repo) is pointed to in the slides (p. 11?)
     592
     593We think the clearinghouse will handle ticketing. (Open issue of which things
     594are in the AM, which are in the CH.)
     595
     596We believe end-to-end slices will look like what we're already doing for
     597interdomain circuit reservations for DRAGON, Internet2 DCN, ESnet, etc. We
     598think it will look like our Path Computation Engine (PCE), but will be more
     599like a Resource Computation Engine.
     600
     601We use NM-WG control plane schema. Domains, nodes, ports, links.
     602
     603Domain -- group of nodes.
     604
     605Nodes: end systems, switches.
     606
     607Ports: on each node.
     608
     609Links: this is where we describe the switching capabilities of a link (VLAN
     610ranges, etc)
     611
     612It's point-to-point only -- not broadcast. No support for multipoint VLANs.
     613
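The domain/node/port/link hierarchy above might be rendered as nested data like the following; all field names and the VLAN-range encoding are assumptions, and the point-to-point restriction shows up as exactly two endpoints per link:

```python
# Illustrative domain description in the spirit of the NM-WG schema.
domain = {
    "id": "example-domain",
    "nodes": {
        "switch1": {"ports": ["eth0", "eth1"]},   # a switch
        "host1":   {"ports": ["eth0"]},           # an end system
    },
    "links": [
        {   # point-to-point only: exactly two endpoints, no multipoint VLANs
            "endpoints": [("switch1", "eth1"), ("host1", "eth0")],
            # links carry the switching capability, e.g. allocatable VLAN range
            "switching": {"vlan_range": (100, 199)},
        },
    ],
}

def vlans_available(dom, a, b):
    """Return the VLAN range usable on the link joining ports a and b."""
    for link in dom["links"]:
        if set(link["endpoints"]) == {a, b}:
            return link["switching"]["vlan_range"]
    return None

assert vlans_available(domain, ("switch1", "eth1"), ("host1", "eth0")) == (100, 199)
```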
     614<< switched slide decks >>
     615
     616Assumption that at a domain boundary we only support VLANs. Restricted to
     617layer 2, not cross-layer allocation.
     618
     619(Chip) the GPO architecture would have a central clearinghouse, messages going up
     620and down; in this the messages go across.
     621
     622(OGF26 presentation NDL working group, Multilayer NDL presentation by Freek
     623Dijkstra -- great explanation of NDL.)
     624
     625This is GMPLS inspired, done with signaling via web services.
     626
     627(Yufeng Xin) Who issues cross boundary configurations?
     628
     629Once there's agreement over which VLAN we're terminating on, each aggregate
     630will do it.
     631
     632(Chip) how baked is this? Used 10 times a day?
     633
     634Hundreds of times a month. Solid. Pretty much always works.
     635
     636----------------------------------------------------------------
     637Break
     638----------------------------------------------------------------
     639Aaron
     640
     641What will each cluster be doing to reach the goal of cross-aggregate slices by
     642the end of spiral 1? What are the inter-project dependencies?
     643
     644(Larry Peterson) you're forcing us into realtime project negotiation here. I
     645think we ought to do as much as we can assuming IP as the interconnect. It
     646works for some, maybe not all -- !GpENI? SPP boxes?
     647
     648(John Turner) users can log into each one and allocate by hand. No more explicit
     649coordination is required to make it work.
     650
     651What's needed to get there from here?
     652
     653(James Sterbenz) From our perspective stitching together with IP works for
     654now, but long term for GENI to succeed need to support more.
     655
     656Is doing this by hand workable? Does this work for everyone?
     657
     658(Chris Tracy) We can provision nodes via DRAGON
     659
     660(John Turner) For both !GpENI and DRAGON we can terminate a connection that we
     661have VLANs defined on. There is a non-trivial amount of control software that
     662we need to write but we have other things to do first, like getting systems
     663deployed.
     664
     665Goal is constructing end-to-end slices, not having it done automagically.
     666
     667Guido, it sounds like you've got a little more work to do to connect the
     668Stanford campus.
     669
     670(Guido) I think IP is a good common denominator for connecting aggregates for
     671now. If we want to scale this to hundreds of aggregates this won't work.
     672
     673Proposal to Cluster B: draw a picture of this, show where things interconnect
     674and at what layer, where there are tunnels, where there are lower layer
     675connections.
     676
     677(John Turner) Did a version of this for GEC3.
     678
     679Yes, but we need this for the cluster. Nobody has put onto a single sheet of
     680paper all the things that need to be done to do this.
     681
     682(John Turner) We all connect to the outside world via Internet2.
     683
     684We have our own wave on I2. There are no routers on it. Want to draw a
     685distinction between our access-to-the-outside-world and the GENI backbone.
     686
     687Goal is to demonstrate end-to-end slices across a range of technologies.
     688
     689(Chris Tracy) Are you in DC yet?
     690
     691(Robert Ricci) We will be within a small number of weeks.
     692
     693Action: Chris Tracy will get into the Internet2 cage, and will pull a cable
     694between DRAGON and SPP (?).
     695
     696Internet2 has told us we may be able to get access to get people from DCN to
     697the GENI wave.
     698
     699(Rob Sherwood) What are the right interfaces? We are in LA, Houston, and NY,
     700are adding DC.
     701
     702(Chris Tracy) How is !GpENI going to connect to Internet2 DCN or the GENI wave?
     703
     704(James Sterbenz) to maxgigapop, our equipment is in the internet2 pop in
     705Kansas City.
     706
     707(Chip) How much will this cost?
     708
     709(James Sterbenz) Will take action item to find out how much this will cost.
     710
     711(Robert Ricci) You need to determine this quickly, we have a switch going in
     712in the next couple of weeks. We need to talk.
     713
     714Action: Rob, James, Heidi coordinate on Kansas City Internet2 connection
     715
     716Action: Aaron will send email to Rob and James to make sure they can contact
     717each other via email.
     718
     719MAX connects to NLR.
     720
     721Action: Guido, John Turner will write a one-page high level list of the
     722actions one needs to take to configure a slice.
     723
     724(Robert Ricci) We're already doing this, more or less. It's done with
     725tunnels. Once we get set up in Internet2 POPs. Plan outlined on his last slide
     726shows how to get VLANs from campus to campus.
     727
     728(Robert Ricci) Kentucky, CMU are already in. UML PEN shouldn't be too hard.
     729
     730The picture will be very helpful
     731
     732(Robert Ricci) It was on my poster at the last GEC.
     733
     734Action: Robert Ricci will pull together the picture.
     735
     736Cluster D, the impression that I've got is that you're all pretty integrated
     737with a common control framework. Do you understand how to connect UMass down
     738to BEN?
     739
     740(Ilya) Technically we know some of the problems are. GPO committed us to being
     741an NLR based cluster. UMass is working to get sone VLANs, but they are a
     742limited resource.  We are trying to figure out how to get to Charlotte
     743(Internet2 terminates there) from RTP.  Maybe MPLS or VLAN from RENCI BEN POP
     744to Internet2.
     745
     746Kansei is not an Internet2 campus.
     747
     748It's important to make sure that we don't overwhelm Internet2 with requests;
     749go through the GPO (Heidi).
     750
     751Let's get this picture so we can figure out where the gaps are.
     752
     753(Ilya) My main problem is that there will be costs associated with connecting
     754us to Internet2. We don't know how much.
     755
     756(Harry) Does it make sense to draw a picture of BEN, NLR, and MAX?
     757
     758(Ilya) I'd like to do this, let's talk about this.
     759
     760Action: Harry and Ilya will talk about this.
     761
     762Cluster E?
     763
     764(Ivan Seskar) Hey, we're done. We're on the same campus. Except for the air
     765gap; we need to get someone to pull a cable up six floors between where
     766Internet2 terminates and where we are.
     767
     768Cluster A?
     769
     770(Ted Faber) We're trying to do some relevant end-to-end work. Hook up to the
     771DCN, plugged into a DETERLab node on one end, ISI East on the other. Working
     772on the expanded authorization work we have talked about at the last couple of
     773GECs.