| 1 | See http://groups.geni.net/geni/wiki/NetworkRSpecMiniWorkshop for slides |
| 2 | |
| 3 | 9:15 intro slides (Aaron) |
| 4 | |
| 5 | High-level motivation and goals for spiral 1. See Aaron's slides. Demonstrate |
| 6 | end-to-end slices across representative samples of the major substrates and |
| 7 | technologies envisioned in GENI. Goal for each cluster is to demonstrate |
| 8 | end-to-end via your control framework. |
| 9 | |
| 10 | This is what the GPO is paying you to do, this is what we want demonstrated |
| 11 | at the end of spiral 1 |
| 12 | |
| 13 | (John Turner) What is an "end"? |
| 14 | |
| 15 | Loosely defined, but think from perspective of experimenter. |
| 16 | |
| 17 | (Larry Peterson) Key bar is two or more aggregates sharing a packet? |
| 18 | |
| 19 | No -- in terms of aggregates, not in terms of packets |
| 20 | |
| 21 | (James Sterbenz) Not single slice across ''all'' of them in year 1, pairwise is |
| 22 | sufficient |
| 23 | |
| 24 | Only pairwise would be a disappointment. Really want to show that it's |
| 25 | possible to use multiple aggregates, more than two. Minimal is two end nodes |
| 26 | and two aggregates, but that's really the absolute minimum. |
| 27 | |
| 28 | For each cluster, |
| 29 | |
| 30 | - How does a network device or aggregate reserve resources? |
| 31 | |
| 32 | - How do network slivers join to form an end-to-end slice? |
| 33 | |
| 34 | Nobody has articulated yet how they are going to do this. |
| 35 | |
| 36 | What are the plans to support this, in each cluster, in |
| 37 | spiral 1? |
| 38 | |
| 39 | (John Turner) When you talk about an end-to-end slice, do you have a prescription for |
| 40 | what the data path looks like? |
| 41 | |
| 42 | You're in cluster B; a candidate slice would include Internet2 VLANs |
| 43 | connecting SPP nodes, !GpENI, and Stanford. |
| 44 | |
| 45 | (John Turner) Sounds like Internet2 VLANs on an SPP somehow ending up at Stanford |
| 46 | |
| 47 | GENI will always have a hodge-podge of connectivity. For now we'll have a set |
| 48 | of preconfigured fixed VLANs in Internet2. |
| 49 | |
| 50 | (John Turner) VLANs in I2 are being provided by Rob Ricci, not I2. I don't |
| 51 | want to pick on Stanford per se, but it's not clear how we're going to make |
| 52 | any connection happen. |
| 53 | |
| 54 | (James Sterbenz) Also what sort of experiments will tie all of these things |
| 55 | together, given the disparate technologies |
| 56 | |
| 57 | (Larry Peterson) this is what will come out of discussions today |
| 58 | |
| 59 | (4) is raising two questions -- what capabilities will we have from backbones |
| 60 | in spiral 1 |
| 61 | |
| 62 | (Camilo Viecco) are any clusters also using NLR? |
| 63 | |
| 64 | (Guido) there is potentially more than one I2 backbone |
| 65 | |
| 66 | (Rob Ricci) the ones we're handing out today are between individual sites. I2 |
| 67 | is providing a 10Gb wavelength and we're putting ethernet switches on top of |
| 68 | that wave. We'll be making VLANs on that wave without any I2 involvement. |
| 69 | |
| 70 | (Ivan Seskar) also an issue when particular locations will be connected to the |
| 71 | wave |
| 72 | |
| 73 | There is a ''general'' problem here. Let's not get lost in the weeds. |
| 74 | |
| 75 | (Larry Peterson) Simple observation. As diverse as these technologies are, we |
| 76 | have IP addresses for all of them. We can fall back to using IP (tunnels) for |
| 77 | everything. |
| 78 | |
| 79 | End users can access GENI experimentation this way. But we have set as a goal |
| 80 | for GENI connectivity that is non-IP, not layered over IP. |
| 81 | |
| 82 | (Rob Ricci) Maybe we should do as with !GpENI -- get a fiber from the local I2 POP |
| 83 | to our campus. |
| 84 | |
| 85 | (John Turner) From my experience the I2 folks will push back, not make this easy |
| 86 | |
| 87 | Yes, we've got some things to work out here. However, to be concrete, those |
| 88 | who have direct connectivity in spiral 1 will be expected to demonstrate the |
| 89 | ability to stitch together VLANs. |
| 90 | |
| 91 | (Ivan Seskar) Can GPO be more involved with these discussions with I2 / NLR? |
| 92 | |
| 93 | Yes, we have full time staff who would be happy to help work things out with |
| 94 | I2, NLR, etc. |
| 95 | |
| 96 | (Ilya) can we organize and get some shared stimulus money to wire up campuses? |
| 97 | |
| 98 | (Chip) My understanding is that every campus is supposed to make a single |
| 99 | proposal to NSF. |
| 100 | |
| 101 | 9:35 Network Configuration use case slides (Aaron) |
| 102 | |
| 103 | Sliver creation. First makes reservations of stuff around the edge, but now |
| 104 | needs to interconnect aggregates. (Assumption is that physical connectivity |
| 105 | between these aggregates exists.) Then researcher passes rspec requesting |
| 106 | VLANs between aggregates, then asks for the topology to be set up. |
| 107 | |
| 108 | - do we need a standard method to describe these network coordinates, or are |
| 109 | they just blobs? |
| 110 | |
| 111 | - does it go into the rspec? |
| 112 | |
| 113 | - are there now constraints on the order in which networks can be added to a |
| 114 | slice? |
| 115 | |
| 116 | - how does it work with multiple networks in a series? |
| 117 | |
| 118 | - how are ordering constraints handled in the control framework? |
| 119 | |
| 120 | - how will tunnels work? |
| 121 | |
| 122 | This is the discussion for this afternoon. How we describe resources is |
| 123 | this morning's topic. |
| 124 | |
| 125 | (John Turner) let's put this in as concrete terms as possible, I have a very difficult |
| 126 | time connecting your abstract diagrams with my cluster or any other cluster. |
| 127 | |
| 128 | We need to figure out what people need to do to support this by the fall. |
| 129 | This is an engineering, not research, discussion. Different answers from |
| 130 | different clusters is OK, different answers from single cluster smells |
| 131 | funny; you need to explain how it'll work. Throwing code over the transom |
| 132 | probably won't cut it. Needs to be collective cluster ownership of this |
| 133 | goal. Entire cluster is going to be evaluated on getting this to work. |
| 134 | |
| 135 | 9:45 Enterprise GENI view of the world, (Rob Sherwood) |
| 136 | |
| 137 | !OpenFlow overview |
| 138 | |
| 139 | !FlowVisor mostly feature-complete, publicly released. |
| 140 | |
| 141 | Aggregate manager: resource discovery, reports to CH as rspec, accepts |
| 142 | reservations, converts rspec to flowvisor config. |
| 143 | |
| 144 | Clearinghouse: implemented toy version for testing. |
| 145 | |
| 146 | E-GENI rspec: switches, interfaces, "flowspace", opt-in, inter-aggregate |
| 147 | connectivity |
| 148 | |
| 149 | (Chip) have you thought about measurement yet? |
| 150 | |
| 151 | Built into openflow -- byte and packet counters. With a controller you can |
| 152 | redirect flows through a measurement box. |
| 153 | |
| 154 | (Guido) We haven't thought it through in full detail, but you get a fair amount |
| 155 | of control from !OpenFlow, can look deep into a packet. |
| 156 | |
| 157 | We don't really have nodes in a traditional sense; we have a datapath ID (i.e. |
| 158 | the MAC addr of the switch) and a list of interfaces. We don't "log into" a switch. |
| 159 | |
| 160 | (Guido) as soon as you reserve a switch, the switch connects back to the URL of |
| 161 | your controller and the switch starts asking your controller for |
| 162 | instructions. |
| 163 | |
| 164 | (Camilo Viecco) Do you have one user at a time, or multiple users? |
| 165 | |
| 166 | You have one user at a time. Default rule is if we don't have a rule for a |
| 167 | packet, a message gets sent to the controller. |
| 168 | |
| 169 | (Guido) If you connect to something that is not part of your aggregate, it's |
| 170 | represented differently. This describes internal references. |
| 171 | |
| 172 | Can think of !FlowSpace as header, "field=value" pairs plus actions. Packet |
| 173 | classifier built-in. Header fields (ip_src, ip_dst, ethertype, etc.). |
| 174 | Actions are allow, deny, listen-only. |
| 175 | |
| 176 | Ex: all web traffic, except to main server: |
| 177 | |
| 178 | ip_src = 1.2.3.4 tcp_dport=80 :: DENY |
| 179 | |
| 180 | ip_src=1.2.3.0/24 tcp_dport=80 :: ALLOW |
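The two rules above read as a first-match classifier. A minimal sketch of that behavior (the field names, rule layout, and "no rule" handling here are assumptions for illustration, not the actual !FlowVisor syntax):

```python
# Illustrative first-match classifier for the flowspace rules above.
# Field names and rule layout are assumptions for this sketch, not
# the actual FlowVisor rule format.
from ipaddress import ip_address, ip_network

RULES = [
    # (constraints, action) -- checked in order, first match wins
    ({"ip_src": "1.2.3.4/32", "tcp_dport": 80}, "DENY"),
    ({"ip_src": "1.2.3.0/24", "tcp_dport": 80}, "ALLOW"),
]

def classify(packet):
    """Return the action of the first matching rule, or None."""
    for constraints, action in RULES:
        match = True
        for field_name, want in constraints.items():
            have = packet.get(field_name)
            if have is None:
                match = False
            elif field_name == "ip_src":
                match = ip_address(have) in ip_network(want)
            else:
                match = (have == want)
            if not match:
                break
        if match:
            return action
    return None  # no rule: in OpenFlow the packet would go to the controller

print(classify({"ip_src": "1.2.3.4", "tcp_dport": 80}))   # DENY (main server)
print(classify({"ip_src": "1.2.3.7", "tcp_dport": 80}))   # ALLOW (rest of /24)
```

This matches the transcript's default: traffic no rule covers is punted to the controller rather than silently dropped.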
| 181 | |
| 182 | (Guido) can say "ipv6 goes to this controller, ipv4 goes to that controller." |
| 183 | |
| 184 | Rspec - opt-in. How do we express which users experimenters want to allow in? |
| 185 | "All", "first 10", "only port 80 on switch 3". |
| 186 | |
| 187 | How do we do this between slivers? |
| 188 | |
| 189 | Use case: "give me our planetlab nodes and the E-GENI network that connects |
| 190 | them." Need to know how to communicate that off of this switch, off of this |
| 191 | node, is a point of attachment. |
| 192 | |
| 193 | (Aaron Falk) if you've got multiple slivers on a single planetlab node, how do you |
| 194 | assign them to an egeni node? what does planetlab demultiplex on? |
| 195 | |
| 196 | (Larry Peterson) tcp ports. we've been lazy in how you lock down ports, you claim a port on |
| 197 | a wiki. |
| 198 | |
| 199 | (Aaron Falk) There is a bootstrapping problem with planetlab and E-GENI. We need to |
| 200 | figure this out. |
| 201 | |
| 202 | (Chip) do you have both openflow and planetlab nodes in the same room? |
| 203 | |
| 204 | I do not. |
| 205 | |
| 206 | (Ivan Seskar), (Nick Feamster) have both planetlab nodes and openflow |
| 207 | switches, but they are not connected |
| 208 | |
| 209 | (Larry Peterson) we could have a global allocation of ports, tunnel numbers, |
| 210 | etc., if we just have a global list. |
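Larry's "global list" amounts to a registry that hands out non-overlapping demux keys instead of claiming a port on a wiki. A hypothetical sketch (the key range and API are invented for illustration, not an existing GENI service):

```python
# Sketch of a global demux-key registry: hands out non-overlapping
# keys (TCP ports, tunnel numbers, ...) to slices. The range and the
# claim/release API are invented for illustration.
class KeyRegistry:
    def __init__(self, low, high):
        self.free = set(range(low, high + 1))
        self.owner = {}                # key -> slice that holds it

    def claim(self, slice_id):
        """Allocate the lowest free key to slice_id."""
        if not self.free:
            raise RuntimeError("key space exhausted")
        key = min(self.free)
        self.free.remove(key)
        self.owner[key] = slice_id
        return key

    def release(self, key):
        """Return a key to the free pool."""
        self.owner.pop(key, None)
        self.free.add(key)

ports = KeyRegistry(33000, 33999)      # assumed port range
p = ports.claim("slice:demo")
print(p)                               # 33000 -- lowest free key
```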
| 211 | |
| 212 | (Guido) we want a dynamic mapping to slices |
| 213 | |
| 214 | (Ted Faber) If you're going to define slices in the rspec, you have to use globally |
| 215 | understood parts of the flowspec; internally to the aggregate, switches may |
| 216 | need to be topology aware |
| 217 | |
| 218 | 10:30 Robert Ricci: Where We Are |
| 219 | |
| 220 | Working prototype rspec |
| 221 | |
| 222 | Supports nodes, interfaces, links. |
| 223 | |
| 224 | Used to allocate slivers -- raw PCs, vms, vlans, tunnels. Expressed in XML. |
| 225 | Tunnels are cross-aggregate. Slice Embedding Service that understands it. |
| 226 | |
| 227 | Under development: extensions using NVDL, cross-aggregate RSpecs. |
| 228 | |
| 229 | We view the lifecycle of an rspec as progressive |
| 230 | annotation. User creates ''request'' (bound or unbound), passes to a Slice |
| 231 | Embedding Service, which annotates it with physical resources selected, maybe |
| 232 | more than one. |
| 233 | |
| 234 | Gives to CM, CM signs (generates ticket), '''Manifest''' returned by CM, adds |
| 235 | details like access method, MACs, etc. |
| 236 | |
| 237 | Four types, similar but not identical. |
| 238 | |
| 239 | Advertisement, catalog (published by component manager) |
| 240 | |
| 241 | Request, constructed by user (purchase order) |
| 242 | |
| 243 | Ticket, receipt (signed, type of credential) |
| 244 | |
| 245 | Manifest, packing slip (returned by !CreateSliver()) |
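The four documents can be seen as one structure picking up fields as it moves through the lifecycle. A rough sketch of that progressive annotation, with invented field names rather than the real ProtoGENI schema:

```python
# Sketch of the four RSpec roles as progressive annotation of one
# document. Field names here are illustrative, not the real schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RSpec:
    nodes: dict                                   # virtual_id -> requested properties
    bindings: dict = field(default_factory=dict)  # virtual_id -> component_id
    signature: Optional[str] = None               # set once the CM issues a ticket
    manifest: dict = field(default_factory=dict)  # access details, MACs, ...

# Advertisement: the component manager's published catalog.
advertisement = {"pc42": {"type": "raw-pc"}, "pc43": {"type": "raw-pc"}}

# Request: the user's "purchase order", initially unbound.
rspec = RSpec(nodes={"nodeA": {"type": "raw-pc"}})

# A slice embedding service binds the virtual ID to a physical component.
rspec.bindings["nodeA"] = "pc42"

# Ticket: the CM signs the bound request (the "receipt").
rspec.signature = "signed-by-CM"      # stand-in for a real credential

# Manifest: the CM's "packing slip" adds concrete access details.
rspec.manifest["nodeA"] = {"access": "ssh pc42.example.net"}
```

Nothing is ever removed along the way, only added, which matches the "always adding information" point below.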
| 246 | |
| 247 | Model we have now is that an individual component manager will accept or |
| 248 | reject your request. This needs to be expanded for the case where it can only |
| 249 | handle part of your request (e.g. 99 out of 100 requested resources). |
| 250 | |
| 251 | We could make it more complicated, not sure what the right thing to do is. |
| 252 | |
| 253 | Discussion of how to do what is essentially the travel agent problem. |
| 254 | |
| 255 | [John Duerig] |
| 256 | |
| 257 | Looking at the rspec as a mapping between the requested sliver and the |
| 258 | physical resources. |
| 259 | |
| 260 | (Aaron Falk) What does Nick need to do with the BGP mux to use this? |
| 261 | |
| 262 | We're always adding information, never removing it. Advertisements |
| 263 | have component IDs, requests have virtual IDs; a ''bound'' request has both, |
| 264 | creating a mapping. Identifiers are URNs (GMOC proposal). |
| 265 | |
| 266 | A sliver uniquely identified by (slice ID, virtual ID, CM ID) |
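That uniqueness rule amounts to keying a table on the triple. A small sketch (the URN spellings below are made up for illustration, not the GMOC proposal):

```python
# Sketch: a sliver is uniquely keyed by (slice ID, virtual ID, CM ID).
# The URN spellings below are invented, not the GMOC proposal.
slivers = {}

def register_sliver(slice_id, virtual_id, cm_id, details):
    """Record a sliver; the triple must be globally unique."""
    key = (slice_id, virtual_id, cm_id)
    if key in slivers:
        raise ValueError("duplicate sliver: %r" % (key,))
    slivers[key] = details
    return key

k = register_sliver("urn:example:slice:demo1", "nodeA",
                    "urn:example:cm:utah", {"state": "allocated"})
```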
| 267 | |
| 268 | (Aaron Falk) If what I'm advertising is a collection of stuff, what do I advertise? |
| 269 | |
| 270 | If you don't want to show me the details of your network, that's not our |
| 271 | design center. |
| 272 | |
| 273 | (Aaron Falk) but I2 won't run an AM, won't identify all the optical switches |
| 274 | along the path |
| 275 | |
| 276 | we'll advertise "here's an enet switch, here's another enet switch", and |
| 277 | won't say anything about the topology beneath it, since it's dynamic and out |
| 278 | of our control. |
| 279 | |
| 280 | If you care whether you go across shared trunk links, etc., you can ask for |
| 281 | that. The slice embedding service can do this to minimize cross-trunk |
| 282 | latency, etc. To connect to Rob's talk, if openflow gives me an identifier |
| 283 | that we need to pass back, it goes into the Manifest, any virtual identifier |
| 284 | is ''my'' identifier (well, it has to be a URN). |
| 285 | |
| 286 | Coordination problems: both ends may need to share information, e.g. tunnel |
| 287 | endpoints. Ordering/timing may be important. Negotiation may be necessary |
| 288 | (e.g. session key establishment). Some are transitive problems (e.g. VLAN |
| 289 | #s, unless translation is possible), Assumption is that cross-aggregate |
| 290 | |
| 291 | (Nick Feamster) VINI has rspec to create tunnels between virtual nodes, but |
| 292 | needs one to connect VINI to the mux, which neither VINI nor !ProtoGENI has |
| 293 | |
| 294 | (Nick Feamster) is there one rspec that's going to say "I need a virtual node |
| 295 | that is a tunnel to this mux"? |
| 296 | |
| 297 | (laughter) |
| 298 | |
| 299 | this is all typed, types are well-known device classes (e.g. openflow |
| 300 | enabled ethernet) |
| 301 | |
| 302 | this grows out of stuff we do for Emulab. We have a node type "!PlanetLab", |
| 303 | its links are type "ipv4". |
| 304 | |
| 305 | (Guido) You're assuming these connections are always layer 2? What if it's |
| 306 | something else? |
| 307 | |
| 308 | (Ted Faber) type might be "runs a routing protocol" |
| 309 | |
| 310 | We describe at the lowest level - e.g. ethernet, not ipv4, or tcp. You need |
| 311 | to know you can run ipv4 over ethernet. |
| 312 | |
| 313 | Links can cross aggregate boundaries -- nodes may not. |
| 314 | |
| 315 | (Ivan Seskar) Said "node cannot cross aggregate" -- but this is common in wireless, |
| 316 | e.g. wifi and wimax. |
| 317 | |
| 318 | (Aaron Falk) Ah, the ''node'' will be in one aggregate, will have two different |
| 319 | kinds of links out (via different carriers, etc.) |
| 320 | |
| 321 | Disconnect -- some people think in terms of nodes, some think in terms of |
| 322 | links. |
| 323 | |
| 324 | Coordination across aggregates: design space |
| 325 | |
| 326 | a. client negotiates with each CM, rspec is the medium. |
| 327 | |
| 328 | b. CMs coordinate among themselves, using a ''new'' standardized control plane |
| 329 | API. Rspec ''could'' be the medium. |
| 330 | |
| 331 | c. Untrusted intermediary negotiates for client, intermediary has no privs |
| 332 | that client doesn't have. Rspec ''could'' be the medium. |
| 333 | |
| 334 | d. Trusted intermediary negotiates for client, pre-established trust between |
| 335 | intermediary and CMs. Rspec ''could'' be the medium. |
| 336 | |
| 337 | (Aaron Falk) Would Nick's dynamic tunnel server be an example of (b)? |
| 338 | |
| 339 | Yes |
| 340 | |
| 341 | Doing (a) and (b), going for hybrid of (b) and (d). Plan, not done yet. CMs |
| 342 | negotiate two-party arrangements directly, e.g. tunnels. Trusted intermediary |
| 343 | negotiates multi-party (VLANs: trusted authority picks VLAN number, client is |
| 344 | oblivious, only CMs talk to intermediary, negotiation information held by CM). |
| 345 | |
| 346 | (Aaron Falk) is this consistent with DRAGON approach? |
| 347 | |
| 348 | (John Tracy) yes, generally -- I've got info in my slides. |
| 349 | |
| 350 | ---------------------------------------------------------------- |
| 351 | break |
| 352 | ---------------------------------------------------------------- |
| 353 | |
| 354 | Larry Peterson: Resource Specifications and End-to-End Slices |
| 355 | |
| 356 | This is kind of high level. |
| 357 | |
| 358 | I'm going to argue we have a bunch of nodes. It might be the case that some of |
| 359 | these nodes are special -- e.g., underneath them they have a layer 2 technology they |
| 360 | want to take advantage of. |
| 361 | |
| 362 | Some of these nodes are going to be part of other aggregates that have special |
| 363 | capabilities, e.g. !OpenFlow aggregate nodes, VINI aggregate nodes. |
| 364 | |
| 365 | (Christopher Small GPO) so a node is a member of an aggregate for each kind of |
| 366 | network it is on? |
| 367 | |
| 368 | A node is controlled by only one aggregate at a time. |
| 369 | |
| 370 | My definition of a node is something I can dump code into. "Clouds" export an |
| 371 | aggregate manager interface. I can say "set up a circuit between node A and |
| 372 | node B". |
| 373 | |
| 374 | VINI is a cloud of nodes. Enterprise GENI is a cloud. |
| 375 | |
| 376 | The reason I want to look at the world this way, is that a bunch of nodes |
| 377 | already have a functioning interconnect, the internet. The assumption that I |
| 378 | need across two aggregate boundaries is that I have shared some demux key |
| 379 | across the boundary, so there's a global allocation of these keys. |
| 380 | |
| 381 | The world is a ''whole'' lot simpler if everyone is reachable via a shared id |
| 382 | space -- it can be ip addrs or something else. |
| 383 | |
| 384 | We already assumed that everything was reachable via some mechanism in the |
| 385 | control plane. I think we should do the same thing for the data plane to make |
| 386 | this all work more easily. |
| 387 | |
| 388 | (Aaron Falk) You're assuming that there is one of these things between each pair -- what |
| 389 | if you have to go across three links? |
| 390 | |
| 391 | It's complicating life a lot (for the researcher) to have to deal with all |
| 392 | pairwise layer 2 possibilities. Getting guarantees about latency, |
| 393 | failure, and bandwidth, independent of what layers of encapsulation I use, is |
| 394 | the key to what the researcher wants to do. |
| 395 | |
| 396 | I'm not removing the capability of working with different kinds of links, just |
| 397 | abstracting it away. |
| 398 | |
| 399 | (Ted Faber) Can you view this without having IP tunneling? |
| 400 | |
| 401 | I view connecting via something lower than IP as enough more difficult |
| 402 | that it's an optimization. I can run GRE tunneling over layer 2 as easily as |
| 403 | at IP. |
| 404 | |
| 405 | I'm questioning the value to the research community to connect at layer 2. |
| 406 | Jennifer (who is interested in IP networks) is happy with this. |
| 407 | |
| 408 | We have built VINI and we have built !PlanetLab. Nobody's coming to VINI to use |
| 409 | layer 2 circuits. They use VINI for the guarantees, not layer 2-level hacking. |
| 410 | |
| 411 | (Rob Ricci) I have a theory that people aren't using VINI for this because they're |
| 412 | using Emulab. A significant minority do experiments ''below'' IP. Playing around |
| 413 | with ethernet bridging, alternatives to IP, ... Still a minority, but we have |
| 414 | them. |
| 415 | |
| 416 | There are a couple of reasons people aren't using VINI in large numbers, but |
| 417 | there are an awful lot who are most interested in predictable link behavior. |
| 418 | |
| 419 | (Ivan Seskar) Most of the ORBIT experimenters don't care about IP at all. But that's the |
| 420 | edge. |
| 421 | |
| 422 | That's a good point. I'd view ORBIT as another cloud (another aggregate). |
| 423 | |
| 424 | (Ted Faber) If you try to tunnel a MAC over IP, well, it doesn't work very well. Once |
| 425 | you go up to layer 3, you've disrupted layer 2 sufficiently that you can't |
| 426 | necessarily run the experiments you want to run. |
| 427 | |
| 428 | As a consequence of this, there may be aggregate-specific experiments. |
| 429 | (Strong implication that this is not the common case.) |
| 430 | |
| 431 | Two separable issues: interface negotiation (what kind of resources are |
| 432 | available between aggregates), resource negotiation (which resources can I |
| 433 | have) |
| 434 | |
| 435 | Have WSDL version of the interface (program-heavy), also XSD version (data heavy) |
| 436 | |
| 437 | Backing off of pushing massive nested rspec on you. |
| 438 | |
| 439 | Adopted simple model: |
| 440 | |
| 441 | RSpec = !GetResources() |
| 442 | |
| 443 | returns list of all resources available |
| 444 | |
| 445 | !SetResources(RSpec) |
| 446 | |
| 447 | acquire all resources |
| 448 | |
| 449 | Only way today is |
| 450 | |
| 451 | while (true) |
| 452 | if !SetResources(RSpec) |
| 453 | break |
| 454 | |
| 455 | Doesn't necessarily terminate. |
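A client that doesn't want to spin forever can bound the loop. A sketch under the assumption that !SetResources() simply returns success or failure; the fake aggregate manager here is invented purely to exercise the loop:

```python
import time

class FakeCM:
    """Stand-in aggregate manager that says yes on the third attempt.

    The SetResources name follows the interface above; the behavior
    is invented for illustration.
    """
    def __init__(self):
        self.calls = 0

    def SetResources(self, rspec):
        self.calls += 1
        return self.calls >= 3

def acquire(cm, rspec, max_tries=10, delay=0.0):
    # Bounded version of the "while true" polling loop above: gives up
    # after max_tries, so the client terminates even if the aggregate
    # never says yes.
    for _ in range(max_tries):
        if cm.SetResources(rspec):
            return True
        time.sleep(delay)
    return False

print(acquire(FakeCM(), "<rspec/>"))   # True (succeeds on the third attempt)
```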
| 456 | |
| 457 | Aggregate returns capacity (what it will say yes to in XSD) and policy (how to |
| 458 | interpret the capacity in XSLT). P(Request, Capacity) -> True means request |
| 459 | will be honored. P(Request, Capacity) -> False means request will not be |
| 460 | honored. |
| 461 | Examples: |
| 462 | |
| 463 | VINI today |
| 464 | P(R, C) -> true if R and C are the same graph |
| 465 | |
| 466 | VINI tomorrow |
| 467 | P(R, C) -> true if R is a subset of C |
| 468 | |
| 469 | !PlanetLab today |
| 470 | P(R, C) -> true if R is a subset of C and site sliver count OK |
| 471 | |
| 472 | (Nick Feamster) Is there a notion of time in an rspec? |
| 473 | |
| 474 | Yes |
| 475 | |
| 476 | Discussion of using Python vs XSLT for this. |
| 477 | |
| 478 | (Aaron Falk) We're off track, you've gotten us off into rspec reconciliation. |
| 479 | |
| 480 | ---------------------------------------------------------------- |
| 481 | |
| 482 | Lunch |
| 483 | |
| 484 | ---------------------------------------------------------------- |
| 485 | |
| 486 | Ilya: experimenting with ontologies for multi-layer network slicing |
| 487 | |
| 488 | Need a way to describe what we have (substrate), what we want (request), what |
| 489 | we are given (slice spec). Need to map resources, configure resources, and |
| 490 | know what to measure. |
| 491 | |
| 492 | Problem is that we have many organically grown solutions that kind of |
| 493 | work. Need a functional model utilizing formalized techniques that fully |
| 494 | describe the context of an experiment. |
| 495 | |
| 496 | Multi-layered networks, not a single graph, embedding of graphs of |
| 497 | higher-level networks into graphs of lower-level networks. |
| 498 | |
| 499 | We aren't the first to face this: network markup language working group |
| 500 | (NML-WG). Participants include I2, ESnet (!PerfSONAR model), Dante/GN2, |
| 501 | University of Amsterdam (NDL). |
| 502 | |
| 503 | NDL. Based on OWL/RDF, in use within GLIF. Can be used for RDF frameworks. |
| 504 | SPARQL supported. Based on G.805 (Generic function arch of transport |
| 505 | networks). |
| 506 | |
| 507 | Needs to be a computer-readable network description. Human-readable is good, but |
| 508 | computer-readable is critical. Describe state of multi-layer network. |
| 509 | |
| 510 | What else do we need? Ability to describe requests (fuzzy), ability to |
| 511 | describe specifications (precise). |
| 512 | |
| 513 | Looked at some other options, this one seemed like the best option. It's a |
| 514 | large search space. |
| 515 | |
| 516 | NDL-OWL extends NDL into OWL. Richer semantics. BEN RDF describes BEN |
| 517 | substrate. Developed a number of modules to assist in using it. |
| 518 | |
| 519 | Have forked from original University of Amsterdam NDL; OWL has evolved, wanted |
| 520 | to use better technology, better tools. |
| 521 | |
| 522 | Goals -- more description languages, measurement, cloud, wireless, etc. |
| 523 | |
| 524 | (Ted Faber) My concern is that it seems very detailed. More detail than we need? |
| 525 | |
| 526 | (Ivan Seskar) Example: give me a linear topology of nodes |
| 527 | |
| 528 | (Aaron Falk) Assume there are tools that translate high-level descr into this. |
| 529 | |
| 530 | I don't think people will point and click their way to this. |
| 531 | |
| 532 | (Chip) GLIF could be federated into GENI and vice versa |
| 533 | |
| 534 | We're working on it. |
| 535 | |
| 536 | ---------------------------------------------------------------- |
| 537 | 1:15pm |
| 538 | ---------------------------------------------------------------- |
| 539 | |
| 540 | MAX/DRAGON view, Chris Tracy |
| 541 | |
| 542 | SOAP-based GENI aggregate manager. |
| 543 | |
| 544 | End-to-end slices |
| 545 | |
| 546 | Over last few months have built aggregate manager in Java, runs in Tomcat as |
| 547 | an Apache service, uses WSDL (web services API). |
| 548 | |
| 549 | (Larry Peterson) We have a SOAP interface now, too, should be able to |
| 550 | interoperate. |
| 551 | |
| 552 | On the back side talks to DRAGON-specific controller via SOAP. Or can go to a |
| 553 | !PlanetLab controller. |
| 554 | |
| 555 | (Chip) is !OpenFlow currently using same or different SOAP interface? |
| 556 | |
| 557 | (Larry Peterson) It's a subset, we need to have the discussion and get them in sync |
| 558 | |
| 559 | We've mostly tried to stick with what was in the slice facility architecture |
| 560 | document. Been thinking of standing up a clearinghouse, but haven't done it |
| 561 | yet. |
| 562 | |
| 563 | We're using this to control any component at MAX, planetlab nodes, DRAGON, |
| 564 | Eucalyptus, PASTA wireless, !NetFPGA-based !OpenFlow switches. |
| 565 | |
| 566 | Putting !NetFPGA cards in a machine, putting them out on the net somewhere. |
| 567 | |
| 568 | We want this aggregate manager to be able to manage anything on the net. |
| 569 | |
| 570 | (Aaron Falk) This aggregate manager box isn't just a bunch of functions; it's doing some |
| 571 | work to make sure things are allocated in a coherent manner. |
| 572 | |
| 573 | You can go to this AM and run "list capabilities" or "get nodes" and pass in a |
| 574 | filter spec (give me all the nodes that can do both dragon and planetlab). |
| 575 | Returns a controller URL so you can go talk to the controller for more info. |
| 576 | |
| 577 | Code is published on the website, instances will be site-specific (aggregate |
| 578 | specific). |
| 579 | |
| 580 | Wrote WSDL file by hand based on SFA. wsdl2java generated Java skeleton code. |
| 581 | |
| 582 | (http://geni.dragon.maxgigapop.net:8080/axis2/services/AggregateGENI?wsdl demo |
| 583 | using a generic SOAP Client) |
| 584 | |
| 585 | (Chip) Nick, is this the AM you should be using? |
| 586 | |
| 587 | (Rob Ricci) in our case we haven't described our interface as a WSDL |
| 588 | |
| 589 | (Rob Sherwood) there are a lot of WSDL tools you can use |
| 590 | |
| 591 | The code for this (svn repo) is pointed to in the slides (p. 11?) |
| 592 | |
| 593 | We think the clearinghouse will handle ticketing. (Open issue of which things |
| 594 | are in the AM, which are in the CH.) |
| 595 | |
| 596 | We believe end-to-end slices will look like what we're already doing for |
| 597 | interdomain circuit reservations for DRAGON, Internet2 DCN, ESnet, etc. We |
| 598 | think it will look like our Path Computation Engine (PCE), but will be more |
| 599 | like a Resource Computation Engine. |
| 600 | |
| 601 | We use NM-WG control plane schema. Domains, nodes, ports, links. |
| 602 | |
| 603 | Domain -- group of nodes. |
| 604 | |
| 605 | Nodes: end systems, switches. |
| 606 | |
| 607 | Ports: on each node. |
| 608 | |
| 609 | Links: this is where we describe the switching capabilities of a link (VLAN |
| 610 | ranges, etc) |
| 611 | |
| 612 | It's point-to-point only -- not broadcast. No support for multipoint VLANs. |
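The domain/node/port/link hierarchy above might be sketched as plain data like this (names and field layout are illustrative, not the actual NM-WG schema):

```python
# Sketch of the NM-WG-style hierarchy above: a domain groups nodes,
# nodes have ports, and each (strictly point-to-point) link records
# switching capabilities such as the usable VLAN range. Names and
# structure are invented for illustration, not the NM-WG schema.
topology = {
    "domain": "dragon.example.net",
    "nodes": {
        "switch1": {"ports": ["eth0", "eth1"]},
        "host1":   {"ports": ["eth0"]},
    },
    "links": [
        {   # exactly two endpoints -- no multipoint VLAN support
            "endpoints": [("switch1", "eth1"), ("host1", "eth0")],
            "vlan_range": (3000, 3099),   # VLANs available on this link
        },
    ],
}

def vlans_on(link):
    """VLAN tags available on a link, from its advertised range."""
    lo, hi = link["vlan_range"]
    return range(lo, hi + 1)

print(3050 in vlans_on(topology["links"][0]))   # True
```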
| 613 | |
| 614 | << switched slide decks >> |
| 615 | |
| 616 | Assumption that at a domain boundary we only support VLANs. Restricted to |
| 617 | layer 2, not cross-layer allocation. |
| 618 | |
| 619 | (Chip) the GPO architecture would have a central clearinghouse, messages going up |
| 620 | and down; in this the messages go across. |
| 621 | |
| 622 | (OGF26 presentation NDL working group, Multilayer NDL presentation by Freek |
| 623 | Dijkstra -- great explanation of NDL.) |
| 624 | |
| 625 | This is GMPLS inspired, done with signaling via web services. |
| 626 | |
| 627 | (Yufeng Xin) Who issues cross boundary configurations? |
| 628 | |
| 629 | Once there's agreement over which VLAN we're terminating on, each aggregate |
| 630 | will do it. |
| 631 | |
| 632 | (Chip) how baked is this? Used 10 times a day? |
| 633 | |
| 634 | Hundreds of times a month. Solid. Pretty much always works. |
| 635 | |
| 636 | ---------------------------------------------------------------- |
| 637 | Break |
| 638 | ---------------------------------------------------------------- |
| 639 | Aaron |
| 640 | |
| 641 | What will each cluster be doing to reach the goal of cross-aggregate slices by |
| 642 | the end of spiral 1? What are the inter-project dependencies? |
| 643 | |
| 644 | (Larry Peterson) you're forcing us into realtime project negotiation here. I |
| 645 | think we ought to do as much as we can assuming IP as the interconnect. It |
| 646 | works for some, maybe not all -- !GpENI? SPP boxes? |
| 647 | |
| 648 | (John Turner) users can log into each aggregate and allocate by hand. No more explicit |
| 649 | coordination is required to make it work. |
| 650 | |
| 651 | What's needed to get there from here? |
| 652 | |
| 653 | (James Sterbenz) From our perspective stitching together with IP works for |
| 654 | now, but long term for GENI to succeed need to support more. |
| 655 | |
| 656 | Is doing this by hand workable? Does this work for everyone? |
| 657 | |
| 658 | (Chris Tracy) We can provision nodes via DRAGON |
| 659 | |
| 660 | (John Turner) For both !GpENI and DRAGON we can terminate a connection that we |
| 661 | have VLANs defined on. There is a non-trivial amount of control software that |
| 662 | we need to write but we have other things to do first, like getting systems |
| 663 | deployed. |
| 664 | |
| 665 | Goal is constructing end-to-end slices, not having it done automagically. |
| 666 | |
| 667 | Guido, it sounds like you've got a little more work to do to connect the |
| 668 | Stanford campus. |
| 669 | |
| 670 | (Guido) I think IP is a good common denominator for connecting aggregates for |
| 671 | now. If we want to scale this to hundreds of aggregates this won't work. |
| 672 | |
| 673 | Proposal to Cluster B: draw a picture of this, show where things interconnect |
| 674 | and at what layer, where there are tunnels, where there are lower layer |
| 675 | connections. |
| 676 | |
| 677 | (John Turner) Did a version of this for GEC3. |
| 678 | |
| 679 | Yes, but we need this for the cluster. Nobody has put onto a single sheet of |
| 680 | paper all the things that need to be done to do this. |
| 681 | |
| 682 | (John Turner) We all connect to the outside world via Internet2. |
| 683 | |
| 684 | We have our own wave on I2. There are no routers on it. Want to draw a |
| 685 | distinction between our access-to-the-outside-world and the GENI backbone. |
| 686 | |
| 687 | Goal is to demonstrate end-to-end slices across a range of technologies. |
| 688 | |
| 689 | (Chris Tracy) Are you in DC yet? |
| 690 | |
| 691 | (Robert Ricci) We will be there within a small number of weeks. |
| 692 | |
| 693 | Action: Chris Tracy will get into the Internet2 cage, and will pull a cable |
| 694 | between DRAGON and SPP (?). |
| 695 | |
| 696 | Internet2 has told us we may be able to get access to get people from DCN to |
| 697 | the GENI wave. |
| 698 | |
| 699 | (Rob Sherwood) What are the right interfaces? We are in LA, Houston, and NY, |
| 700 | are adding DC. |
| 701 | |
| 702 | (Chris Tracy) How is !GpENI going to connect to Internet2 DCN or the GENI wave? |
| 703 | |
| 704 | (James Sterbenz) to maxgigapop, our equipment is in the internet2 pop in |
| 705 | Kansas City. |
| 706 | |
| 707 | (Chip) How much will this cost? |
| 708 | |
| 709 | (James Sterbenz) Will take action item to find out how much this will cost. |
| 710 | |
| 711 | (Robert Ricci) You need to determine this quickly, we have a switch going in |
| 712 | in the next couple of weeks. We need to talk. |
| 713 | |
| 714 | Action: Rob, James, Heidi coordinate on Kansas City Internet2 connection |
| 715 | |
| 716 | Action: Aaron will send email to Rob and James to make sure they can contact |
| 717 | each other via email. |
| 718 | |
| 719 | MAX connects to NLR. |
| 720 | |
| 721 | Action: Guido, John Turner will write a one-page high level list of the |
| 722 | actions one needs to take to configure a slice. |
| 723 | |
| 724 | (Robert Ricci) We're already doing this, more or less. It's done with |
| 725 | tunnels. Once we get set up in Internet2 POPs. Plan outlined on his last slide |
| 726 | shows how to get VLANs from campus to campus. |
| 727 | |
| 728 | (Robert Ricci) Kentucky, CMU are already in. UML PEN shouldn't be too hard. |
| 729 | |
| 730 | The picture will be very helpful |
| 731 | |
| 732 | (Robert Ricci) It was on my poster at the last GEC. |
| 733 | |
| 734 | Action: Robert Ricci will pull together the picture. |
| 735 | |
| 736 | Cluster D, the impression that I've got is that you're all pretty integrated |
| 737 | with a common control framework. Do you understand how to connect UMass down |
| 738 | to BEN? |
| 739 | |
| 740 | (Ilya) Technically we know what some of the problems are. GPO committed us to |
| 741 | being an NLR-based cluster. UMass is working to get some VLANs, but they are a |
| 742 | limited resource. We are trying to figure out how to get to Charlotte |
| 743 | (Internet2 terminates there) from RTP. Maybe MPLS or VLAN from RENCI BEN POP |
| 744 | to Internet2. |
| 745 | |
| 746 | Kansei is not an Internet2 campus. |
| 747 | |
| 748 | It's important to make sure that we don't overwhelm Internet2 with requests; |
| 749 | go through the GPO (Heidi). |
| 750 | |
| 751 | Let's get this picture so we can figure out where the gaps are. |
| 752 | |
| 753 | (Ilya) My main problem is that there will be costs associated with connecting |
| 754 | us to Internet2. We don't know how much. |
| 755 | |
| 756 | (Harry) Does it make sense to draw a picture of BEN, NLR, and MAX? |
| 757 | |
| 758 | (Ilya) I'd like to do this, let's talk about this. |
| 759 | |
| 760 | Action: Harry and Ilya will talk about this. |
| 761 | |
| 762 | Cluster E? |
| 763 | |
| 764 | (Ivan Seskar) Hey, we're done. We're on the same campus. Except for the air |
| 765 | gap; we need to get someone to pull a cable up six floors from where Internet2 |
| 766 | terminates and where we are. |
| 767 | |
| 768 | Cluster A? |
| 769 | |
| 770 | (Ted Faber) We're trying to do some relevant end-to-end work. Hook up to the |
| 771 | DCN, plugged into a DETERLab node on one end, ISI East on the other. Working |
| 772 | on the expanded authorization work we have talked about at the last couple of |
| 773 | GECs. |