== !PlanetLab ''(Larry Peterson)'' ==

The main modules from SFA are the Slice Manager Interface, the
Registry Interface, and the O&M Interface. The first two will be
standardized; it's thought that the third will be defined
per-aggregate. The two interfaces are currently XML-RPC and are going
to support WSDL/SOAP.

The Slice Facility Interface (sfi) calls into the Slice Manager and
Registry. Behind the Slice Manager will be some number of aggregates;
right now !PlanetLab is the only aggregate. A skeletal version exists
for anyone who has a !PlanetLab-codebase aggregate. The aggregate
interface is the same as the Slice Manager interface, so if you're
running a MyPLC-based installation their code will run on top of
it. Collectively this is based on the slice-facility architecture.

Two versions: SFA-full and SFA-lite. SFA-lite is SFA-full minus
credentials, so you can build an aggregate that depends on the
front-end having authenticated the caller.
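
As a concrete illustration of the difference, here is a minimal
sketch in Python over XML-RPC (the transport mentioned above). The
endpoint URLs, credential file, and exact argument order are
assumptions for illustration, not the real SFA signatures:

{{{
#!python
# Illustrative only: CreateSlice is named in these notes, but the
# URLs, argument order, and credential handling are assumptions.
import xmlrpc.client

slice_name = "princeton_demo"
rspec = "<RSpec/>"   # opaque, aggregate-specific XML

# SFA-full: every call carries a credential the aggregate verifies.
full = xmlrpc.client.ServerProxy("https://sm.example.org:12346/")
credential = open("slice.cred").read()   # signed by the Registry
full.CreateSlice(slice_name, credential, rspec)

# SFA-lite: the same call minus the credential; the front end is
# trusted to have already authenticated the caller.
lite = xmlrpc.client.ServerProxy("https://am.example.org:12346/")
lite.CreateSlice(slice_name, rspec)
}}}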

Have sfi, Slice Manager, Registry, and Aggregate Manager; a WSDL
interface; and a lite version (from Enterprise GENI).

Minimal ops to implement in the Aggregate Manager interface for your
aggregate: !GetResources() returns RSpecs, !CreateSlice() allocates a
slice, !DeleteSlice(), !ListSlices(). Plus a minimal RSpec, which can
be aggregate-specific (basically an XSD, basically what you need as an
argument to !CreateSlice()).
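
A minimal sketch of that four-operation surface as a bare XML-RPC
service, SFA-lite style (no credential checking). The method names
come from these notes; the port, RSpec contents, and in-memory storage
are invented for illustration:

{{{
#!python
# Sketch of the minimal Aggregate Manager surface described above.
# The four method names come from the notes; everything else here
# (port, RSpec contents, storage) is an illustrative assumption.
from xmlrpc.server import SimpleXMLRPCServer

slices = {}   # slice name -> RSpec actually allocated

def GetResources():
    # Return the aggregate's available resources as an RSpec document
    # (in practice an aggregate-specific XML schema, i.e. an XSD).
    return "<RSpec type='example'><node id='node1'/></RSpec>"

def CreateSlice(name, rspec):
    # Allocate a slice matching the requested RSpec (SFA-lite form:
    # no credential argument; the front end authenticated the caller).
    slices[name] = rspec
    return True

def DeleteSlice(name):
    return slices.pop(name, None) is not None

def ListSlices():
    return sorted(slices)

server = SimpleXMLRPCServer(("0.0.0.0", 12346), allow_none=True)
for fn in (GetResources, CreateSlice, DeleteSlice, ListSlices):
    server.register_function(fn)
server.serve_forever()
}}}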

''John Hartman'' What's the deal on SFA-lite?

You trust the Slice Manager and Registry to do security.

The PLC API is supported by each component manager. The aggregate
(PLC) has a back door into the component, using a private interface.

The ticket call and tickets are implemented; you usually go through
!CreateSlice(), not !GetTicket().

''Chip Elliot'' How do you create topologies (e.g. for GpENI, MAX)?

Aggregates return a set of RSpecs, which are opaque to the Slice
Manager. The user picks from the set of RSpecs, and the Slice Manager
passes them back down. Larry's assumption is that the RSpecs will have
the information about how to create a topology. You'll go and edit the
XML file (either directly or via a tool) to put together the RSpec set
that you want.
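
A sketch of that pick-and-edit step done with a tool rather than by
hand. The element and attribute names are invented -- real RSpec
schemas are aggregate-specific -- but the shape of the operation is
the same: filter the advertised RSpec down to the set you want, then
pass the result to !CreateSlice():

{{{
#!python
# Filter an advertised RSpec down to the picked nodes; links between
# kept nodes describe the topology the aggregate should set up.
# Element/attribute names here are invented for illustration.
import xml.etree.ElementTree as ET

advertised = """<RSpec>
  <node id="plab1" site="princeton"/>
  <node id="plab2" site="washington"/>
  <node id="plab3" site="kansas"/>
  <link from="plab1" to="plab2" bandwidth="100Mbps"/>
  <link from="plab2" to="plab3" bandwidth="100Mbps"/>
</RSpec>"""

root = ET.fromstring(advertised)
wanted = {"plab1", "plab2"}

for node in list(root.findall("node")):
    if node.get("id") not in wanted:
        root.remove(node)
for link in list(root.findall("link")):
    # Drop links that touch a node we didn't keep.
    if not {link.get("from"), link.get("to")} <= wanted:
        root.remove(link)

request = ET.tostring(root, encoding="unicode")
print(request)   # what gets passed back down as the CreateSlice argument
}}}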

''Guido Appenzeller'' !FlowSpecs will need some opt-in description of
how this aggregate connects to the internet at large.

We don't have opt-in at the moment.

The set { sfi, Slice Manager, Registry } is the clearinghouse, from my
perspective. It's the portal that the user sees. There is no central
record of slices -- you can talk to an aggregate directly, so there is
no single point where you can find a list of all allocated slices.

''Aaron Falk'' Clearinghouse is supposed to have the definitive record
of who did what when; it's the place where researchers and resources
meet.

Keep in mind that aggregates can hand out resources independently, so
there ''is'' no definitive record of who did what (and who is doing
what).

As a first approximation we are in the business of debugging
aggregates and our code as aggregates plug in to the framework.

VINI is an ongoing MRI (?) project. It has developed a new kernel and
uses the !PlanetLab control framework. !PlanetLab will eventually
adopt the VINI kernel. The VINI kernel lets people own their own IP
stack; it will be available on !PlanetLab within the next three
months.

Outside the people in this room, only the Europeans (!PlanetLab
Europe) are using the GENIwrapper interface.

I don't understand which resources are "GENI resources". There are
resources provided by aggregates, and agreements between users and
aggregates -- and possibly peering agreements between organizations
controlling resources, etc. But GENI doesn't own anything itself.

''Aaron Falk'' Looking at your milestones, they are marked as
complete; looks good.

''Chip Elliot'' As we review milestones they may look archaic because
we drew them up a year ago.

''Aaron Falk'' Want layer 2 connections available to researchers
("Establish direct connectivity between project facilities and
Internet2").

''Heidi Picher Dempsey'' The intent was that you could specify
end-to-end VLANs on Internet2.

We'll provide end-to-end IP, based on last week's meeting.

''Chip Elliot'' Can you stitch the dataplane so you can run non-IP
traffic?

It may be a layer 2 connection, but it'll be tunneled over IP.

''Chip Elliot'' Two major goals for Spiral 1: control framework and
cross-aggregate slices, and end-to-end layer 2 VLANs.

''Aaron Falk'' Another topic Heidi and Aaron talked about: the desire
to keep the control plane for GENI on GENI resources. The control
plane will be over IP; if the control plane stops working, everything
falls over.

Access to the control plane is via IP. What resources sit underneath
depends on how you connect to it.

''Heidi Picher Dempsey'' We had hoped that all of the aggregates would
be one hop away from Internet2, and run everything over GENI.

This strikes me as an overambitious and possibly counterproductive
requirement.

''Guido Appenzeller'' We currently have less faith in our
infrastructure than in the internet. I'd rather run my control traffic
over the internet.

''Chip Elliot'' Do you envision Princeton being connected to Internet2
and/or NLR at layer 2?

It's out of our control, but it's possible.

''Chip Elliot'' GPO would prefer that everyone went direct.

The rest of the milestones seem reasonable.

== GUSH ''(Jeannie Albrecht)'' ==

A couple of students are working this summer. Starting to work with
GENIwrapper; have worked with the PLC interface for a while. Another
student is working on a GUI. Teaching distributed systems in the fall,
getting students to use it.

''Aaron Falk'' Do you have outside users?

Yes, and it's taking up a lot of time. GUSH is a giant C++ package at
the moment and takes some real work to build, so we provide statically
compiled binaries that may or may not work on a given platform. Have
PLUSH and GUSH users, wish I could get them working together. Looking
at other resource types to connect to -- sensors, wireless, DTN. We've
done visualization in the past; it'd be some work to pull it together.

''Larry Peterson'' VINI uses an RSpec for topology.

OK, GUSH can handle this.

''Aaron Falk'' Is there a visualization engine -- maybe from Indiana
-- for network topologies that GUSH can use?

''Peter O'Neil'' Tool called Atlas.

The Google Maps API is somewhat restrictive (complex?) and
ever-changing, so it's a pain to keep up with it. Right now the GUI is
a simple OpenGL app; you can connect to a GUSH instance and view it
remotely.

''Vic Thomas'' Jeannie's milestones and status look great.

== Sidebar ==

''Aaron Falk'' All projects need to cooperate with the GMOC and the
security project. They are doing data collection. It's helpful to
provide users a view into the health of the GENI system as a whole, so
gathering statistics and exporting some operational state is a good
thing.

== MAX/MANFRED ''(Peter O'Neil)'' ==

A lot of our work over the course of the year is to keep the GENI
effort in sync with our other efforts: I2 DCN, GLIF, clusters, etc.

''Chip Elliot'' The GPO's view is that we'd like to fit into that
bigger picture.

We had a milestone expectation of being able to do things at the
optical level -- the expectation that we'd be able to set up VLANs at
a wave level. The technology doesn't support this yet, we're too early
-- wavelength-selectable switches are just now becoming available and
are not yet really affordable, so we couldn't do it. John Jacob
understood; we're not going to get dinged for it, it's still on our
schedule, just pushed out. We have a number of !PlanetLab nodes
running on our metropolitan network (both owned by MAX and by others
in the area we serve). We have not done any outreach to the
organizations providing these nodes.

''Chip Elliot'' Now that you have some experience with it, does the
notion of an aggregate make sense?

Basically, yes.

''James Sterbenz'' Do you have something you consider a unified
!PlanetLab DCN aggregate?

Chris Tracy and Jarda have running code; you're probably interested in
getting at it.

''Chip Elliot'' A hypothesis from a year ago was that you could
instantiate virtual machines and provision guaranteed bandwidth
between them. You're close to this, right?

We are, soon.

''James Sterbenz'' Bandwidth guarantees are at a VLAN level? Our
switches don't support bandwidth caps, and we've already bought them;
we can't afford to buy fancier ones. We can do best effort, though.

''Jon Turner'' At each site we have some spare ports on the SPPs; it
makes sense to connect them to ProtoGENI, but they don't have any
spare ports. They already have access to the wave. If it takes adding
an extra module to those switches to increase the number of ports, can
we shake the money loose?

''Heidi Picher Dempsey'' We already did that once; if we do it again
we'll be over budget. And we've pushed back twice, we need to do this
soon or we won't get anything running by the end of the spiral.

Level3 is increasing prices; we have a limited number of
cross-connects and would have to go out and pay real money to get
more. There are monthly fees associated with each cross-connect.

''Heidi Picher Dempsey'' End-to-end milestones have priority.

''Chip Elliot'' Ideal would be GpENI, MAX, and SPP interoperating by
Oct 1.

''Chip Elliot'' Now that you've got !PlanetLab integrated with your
control framework, can we start doing this across the world? Say,
optical people in Europe?

We have people using the code base (not the GENI part) around the
world. We're working on getting more people using it, working on
documentation.

''Chip Elliot'' JGN2 and !PlanetLab Japan would be another good place
to pull things together at Spiral 2.

"GRMS" on the milestones is an anachronism; it isn't what we're doing.

We're actually good on the two milestones due, need to do the
documentation. We're ready to start integration.

''Aaron Falk'' Are you still OK with getting a service up and running
on 09/01?

Yes, I think so. And our !PlanetLab nodes will become public -- there
are four or five. They are not currently public; we're the only ones
who can use them (ISI East and MAX staff).

''Chip Elliot'' We believe the way it should work is that an aggregate
should ''affiliate'' with a clearinghouse.

''Larry Peterson'' ''Federated'' aggregates using the !PlanetLab
control framework are allowed to say no, you can't get a slice on my
node. (In this sense !PlanetLab is an aggregate that is affiliated
with the !PlanetLab control framework.)

== RAVEN ''(John Hartman)'' ==

We're interested in what is going on inside a slice, not in
controlling slices.

Working on GENIwrapper integration with the RAVEN and Stork tools.
Integrated the Stork repository with GENIwrapper -- if you want to
upload a software package to our repo you can do it through
GENIwrapper and use GENIwrapper authentication. (Don't have to log in
with your !PlanetLab login and password any more.) Have modified the
sfi package to make it possible to upload packages to the Stork
repository -- and then make them available to !PlanetLab machines.

''Chip Elliot'' What about packages that don't look like !PlanetLab
packages?

Can have different kinds of nodes, e.g. SPP nodes, and say this kind
of node needs that kind of software. You can create a group of nodes
that satisfies a !CoMon query.
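
A sketch of what deriving a node group from a !CoMon-style query might
look like. !CoMon's real query interface and column names aren't
described in these notes, so the CSV snapshot and field names below
are assumptions:

{{{
#!python
# Derive a node group from a CoMon-style query over per-node stats.
# The CSV source and column names are invented for illustration.
import csv, io

comon_csv = """name,loadavg,freedisk_gb,resptime_ms
planetlab1.example.edu,1.2,35,80
planetlab2.example.edu,9.7,2,450
planetlab3.example.edu,0.4,120,60
"""

def node_group(data, predicate):
    """Return the names of nodes whose stats satisfy the query."""
    return [row["name"] for row in csv.DictReader(io.StringIO(data))
            if predicate(row)]

# "Lightly loaded nodes with some disk left" as a query:
group = node_group(comon_csv,
                   lambda r: float(r["loadavg"]) < 5
                             and float(r["freedisk_gb"]) > 10)
print(group)   # -> ['planetlab1.example.edu', 'planetlab3.example.edu']
}}}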

''Larry Peterson'' Chip, you want groupings that are independent of
aggregate groupings. ''John Hartman'' has node groups, which seem to
do what you want.

On the slice management front, we're somewhat integrated with GUSH.
Haven't demoed it. We're using GUSH inside the Stork tool, in a
pub-sub system, so nodes that are part of a slice can see that there
is an update to their package and can reinstall it themselves.
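
A toy sketch of that update path: the repository publishes "package X
has a new version" and subscribed nodes reinstall. The real mechanism
runs inside GUSH/Stork; this only shows the shape of the interaction:

{{{
#!python
# Toy pub-sub update loop: publish a package update, subscribed nodes
# react by reinstalling. Names and structure are illustrative only.
from collections import defaultdict

subscribers = defaultdict(list)   # package name -> node callbacks

def subscribe(package, callback):
    subscribers[package].append(callback)

def publish_update(package, version):
    for callback in subscribers[package]:
        callback(package, version)

def make_node(name):
    def reinstall(package, version):
        print(f"{name}: reinstalling {package}-{version}")
    return reinstall

for n in ("node1", "node2"):
    subscribe("my-experiment-pkg", make_node(n))

publish_update("my-experiment-pkg", "1.1")   # both nodes reinstall
}}}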

''Chip Elliot'' You're running a valuable service there... are you
going to do it indefinitely?

We're trying to get out of the repository business and just deal with
a database of URLs. People publishing packages have to put the
packages up themselves, make them available, then publish the URL via
Stork.

Working on some other stuff. Can create groups based on !CoMon
queries, but the code is kind of dusty, needs to be brought up to
date. Rudimentary monitoring service to monitor what's going on inside
a slice. Basically wanted to monitor the health of Stork; generalizing
it and making it available.

''Larry Peterson'' Is there stuff you'd like to see in !CoMon that's
not there?

We should talk about that.

''Jeannie Albrecht'' We use !CoMon -- if you cleaned up the API we'd
take advantage of it.

Milestones look good.

''Larry Peterson'' !CoMon works by installing on a slice that spans
the machines it is monitoring.

''Larry Peterson'' Once we have this it'll be an environment that's
richer than Amazon EC2. Using the same tools we can upload images,
allocate slices, and run them.

== Enterprise GENI ''(Guido Appenzeller)'' ==

''Chip Elliot'' What does "integrate with switched VLAN in I2" mean?

We want to provide layer 2 to experimenters. Connect that in an
Internet2 pop with a VLAN -- outside of !OpenFlow. Demo for GEC6.

''Jon Turner'' Do your NetFPGAs have any spare ports? That would be
helpful.

If not, we'll figure out how to make some available.

Plan on integrating with the !PlanetLab "clearinghouse."

We've written our own Aggregate Manager; it speaks the lightweight
protocol defined at the Denver meeting. It automatically discovers
switches and network topology and reports them to the clearinghouse
via RSpec. It can virtualize the Stanford !OpenFlow network based on a
reservation RSpec received from the clearinghouse.
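
A sketch of the discover-and-report step: serialize a switch inventory
into an advertisement RSpec for the clearinghouse. The notes only say
topology is reported "via RSpec," so the element names here are
invented:

{{{
#!python
# Turn a discovered switch/link inventory into an advertisement RSpec.
# Element and attribute names are invented for illustration.
import xml.etree.ElementTree as ET

# Hypothetical inventory: switch id -> neighbors it links to.
switches = {"of-switch-1": ["of-switch-2"], "of-switch-2": []}

root = ET.Element("RSpec", type="openflow-ad")
for dpid, neighbors in switches.items():
    sw = ET.SubElement(root, "switch", id=dpid)
    for peer in neighbors:
        ET.SubElement(sw, "link", to=peer)

print(ET.tostring(root, encoding="unicode"))
}}}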

''Chip Elliot'' What is in your RSpecs? All we're looking for in
Spiral 1 and Spiral 2 is VLANs and tunnels.

The goal for this spiral is "here are your three options to connect,"
hard-coded.

''Chip Elliot'' In GEC6 we want to show experiments running -- not
just demos.

Having ''meaningful'' experiments by November across aggregates will
be difficult to do. Maybe ''demo'' experiments, not ''meaningful''
experiments.

''Heidi Picher Dempsey'' For Spiral 2 for this project we were
thinking about limited opt-in, one at Stanford and one at Princeton.

We're working on a mechanism where first we put an opt-in end user on
a VLAN, then when a user opts in we move them into the experiment's
VLAN. The !OpenFlow switches are installed, but production traffic is
not using the !OpenFlow ports (it uses the regular ports).

In the Gates building, 3A wing only: five switches (HP and NEC), 25
wireless APs, ~25 daily users. The 2nd phase is all of the Gates
building: 23 switches (HP !ProCurve 5400), hundreds of users. Phase 3,
2H2009, Packard and CIS buildings as well: number of switches TBD (HP
!ProCurve 5400), > 1000 users.

We have built our own toy clearinghouse to support the functions we
need, rather than keep asking Larry to add functions.

''Aaron Falk'' Is there a plan to get your needs covered by the
!PlanetLab design?

I'll get to that soon, hold on.

Expand the !OpenFlow substrate to 7-8 other campuses. Multiple vendors
(Cisco, HP, NEC, Toroki, Arista, Juniper) have agreed to participate.
The goal is to virtualize production infrastructure and allow
experiments on this infrastructure via GENI. Goal is >100 switches,
5000 ports.

The fundamental integration challenge for GENI is that we have very
different substrates. Types of nodes, switches, layer 1, 2, 3, 4,
... How do you define all of this in RSpecs? This affects the
clearinghouse; how does it apply policy? Detect conflicts? Present
options to users? Help users resolve conflicts? How do clearinghouses
manage this complexity? How do they keep up with rapid change?

Substrates will drive clearinghouse requirements. At this point we
couldn't define a stable RSpec, as we don't know what we need yet. So
maybe we should have individual clearinghouses.

''Larry Peterson'' Maybe what you mean is individual aggregates, not
clearinghouses.

''Chip Elliot'' Think of Amazon.com -- they don't know what they are
selling, they have prices and pictures and descriptions.

Amazon.com can't sell airline tickets; too many options. Reserving a
network slice is not commoditized, at least not yet. It'd be nice to
have an Amazon.com-like interface. You can't expect someone who writes
a clearinghouse to manage all of the complexity of all of the
substrates; the parties building the substrates should manage the
complexity.

''Larry Peterson'' But now users have 20 different UIs to deal with --
one for each substrate.

''Chip Elliot'' We all agree that it's not the role of the
clearinghouse to understand everything about everything.

''Larry Peterson'' It'd be great if sfi had a GUI in front of it such
that when a set of RSpecs came back they were presented as a list
sorted by aggregate, and you'd go through each aggregate picking what
you want.
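
A minimal sketch of that presentation: group the returned RSpecs by
aggregate and walk each group. The (aggregate, RSpec) pairing is an
assumption about what sfi would hand such a GUI:

{{{
#!python
# Group RSpecs returned through sfi by the aggregate they came from,
# so a user can pick per aggregate. Data shape is an assumption.
from itertools import groupby

returned = [("plc", "<RSpec>plab nodes</RSpec>"),
            ("max", "<RSpec>dcn circuits</RSpec>"),
            ("plc", "<RSpec>vini topology</RSpec>")]

for aggregate, items in groupby(sorted(returned), key=lambda t: t[0]):
    print(f"== {aggregate} ==")
    for _, rspec in items:
        print("  candidate:", rspec)
}}}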

''Chip Elliot'' We really want you to integrate into a control
framework. A year ago we didn't understand how difficult it would be;
now we do. We'd like you to integrate, even if it's a very thin
veneer.

''Heidi Picher Dempsey'' Milestones are marked not done; this may be a
difference of opinion. Maybe all we need is to close the loop, get
some documentation, make the code public, etc.

== GpENI ''(James Sterbenz)'' ==

Spent the first 9 months getting stuff up and running. Four nodes were
up and demoed by the last GEC; we were cheating with one of them
because its connectivity was broken at the time, but now it's running
correctly. Since GEC4 we've been working on VINI. Just last week
Katerina downloaded GENIwrapper and is starting to work with it. About
seven or eight !PlanetLab nodes. We are not affiliated with !PlanetLab
yet, but we would set up slices for people if they asked. We are
running our own PLC.

''Larry Peterson'' So when you affiliate, people who connect to the
Slice Manager will gain access to your nodes.

DCN has been ported to our Netgear switch; we're waiting for an
interface from Ciena, and then we'll put it online. It'll be
controlled by DRAGON eventually; we need to get a Ciena 4200, but that
switch isn't available yet. We have the equipment but not the software
to create dynamically established circuits, although we don't have the
ability to put in bandwidth limits. Currently configuring the year-2
switch, which will go to KU. The Ciena 4200 doesn't have DCN drivers
at this time.

''Aaron Falk'' This is a shared responsibility; you need to help the
!PlanetLab control framework mature.

Right, we understand that. We have always said that as soon as it
becomes available we'll grab it, and we've done that.

''Jeannie Albrecht'' We use GENIwrapper, it's working pretty well.

Haven't really been in touch with the GMOC folks yet.

How to do dynamic circuits?

''Larry Peterson'' Two approaches: either one Aggregate Manager
controls nodes and network together, or there is one Aggregate Manager
for each. Not sure you need to support both.

Planning to use the MAX code / model.

== SPP ''(Jon Turner)'' ==

The project goal is to acquire and deploy five SPP nodes in Internet2
POPs: three nodes in Salt Lake City, Kansas City, and Washington DC,
with Houston and Atlanta added later. Each node has 10 x 1GbE, a
network processor subsystem, two GPP engines (server blades) that run
the !PlanetLab environment, and a separate control processor with a
NetFPGA. Also user training and support, consulting, and development
of new code options. System software to support a GENI-compatible
control interface.

Deliverables: develop an initial version of component interface
software "matching the GENI framework" and demonstrate it on SPP nodes
in the WUSTL lab.

''Aaron Falk'' How are resources partitioned?

On a node you run in a vserver. If you want a fast path on the network
processor, you request the resources you want (bandwidth, ports,
queues, ...). The important thing to understand is that a reservation
isn't just "can I get this now" but also "can I get this tomorrow from
0200 to 0900?" We don't currently have a way to retract resources once
given, though. Every node has a standalone reservation system.
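
A sketch of the advance-reservation check that implies: a request
names an amount and a future time window, and is admitted only if the
overlapping commitments leave room. The resource names, capacities,
and dates are invented; the real per-node system is surely richer:

{{{
#!python
# Admit a reservation only if overlapping commitments leave capacity.
# Resource names, capacities, and dates are illustrative assumptions.
from datetime import datetime

CAPACITY = {"bandwidth_mbps": 1000, "queues": 64}
reservations = []   # (resource, amount, start, end)

def committed(resource, start, end):
    # Conservative: any reservation overlapping the window counts as
    # concurrent with the whole window.
    return sum(amt for res, amt, s, e in reservations
               if res == resource and s < end and start < e)

def reserve(resource, amount, start, end):
    if committed(resource, start, end) + amount > CAPACITY[resource]:
        return False   # would oversubscribe some overlapping window
    reservations.append((resource, amount, start, end))
    return True

# "Can I get 400 Mb/s tomorrow from 0200 to 0900?"
ok = reserve("bandwidth_mbps", 400,
             datetime(2009, 7, 22, 2, 0), datetime(2009, 7, 22, 9, 0))
print(ok)   # True -- the window is free, so the request is admitted
}}}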

''Peter O'Neil'' Wasn't there an issue six months ago about VLAN
translation?

Will come back to that. The original deliverable was to deploy two SPP
nodes; we're going to make it three. The plan is to make this initial
deployment available by the end of Spiral 1, so it should be available
to external researchers in the quarter after that. Architecture and
design doc to the GPO by month 9, SPP component manager interface
documentation to the GPO by month 12. Don't expect to have it all
implemented by the end of Spiral 1, probably not until the end of
Spiral 2.

The current version of the reservation system doesn't allow you to
reserve external port numbers, because we don't have a nice way to go
to a vserver running on a GPE and force it to close a socket.

''Larry Peterson'' Vserver reboot works...

''Chip Elliot'' So you need to know the code that will run on the
processor to make the reservation.

Flow monitoring, so you can tell which experiment is sending out
packets to a host on the internet. Version 2 of the Network Processor
Datapath Software is pushed back, toward the end of the year.

Y2: carryover from Y1. Continue development of the component interface
software (integrate fast path and inter-node link bandwidth
reservation into the GENI management framework, use RSpecs for making
multi-node reservations), documentation, tutorials, sample
applications using SPP fast paths, support.

If time permits: implement control software for the NetFPGAs,
terminate VLANs to MAX, GpENI, Stanford (?). Physical connection --
can we do it in time? What do the VLANs connect to? Static or dynamic?

SPP nodes will not set up VLANs dynamically.

More if time permits: cool demos at GECs (!OpenFlow in a slice),
demonstrating slice management with GUSH and RAVEN, eliminating IP
tunnels where L2 connections exist, transitioning management to the IU
group, expanding NP capabilities (Netronome, 40 cores on a low-cost
PCIx card).

Where else might we put SPP nodes other than Internet2 POPs?

''Peter O'Neil'' Maybe get some hosting space; we sometimes act as a
RON and provide access to elsewhere. This can be lower cost, but not
free. Campuses would be better; most RONs don't have space.

Can't get much cheaper than Internet2 (we're paying nothing), although
it takes effort to work with them.

''Heidi Picher Dempsey'' Hosting centers are much cheaper than
Internet2 space.

== GPO Spiral 2 Vision ==

Spiral 1 had two goals -- a control framework that controls a lot of
stuff, and creating end-to-end slices across aggregates. Spiral 2 has
only one big goal -- getting big, real research projects running
continuously. This is where the rubber hits the road: is anyone
interested? Can we really make it work?

''Chip Elliot'' ''Operations'' becomes a big part of this, getting
things up and running and keeping them running. This is a change from
what you're used to.

''Chip Elliot'' Spiral 2 is an opportunity to see which of the things
we've developed are of interest to people. It's easy to build
infrastructure that nobody uses; we want to see, of what we've built,
what people want to use.

''Aaron Falk'' Documentation, sample experiments, tutorials, a users'
workshop. Want to help get researchers using the infrastructure. So
here are some candidate ideas of what to include in your SoW for next
year, to help us reach Spiral 2 goals. We're going to be pressing
people toward Spiral 2 goals, prioritized based on the GENI goals. You
might be interested in doing some work that'll be good for your
aggregate or cluster, but priority will be given to tasks that further
the Spiral 2 goals.

''Chip Elliot'' Instrumentation and measurement: everyone has agreed
that this environment will be really well instrumented. But we have
very few efforts focused on this in Spiral 1.

''Larry Peterson'' I see instrumentation largely as a slice issue.

''Aaron Falk'' For many experiments you want the instrumentation to
have minimal impact. You also want some extra-slice instrumentation,
e.g. BER on a link that you're running over.

''Chip Elliot'' Is it a researcher's job to resynchronize all of the
clocks?

''Larry Peterson'' There are useful common services -- logging,
archiving, time synchronization, ... -- that we should provide. The
work of instrumenting is then a slice issue.

''Heidi Picher Dempsey'' After you've collected data, you want to be
able to share it, too.

''Larry Peterson'' !MeasurementLab has three pieces: tools, embargoing
data, platform.

''Aaron Falk'' Negotiate with your system engineer to work out
milestones.

''Chip Elliot'' We also want identity management systems that are not
tool-specific. We're advocating that we leverage other people's work
for this. Currently recommending Shibboleth and !InCommon for "single
sign-on."

''Larry Peterson'' My immediate reaction is that it's a non-trivial
programming effort.

''Guido Appenzeller'' From our point of view, we trust the
clearinghouse.

''Guido Appenzeller'' By centralizing services we simplify things,
right? It relieves you of a lot of the identity management work.

''Chip Elliot'' I agree with that argument, but there is also the
argument that there is benefit in being able to allocate and redeem
tickets.

''Aaron Falk'' If we go to a centralized trust model, will that
preclude outsiders from using these resources?

''Chip Elliot'' But there will always be pairwise trust as well.

''Chip Elliot'' Integration and interoperability. Integration has to
continue; it won't all be working by October 1. How many control
frameworks will there be when the dust settles? It's a big question
for Spiral 2.

''James Sterbenz'' Getting integrated with one control framework was
hard -- how am I going to interoperate with multiple?

''Larry Peterson'' ProtoGENI and !PlanetLab share history in the
SFA. Maybe we can bring them together, TIED as well.

''Chip Elliot'' My view is that by October 1 we'll have enough
experience with these control frameworks to determine what we can
do. We may determine that nobody wants to do it. Or maybe we can unify
things enough that not every aggregate will have to implement two or
three interfaces.