= Cluster B Project Review Notes (June 30, 2009) =

''Notes by Christopher Small.''

== !PlanetLab ''(Larry Peterson)'' ==

The main modules from SFA are the Slice Manager interface, the Registry
interface, and the O&M interface. The first two will be standardized;
it's thought that the third will be defined per-aggregate. Those two
interfaces are currently XML-RPC and are going to support WSDL/SOAP.

The Slice Facility Interface (sfi) calls into the Slice Manager and the
Registry. Behind the Slice Manager will be some number of aggregates;
right now !PlanetLab is the only aggregate. There is a skeletal version
for anyone who has a !PlanetLab-codebase aggregate. The Aggregate
Manager interface is the same as the Slice Manager interface. If you're
running a MyPLC-based installation, their code will run on top of
it. Collectively this is based on the slice-facility architecture.
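
As a rough illustration of how a client drives these interfaces (a
sketch, not the actual sfi tool): the host name is made up, the ports
are assumed SFA defaults, and the Registry call shown (!Resolve()) is
an assumption; !ListSlices() is among the operations listed below.

{{{
#!python
# Hypothetical client-side calls against the Slice Manager and
# Registry; sfi wraps calls of roughly this shape over XML-RPC.
import xmlrpclib

slice_mgr = xmlrpclib.ServerProxy("https://plc.example.org:12347/")
registry = xmlrpclib.ServerProxy("https://plc.example.org:12345/")

cred = open("user.cred").read()   # credential obtained out of band
print slice_mgr.ListSlices(cred)  # slices known to this Slice Manager
print registry.Resolve(cred, "plc.princeton.myslice")  # look up a record
}}}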

Two versions: SFA-full and SFA-lite.  SFA-lite is SFA-full minus
credentials, so you can build an aggregate that depends on the
front-end having authenticated the caller.

Have SFI, Slice Manager, Registry, Aggregate Manager. WSDL interface,
and a lite version (from Enterprise GENI).

Minimal ops to implement for the Aggregate Manager interface for your
aggregate: !GetResources() returns RSpecs, !CreateSlice() allocates a
slice, !DeleteSlice(), !ListSlices(). Plus a minimal RSpec, which can be
aggregate-specific (basically an XSD; basically what you need as an
argument to !CreateSlice()).
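
To make the shape of that interface concrete, here is a minimal sketch
of an aggregate exposing just those four operations over XML-RPC. It
assumes Python's standard !SimpleXMLRPCServer; the port, the credential
handling, and the storage are placeholders, not the geniwrapper
implementation.

{{{
#!python
# Hypothetical skeleton of the minimal Aggregate Manager interface
# described above.
from SimpleXMLRPCServer import SimpleXMLRPCServer

SLICES = {}  # slice name -> RSpec it was created with

def GetResources(credential=None):
    # Return an advertisement RSpec. In the SFA-lite case the
    # credential is unused: the front end authenticated the caller.
    return open("advertisement.rspec.xml").read()

def CreateSlice(credential, slice_name, rspec):
    SLICES[slice_name] = rspec  # allocate the slice
    return True

def DeleteSlice(credential, slice_name):
    return SLICES.pop(slice_name, None) is not None

def ListSlices(credential=None):
    return SLICES.keys()

server = SimpleXMLRPCServer(("0.0.0.0", 12346))  # port is an assumption
for fn in (GetResources, CreateSlice, DeleteSlice, ListSlices):
    server.register_function(fn)
server.serve_forever()
}}}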

''John Hartman'' What's the deal on SFA lite?

You trust the Slice Manager and Registry to do security.

The PLC API is supported by each component manager.  The aggregate
(PLC) has a back door into the component, using a private interface.

The ticket call and tickets are implemented; you usually go through
!CreateSlice(), not !GetTicket().

''Chip Elliot'' How do you create topologies (e.g. for GpENI, MAX)?

Aggregates return a set of RSpecs, which are opaque to the Slice
Manager. The user picks from the set of RSpecs, and the Slice Manager
passes them back down. Larry's assumption is that the RSpecs will have
the information about how to create a topology. You'll go and edit the
XML file (either directly or via a tool) to put together the RSpec set
that you want.
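
For example, picking from a returned RSpec set and handing a trimmed
request back down might look like the sketch below, using Python's
standard !ElementTree and a purely illustrative node layout (the real
per-aggregate schemas were still being defined):

{{{
#!python
# Illustrative only: filter an advertisement RSpec down to the nodes
# you want; the result becomes the argument to CreateSlice().
import xml.etree.ElementTree as ET

ad = ET.parse("advertisement.rspec.xml").getroot()
wanted = set(["planetlab1.example.edu", "planetlab2.example.edu"])

for node in list(ad.findall("node")):
    if node.get("hostname") not in wanted:
        ad.remove(node)  # drop nodes we don't want in the slice

request_rspec = ET.tostring(ad)  # pass back down via the Slice Manager
}}}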

''Guido Appenzeller'' !FlowSpecs will need some opt-in description of
how this aggregate connects to the internet at large.

We don't have opt-in at the moment.

The set { sfi, Slice Manager, Registry } is the clearinghouse, from
my perspective.  It's the portal that the user sees. There is no
central record of slices -- you can talk to an aggregate directly, so
there is no single point where you can find a list of all allocated
slices.

''Aaron Falk'' The clearinghouse is supposed to have the definitive
record of who did what when; it's the place where researchers and
resources meet.

Keep in mind that aggregates can hand out resources independently, so
there ''is'' no definitive record of who did what (and who is doing
what).

As a first approximation we are in the business of debugging aggregates
and our code as aggregates plug in to the framework.

VINI is an ongoing MRI (?) project. Has developed a new kernel, uses
the !PlanetLab control framework. !PlanetLab will eventually adopt the
VINI kernel. The VINI kernel lets people own their own IP stack,
available on !PlanetLab within the next three months.

Outside the people in this room, only the Europeans (!PlanetLab Europe)
are using the geniwrapper interface.

I don't understand which resources are "GENI resources". There are
resources provided by aggregates, and agreements between users and
aggregates -- and possibly peering agreements between organizations
controlling resources, etc. But GENI doesn't own anything itself.

''Aaron Falk'' Looking at your milestones, they are marked as
complete, looks good.

''Chip Elliot'' As we review milestones they may look archaic because
we drew them up a year ago.

''Aaron Falk'' Want layer 2 connections available to researchers
("Establish direct connectivity between project facilities and
Internet2").

''Heidi Picher Dempsey'' Intent was you could specify end-to-end VLANs
on Internet2.

We'll provide end-to-end IP, based on last week's meeting.

''Chip Elliot'' Can you stitch the dataplane so you can run non-IP
traffic?

It may be a layer 2 connection, but it'll be tunneled over IP.

''Chip Elliot'' Two major goals for spiral 1: control framework and
cross-aggregate slices, and end-to-end layer 2 VLANs.

''Aaron Falk'' Another topic Heidi and Aaron talked about; desire to
keep the control plane for GENI on GENI resources. The control plane
will be over IP; if the control plane stops working, everything falls
over.

Access to the control plane is via IP. For the resources underneath,
it depends on how you connect to them.

''Heidi Picher Dempsey'' We had hoped that all of the aggregates would
be one hop away from Internet2, and run everything over GENI.

This strikes me as an overambitious and possibly counterproductive
requirement.

''Guido Appenzeller'' We currently have less faith in our
infrastructure than in the internet.  I'd rather run my control
traffic over the internet.

''Chip Elliot'' Do you envision Princeton being connected to Internet2
and/or NLR at layer 2?

It's out of our control, but it's possible.

''Chip Elliot'' GPO would prefer that everyone went direct.

The rest of the milestones seem reasonable.

== GUSH ''(Jeannie Albrecht)'' ==

A couple of students are working this summer. Starting to work with
GENIwrapper, have worked with the PLC interface for a while. Another
student is working on a GUI.  Teaching distributed systems in the fall,
getting students to use it.

''Aaron Falk'' Do you have users outside?

Yes, and it's taking up a lot of time. GUSH is a giant C++ package at
the moment, takes some real work to build it, so we provide statically
compiled binaries that may or may not work on a given platform. Have
PLUSH and GUSH users, wish I could get them working together. Looking
at other resource types to connect to -- sensors, wireless, DTN. We've
done visualization in the past, it'd be some work to pull it together.

''Larry Peterson'' VINI uses an rspec for topology.

OK, GUSH can handle this.

''Aaron Falk'' Is there a visualization engine -- maybe from Indiana
-- for network topologies that GUSH can use?

''Peter O'Neil'' Tool called Atlas.

The Google Maps API is somewhat restrictive (complex?) and
ever-changing, so it's a pain to keep up with it. Right now the GUI is
a simple OpenGL app; you can connect to GUSH and view remotely.

''Vic Thomas'' Jeannie's milestones and status look great.

== Sidebar ==

''Aaron Falk'' All projects need to cooperate with the GMOC and the
security project. They are doing data collection. It's helpful to
provide users with a view into the health of the GENI system as a
whole, so gathering statistics and exporting some operational state is
a good thing.

== MAX/MANFRED ''(Peter O'Neil)'' ==

A lot of our work over the course of the year is to keep the GENI
effort in sync with our other efforts: I2 DCN, GLIF, clusters, etc.

''Chip Elliot'' The GPO's view is that we'd like to fit into that
bigger picture.

We had as a milestone the expectation of being able to do things at an
optical level -- the expectation that we'd be able to set up VLANs at a
wave level. The technology doesn't support this yet; we're too early.
Wavelength-selectable switches are just now becoming available and are
not yet really affordable, so we couldn't do it. John Jacob understood;
we're not going to get dinged for it, it's still on our schedule, just
pushed out. We have a number of !PlanetLab nodes running on our
metropolitan network (both owned by MAX and by others in the area we
serve). We have not done any outreach to the organizations providing
these nodes.

''Chip Elliot'' Now that you have some experience with it, does the
notion of an aggregate make sense?

Basically, yes.

''James Sterbenz'' Do you have something you consider a unified
!PlanetLab DCN aggregate?

Chris Tracy and Jarda have running code, you're probably interested in
getting at it.

''Chip Elliot'' A hypothesis from a year ago was that you could
instantiate virtual machines and provision guaranteed bandwidth
between them. You're close to this, right?

We are, soon.

''James Sterbenz'' Bandwidth guarantees are at a VLAN level? Our
switches don't support bandwidth caps, and we've already bought them,
can't afford to buy fancier ones. We can do best effort, though.

''Jon Turner'' At each site we have some spare ports on the SPPs; it
makes sense to connect them to ProtoGENI, but they don't have any
spare ports. They already have access to the wave. If it takes adding
an extra module to those switches to increase the number of ports, can
we shake the money loose?

''Heidi Picher Dempsey'' We already did that once; if we do it again
we'll be over budget. And we've pushed back twice; we need to do this
soon or we won't get anything running by the end of the spiral.

Level3 is increasing prices; we have a limited number of
cross-connects without having to go out and pay real money to get
more. There are monthly fees associated with each cross-connect.

''Heidi Picher Dempsey'' End-to-end milestones have priority.

''Chip Elliot'' Ideal would be GpENI, MAX, and SPP interoperating by
Oct 1.

''Chip Elliot'' Now that you've got !PlanetLab integrated with your
control framework, can we start doing this across the world? Say,
optical people in Europe?

We have people using the code base (not the GENI part) around the
world.  We're working on getting more people using it, working on
documentation.

''Chip Elliot'' JGN2 and !PlanetLab Japan would be another good place
to pull things together at Spiral 2.

"GRMS" on the milestones is an anachronism; it isn't what we're doing.

We're actually good on the two milestones due, need to do the
documentation.  We're ready to start integration.

''Aaron Falk'' Are you still OK with getting a service up and running
on 09/01?

Yes, I think so. And our !PlanetLab nodes will become public -- there
are four or five. They are not currently public, we're the only ones
who can use them (ISI East and MAX staff).

''Chip Elliot'' We believe the way it should work is that an aggregate
should ''affiliate'' with a clearinghouse.

''Larry Peterson'' ''Federated'' aggregates using the !PlanetLab
control framework are allowed to say no, you can't get a slice on my
node. (In this sense !PlanetLab is an aggregate that is affiliated with
the !PlanetLab control framework.)

== RAVEN ''(John Hartman)'' ==

We're interested in what is going on inside a slice, not in
controlling slices.

Working on GENIwrapper integration for the RAVEN and Stork tools. We
integrated the Stork repository with GENIwrapper -- if you want to
upload a software package to our repo you can do it through GENIwrapper
and use GENIwrapper authentication.  (You don't have to log in with
your !PlanetLab login and password any more.) We have modified the SFI
package to make it possible to upload packages to the Stork repository
-- and then make them available to !PlanetLab machines.

''Chip Elliot'' What about packages that don't look like !PlanetLab
packages?

You can have different kinds of nodes, e.g. SPP nodes, and say that
this kind of node needs that kind of software. You can create a group
of nodes that satisfies a !CoMon query.
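
As an illustration of that last idea (a sketch, not Stork's code): the
query below selects responsive, lightly loaded nodes from !CoMon's
tabulator and turns them into a node group. The URL parameters follow
!CoMon's query format as best we recall; treat the details as
assumptions.

{{{
#!python
# Illustrative: build a node group from a CoMon query. The select
# expression and column layout are assumptions about CoMon's CSV
# output, and "group" here is just a Python list.
import urllib

QUERY = ("http://comon.cs.princeton.edu/status/tabulator.cgi"
         "?table=table_nodeviewshort&format=formatcsv"
         "&select='resptime > 0 && loadavg < 5'")

rows = urllib.urlopen(QUERY).read().splitlines()
group = [line.split(",")[0] for line in rows[1:]]  # first column: hostname
print "%d nodes satisfy the query" % len(group)
}}}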

''Larry Peterson'' Chip, you want groupings that are independent of
aggregate groupings. ''John Hartman'' has node groups, which seem to
do what you want.

On the slice management front, we're somewhat integrated with
GUSH. Haven't demoed it. We're using GUSH inside the Stork tool, in a
pub-sub system, so nodes that are part of a slice can see that there
is an update to their package and can reinstall themselves.
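
A minimal sketch of the pub-sub shape this describes (everything here
is illustrative; Stork's real transport and package tooling differ):

{{{
#!python
# Toy pub-sub: the repository publishes "package has a new version"
# and each subscribed node reacts by reinstalling. Channel names and
# the event layout are made up for illustration.
subscribers = {}  # channel -> list of callbacks

def subscribe(channel, callback):
    subscribers.setdefault(channel, []).append(callback)

def publish(channel, event):
    for callback in subscribers.get(channel, []):
        callback(event)

def reinstall(event):
    # on a real node this would fetch and install the new package
    print "reinstalling from", event["package_url"]

subscribe("stork/myslice/mypackage", reinstall)
publish("stork/myslice/mypackage",
        {"package_url": "http://repo.example.org/mypackage-2.0.rpm"})
}}}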

''Chip Elliot'' You're running a valuable service there... are you
going to do it indefinitely?

We're trying to get out of the repository business, just deal with a
database of URLs. People publishing packages have to put the packages
up themselves, make them available, then publish the URL via Stork.

Working on some other stuff. Can create groups based on !CoMon queries,
but the code is kind of dusty, needs to be brought up to date.
Rudimentary monitoring service to monitor what's going on inside a
slice. Basically wanted to monitor the health of Stork; generalizing
it and making it available.

''Larry Peterson'' Is there stuff you'd like to see in !CoMon that's
not there?

We should talk about that.

''Jeannie Albrecht'' We use !CoMon -- if you cleaned up the API we'd
take advantage of it.

Milestones look good.

''Larry Peterson'' !CoMon works by installing on a slice that spans the
machines it is monitoring.

''Larry Peterson'' Once we have this it'll be an environment that's
richer than Amazon EC2.  Using the same tools we can upload images,
allocate slices, and run them.

== Enterprise GENI ''(Guido Appenzeller)'' ==

''Chip Elliot'' What does "integrate with switched VLAN in I2" mean?

We want to provide layer 2 to experimenters.  Connect that in an
Internet2 POP with a VLAN -- outside of !OpenFlow. Demo for GEC6.

''Jon Turner'' Do your NetFPGAs have any spare ports? That would be
helpful.

If not, we'll figure out how to make some available.

Plan on integrating with the !PlanetLab "clearinghouse."

We've written our own Aggregate Manager; it speaks the lightweight
protocol defined at the Denver meeting. It automatically discovers
switches and network topology and reports them to the clearinghouse
via RSpec. It can virtualize the Stanford !OpenFlow network based on a
reservation rspec received from the clearinghouse.
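
A rough sketch of that last step, with an invented reservation layout
and switch-programming call (the real system's !OpenFlow-specific
machinery is not modeled here):

{{{
#!python
# Illustrative only: turn a reservation "rspec" into per-switch
# slicing rules. The reservation dict and program_switch() are
# hypothetical stand-ins.
reservation = {
    "slice": "demo-slice",
    "vlan": 3001,
    "switches": ["of-switch-1", "of-switch-2"],
}

def program_switch(switch, slice_name, vlan):
    # a real implementation would install rules giving slice_name
    # control of traffic tagged with this VLAN on this switch
    print "switch %s: vlan %d -> %s" % (switch, vlan, slice_name)

for sw in reservation["switches"]:
    program_switch(sw, reservation["slice"], reservation["vlan"])
}}}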

''Chip Elliot'' What is in your RSpecs? All we're looking for in
spiral 1 and spiral 2 is VLANs and tunnels.

The goal for this spiral is "here are your three options to connect",
hard-coded.

''Chip Elliot'' In GEC6 we want to show experiments running -- not
just demos.

Having ''meaningful'' experiments by November across aggregates will
be difficult to do. Maybe ''demo'' experiments, not ''meaningful''
experiments.

''Heidi Picher Dempsey'' For Spiral 2, for this project we were
thinking about limited opt-in: one at Stanford, one at Princeton.

We're working on a mechanism where first we put an opt-in end user on
a VLAN, then when a user opts in we move them into the experiment's
VLAN. The !OpenFlow switches are installed, but production traffic is
not using the !OpenFlow ports (it is using the regular ports).

The deployment phases:
 * Phase 1: Gates building, 3A wing only; five switches (HP and NEC), 25 wireless APs, ~25 daily users.
 * Phase 2: all of the Gates building; 23 switches (HP !ProCurve 5400), hundreds of users.
 * Phase 3 (2H2009): Packard and CIS buildings as well; number of switches TBD (HP !ProCurve 5400), >1000 users.

We have built our own toy clearinghouse to support the functions we
need, rather than keep asking Larry to add functions.

''Aaron Falk'' Is there a plan to get your needs covered by the
!PlanetLab design?

I'll get to that soon, hold on.

Expand !OpenFlow substrate to 7-8 other campuses. Multiple vendors
(Cisco, HP, NEC, Toroki, Arista, Juniper) have agreed to
participate. Goal is to virtualize production infrastructure and allow
experiments on this infrastructure via GENI. Goal is >100 switches,
5000 ports.

Fundamental integration challenge for GENI is that we have very
different substrates.  Types of nodes, switches, layer 1, 2, 3, 4,
... How do you define all of this in RSpecs? This affects the
clearinghouse; how does it apply policy?  Detect conflicts? Present
options to users? Help users resolve conflicts? How do clearinghouses
manage this complexity? How do they keep up with rapid change?

Substrates will drive clearinghouse requirements. At this point we
couldn't define a stable RSpec, as we don't know what we need yet. So
maybe we should have individual clearinghouses.

''Larry Peterson'' Maybe what you mean is individual aggregates, not
clearinghouses.

''Chip Elliot'' Think of Amazon.com -- they don't know what they are
selling, they have prices and pictures and descriptions.

Amazon.com can't sell airline tickets, too many options. Reserving a
network slice is not commoditized, at least not yet. It'd be nice to
have an Amazon.com-like interface. You can't expect someone who writes
a clearinghouse to manage all of the complexity of all of the
substrates; the parties building the substrates should manage the
complexity.

''Larry Peterson'' But now users have 20 different UIs to deal with --
one for each substrate.

''Chip Elliot'' We all agree that it's not the role of the
clearinghouse to understand everything about everything.

''Larry Peterson'' It'd be great if sfi had a GUI in front of it such
that when a set of rspecs came back they were presented as a list
sorted by aggregates and you go through each aggregate picking what
you want.

''Chip Elliot'' We really want you to integrate into a control
framework. A year ago we didn't understand how difficult it would be;
now we do. We'd like you to integrate, even if it's a very thin
veneer.

''Heidi Picher Dempsey'' Milestones are marked not done; this may be a
difference of opinion. Maybe all we need is to close the loop, get
some documentation, make the code public, etc.

== GpENI ''(James Sterbenz)'' ==

Spent the first 9 months getting stuff up and running. Four nodes were
up and demoed by the last GEC; we were cheating with one of them
because one node's connectivity was broken at the time, but now it's
running correctly. Since GEC4 we've been working on VINI. Just last
week Katerina downloaded GENIwrapper and is starting to work with
it. About seven or eight !PlanetLab nodes. We are not affiliated with
!PlanetLab yet, but we would set up slices for people if they asked. We
are running our own PLC.

''Larry Peterson'' So when you affiliate, people who connect to the
slice manager will gain access to your nodes.

DCN has been ported to our Netgear switch; we're waiting for an
interface from Ciena, and then we'll put it online. It'll be controlled
by DRAGON eventually; we need to get a Ciena 4200, but that switch
isn't available yet.  We have the equipment but not the software to
create dynamically established circuits.  Although we don't have the
ability to put in bandwidth limits.  Currently configuring the year-2
switch, which will go to KU.  The Ciena 4200 doesn't have DCN drivers
at this time.

''Aaron Falk'' This is a shared responsibility; you need to help the
!PlanetLab control framework mature.

Right, we understand that. We have always said that as soon as it
becomes available we'll grab it, and we've done that.

''Jeannie Albrecht'' We use GENIwrapper, it's working pretty well.

Haven't really been in touch with the GMOC folks yet.

How to do dynamic circuits?

''Larry Peterson'' Two approaches: either one Aggregate Manager
controls nodes and network together, or have one Aggregate Manager for
each.  Not sure you need to support both.

Planning to use the MAX code/model.

== SPP ''(Jon Turner)'' ==

The project goal is to acquire and deploy five SPP nodes in Internet2
POPs. Three nodes in Salt Lake City, Kansas City, and Washington DC;
Houston and Atlanta will be added later. 10 x 1GbE, a network
processor subsystem, two GPP engines (server blades) that run the
!PlanetLab environment, and a separate control processor with a
NetFPGA. User training and support, consulting, and development of new
code options. System software to support a GENI-compatible control
interface.

Deliverables: develop an initial version of the component interface
software "matching GENI framework" and demonstrate it on SPP nodes in
the WUSTL lab.

''Aaron Falk'' How are resources partitioned?

On a node you run in a vserver. If you want a fast path on the network
processor, you request the resources you want (bandwidth, ports,
queues, ...). The important thing to understand is that a reservation
isn't just "can I get this now" but also "can I get this tomorrow from
0200 to 0900?" Although we don't currently have a way to retract
resources once given. Every node has a standalone reservation system.
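
As an illustration of that scheduling model (a sketch, not the SPP
code; the capacity number and data layout are invented), a check for a
future-dated reservation might look like:

{{{
#!python
# Illustrative interval check: a request fits only if, at every hour
# of the requested window, already-booked bandwidth plus the request
# stays under capacity.
CAPACITY_MBPS = 1000  # assume one 1GbE external port

# existing reservations: (start_hour, end_hour, mbps)
booked = [(0, 6, 400), (4, 9, 300)]

def can_reserve(start, end, mbps):
    for hour in range(start, end):
        in_use = sum(r for (s, e, r) in booked if s <= hour < e)
        if in_use + mbps > CAPACITY_MBPS:
            return False  # would oversubscribe this hour
    return True

print can_reserve(2, 9, 400)  # False: hours 4-5 already carry 700 Mbps
print can_reserve(6, 9, 600)  # True: hours 6-8 carry only 300 Mbps
}}}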

''Peter O'Neil'' Wasn't there an issue six months ago about VLAN
translation?

Will come back to that. The original deliverable was to deploy two SPP
nodes; we're going to make it three. The plan is to make this initial
deployment available by the end of Spiral 1, so it should be available
to external researchers in the quarter after that.  Architecture and
design doc to the GPO by month 9, SPP component manager interface
documentation to the GPO by month 12. Don't expect to have it all
implemented by the end of spiral 1, probably not until the end of
spiral 2.

The current version of the reservation system doesn't allow you to
reserve external port numbers, but we don't have a nice way to go to a
vserver running in a GPE and force it to close a socket.

''Larry Peterson'' Vserver reboot works...

''Chip Elliot'' So you need to know the code that will run on the
processor to make the reservation.

Flow monitoring, so you can tell which experiment is sending out
packets to a host on the internet. Version 2 of the Network Processor
Datapath Software is pushed back, toward the end of the year.

Y2: Carryover from Y1. Continue development of the component interface
software (integrate fast path and inter-node link bandwidth
reservation into the GENI management framework, use RSpecs for making
multi-node reservations), documentation, tutorials, sample
applications using SPP fast paths, support.

If time permits: implement control software for NetFPGAs, terminate
VLANs to MAX, GpENI, Stanford (?). Physical connection -- can we do it
in time? What do VLANs connect to? Static or dynamic?

SPP nodes will not set up VLANs dynamically.

More if time permits: cool demos at GECs (!OpenFlow in a slice),
demonstrating slice management with GUSH and RAVEN, eliminating IP
tunnels where L2 connections exist, transitioning management to the IU
group, expanding NP capabilities (Netronome, 40 cores on a low-cost
PCIx card).

Where else might we put SPP nodes other than Internet2 POPs?

''Peter O'Neil'' Maybe get some hosting space; we sometimes act as a
RON and provide access to elsewhere. This can be lower cost, but not
free. Campuses would be better, most RONs don't have space.

Can't get much cheaper than Internet2 (we're paying nothing), although
it takes effort to work with them.

''Heidi Picher Dempsey'' Hosting centers are much cheaper than
Internet2 space.

== GPO Spiral 2 Vision ==

Spiral 1 had two goals -- a control framework that controls a lot of
stuff, and creating end-to-end slices across aggregates. Spiral 2 has
only one big goal -- get continuous, big, real research projects
running. This is where the rubber hits the road; is anyone interested?
Can we really make it work?

''Chip Elliot'' ''Operations'' becomes a big part of this, getting
things up and running and keeping them running. This is a change from
what you're used to.

''Chip Elliot'' Spiral 2 is an opportunity to see which of the things
we've developed are of interest to people. It's easy to build
infrastructure that nobody uses; we want to see, of what we've built,
what people want to use.

''Aaron Falk'' Documentation, sample experiments, tutorials, a users
workshop. We want to help get researchers using the infrastructure. So
here are some candidate ideas of what to include in your SoW for next
year, to help us reach Spiral 2 goals.  We're going to be pressing
people toward Spiral 2 goals, prioritizing based on the GENI goals. You
might be interested in doing some work that'll be good for your
aggregate or cluster, but priority will be given to tasks that further
the Spiral 2 goals.

''Chip Elliot'' Instrumentation and measurement: everyone has agreed
that this environment will be really well instrumented. But we have
very few efforts focused on this in Spiral 1.

''Larry Peterson'' I see instrumentation largely as a slice issue.

''Aaron Falk'' For many experiments you want the instrumentation to
have minimal impact.  Also want some extra-slice instrumentation,
e.g. BER on a link that you're running over.

''Chip Elliot'' Is it a researcher's job to resynchronize all of the
clocks?

''Larry Peterson'' There are useful common services -- logging,
archiving, time synchronization, ... that we should provide. The work
of instrumenting is then a slice issue.

''Heidi Picher Dempsey'' After you've collected data, you want to be
able to share it, too.

''Larry Peterson'' !MeasurementLab has three pieces: tools, embargoing
data, platform.

''Aaron Falk'' Negotiate with your system engineer to work out
milestones.

''Chip Elliot'' We also want identity management systems that are not
tool-specific.  We're advocating that we leverage other people's work
for this. Currently recommending Shibboleth and !InCommon for "single
sign-on."

''Larry Peterson'' My immediate reaction is that it's a non-trivial
programming effort.

''Guido Appenzeller'' From our point of view, we trust the clearinghouse.

''Guido Appenzeller'' By centralizing services we simplify things,
right? It relieves you of a lot of the identity management work.

''Chip Elliot'' I agree with that argument, but there is also the
argument that there is benefit in being able to allocate and redeem
tickets.

''Aaron Falk'' If we go to a centralized trust model, will that
preclude outsiders using these resources?

''Chip Elliot'' But there will always be pairwise trust as well.

''Chip Elliot'' Integration and interoperability. Integration has to
continue; it won't all be working by October 1. How many control
frameworks will there be when the dust settles? It's a big question
for Spiral 2.

''James Sterbenz'' Getting integrated with one control framework was
hard -- how am I going to interoperate with multiple?

''Larry Peterson'' ProtoGENI and !PlanetLab share history in the
SFA. Maybe we can bring them together, TIED as well.

''Chip Elliot'' My view is that by October 1 we'll have enough
experience with these control frameworks to determine what we can
do. We may determine that nobody wants to do it. Or maybe we can unify
things enough that not every aggregate will have to implement two or
three interfaces.