Changes between Version 6 and Version 7 of PlasticSlices/FinalReport


Timestamp:
08/04/11 10:14:19
Author:
Josh Smift

  • PlasticSlices/FinalReport

    v6 v7  
    11[[PageOutline]]
    22
    3 This is the final report on the conclusions and results of the [wiki:PlasticSlices Plastic Slices] project.  This project ran ten GENI slices continuously in a nationwide infrastructure that included eight campuses over a period of three months, in order to gain experience managing and operating production-quality GENI resources, and to discover and record issues that early experimenters likely would encounter. We conclude that mesoscale GENI is generally ready for operations with more experimenters, that GENI software is mostly ready to use in a production-quality environment, that GENI experiments are somewhat isolated from each other, and that GENI is easy to use initially, but more challenging to use for complex experiments. All of these areas need continuing improvement, and we make suggestions for ways to do that. The report also includes more details about the environment, experiments, baselines, and tools we used, and further discussion of some of the results highlighting ways in which GENI is a unique research environment. More details are available at [http://groups.geni.net/geni/wiki/PlasticSlices].
     3This is the final report on the conclusions and results of the [wiki:PlasticSlices Plastic Slices] project. This project ran ten GENI slices continuously over a period of three months, in a nationwide infrastructure that included eight campuses, in order to gain experience managing and operating production-quality GENI resources, and to discover and record issues that early experimenters likely would encounter. We conclude that meso-scale GENI is generally ready for operations with more experimenters, that GENI software is mostly ready to use in a production-quality environment, that GENI experiments are somewhat isolated from each other, and that GENI is easy to use initially but more challenging to use for complex experiments. All of these areas need continuing improvement, and we make suggestions for ways to do that. The report also includes more details about the environment, experiments, baselines, and tools we used, and further discussion of some of the results which highlight ways in which GENI is a unique research environment. More details are available at [http://groups.geni.net/geni/wiki/PlasticSlices].
    44
    55[[ThumbImage(TangoGENI:OF-VLAN3715.jpg, thumb=800)]]
    66
    7 (A diagram of one of the core VLANs in the current mesoscale GENI deployment.)
     7(A diagram of one of the core VLANs in the current meso-scale GENI deployment.)
    88
    99= 1. Motivation =
     
    1111The Plastic Slices project in the GENI meso-scale infrastructure was created to set shared campus goals for evolving GENI's infrastructure and to agree on a schedule and resources to support GENI experiments. This first project was an effort to run ten (or more) GENI slices continuously for multiple months -- not merely for the sake of having ten slices running, but to gain experience managing and operating production-quality GENI resources.
    1212
    13 During Spiral 3, campuses expanded and updated their OpenFlow deployments. All campuses agreed to run a GENI AM API compliant aggregate manager, and to support at least four multi-site GENI experiments by GEC 12 (November, 2011). This laid the foundation for GENI to continuously support and manage multiple simultaneous slices that contain resources from GENI AM API compliant aggregates with multiple cross-country layer 2 data paths. (http://groups.geni.net/geni/wiki/GeniApi has more information about what it means to be a "GENI AM API compliant aggregate.") This campus infrastructure can support the transition from building GENI to using GENI continuously in the meso-scale infrastructure. Longer-term, it can also support the transition to at-scale production use in 2012, as originally proposed by each campus.
     13During Spiral 3, campuses expanded and updated their !OpenFlow deployments. All campuses agreed to run a GENI AM API compliant aggregate manager, and to support at least four multi-site GENI experiments by GEC 12 (November 2011). This laid the foundation for GENI to continuously support and manage multiple simultaneous slices that contain resources from GENI AM API compliant aggregates with multiple cross-country layer 2 data paths. (http://groups.geni.net/geni/wiki/GeniApi has more information about what it means to be a "GENI AM API compliant aggregate.") This campus infrastructure can support the transition from building GENI to using GENI continuously in the meso-scale infrastructure. Longer-term, it can also support the transition to at-scale production use in 2012, as originally proposed by each campus.
    1414
    1515This project investigated technical issues associated with managing multiple slices for long-term experiments, and also tried out early operations procedures for supporting those experiments.
     
    2727Additionally, we wanted to discover and record issues that early experimenters likely would encounter, such as:
    2828
    29  * Operational availability and up-time
     29 * Operational availability and uptime
    3030 * Software-related issues, both with user tools and with aggregate software
    3131 * Experiment isolation, i.e. preventing experiments from interfering with each other
     
    3636= 3. Environment =
    3737
    38 The environment we used for the project began with two engineered VLANs (VLAN 3715 and 3716), which were provisioned through the Internet2 and NLR backbones, through various regional networks, and through to the mesoscale deployments on eight campuses. Each of I2 and NLR provided OpenFlow network resources, and each campus provided OpenFlow network and MyPLC compute resources where we ran the experiments. The GENI Meta-Operations Center (GMOC) collected monitoring data, and provided OpenFlow support to campuses. We ran ten GENI slices, using different subsets of the eight campuses, and used those slices to run five artificial experiments on two slices each. We used those experiments in a series of eight baselines, with traffic flows that were representative of real GENI experimenters. All of these resources were allocated with the Omni command-line tool via the GENI API, and we then used simplistic command-line tools to manage the slices and experiments. The various operators used draft versions of GENI operational procedures, and communicated (with each other and with experimenters) via GENI mailing lists and chatrooms.
     38The environment we used for the project began with two engineered VLANs (VLAN 3715 and 3716), which were provisioned through the Internet2 and NLR backbones, through various regional networks, and through to the meso-scale deployments on eight campuses. Each of I2 and NLR provided !OpenFlow network resources, and each campus provided !OpenFlow network and MyPLC compute resources where we ran the experiments. The GENI Meta-Operations Center (GMOC) collected monitoring data, and provided !OpenFlow support to campuses. We ran ten GENI slices, using different subsets of the eight campuses, and used those slices to run five artificial experiments on two slices each. We used those experiments in a series of eight baselines, with traffic flows that were representative of real GENI experiments. All of these resources were allocated with the Omni GENI API client, and we then used a variety of common Linux command-line tools to manage the slices and experiments. The various operators used draft versions of GENI operational procedures, and communicated with each other and with experimenters via GENI mailing lists and chatrooms.
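For a flavor of what this allocation looked like in practice, here's a minimal sketch of the Omni commands involved (the aggregate URL and RSpec filename are illustrative placeholders, and exact options vary by Omni version; the Tools section below has pointers to the real details):

{{{
# Create a slice, then allocate a sliver at one aggregate
# (the aggregate URL and RSpec filename here are placeholders).
omni.py createslice plastic-101
omni.py -a https://myplc.example.edu:12346/ createsliver plastic-101 myplc-bbn.rspec

# Check the status of the sliver we just created.
omni.py -a https://myplc.example.edu:12346/ sliverstatus plastic-101
}}}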
    3939
    4040We had very much hoped to include real experiments and/or real experimenters in the project by the end, but we concluded that there weren't any non-artificial experiments ready to run at either of the two interim points when we reviewed this question during the project. This is one of the central things we intend to accomplish in our follow-on work.
     
    5050We found that resource operators need to communicate more than they currently do, both with each other about their plans and about issues requiring coordination between sites, and with experimenters about outages. This improved over the course of the project, but further improvement is still needed.
    5151
    52 We also found that it was difficult to identify the relationships between the various pieces that make up GENI. Given a resource, it's still hard to determine what sliver it's a part of, and what slice that sliver is a part of, and what user is responsible for that slice. We used naming conventions (e.g. all slices were named "plastic-" plus a number, all OpenFlow slivers started with the hostname of the system running the OpenFlow controller, all the slivers in a slice included the number of the slice they were a part of, etc) to make this easier, but real experimenters are unlikely to be so consistent (and certainly aren't required to be), so this solution won't scale well, especially to dozens or hundreds of experimenters.
     52We also found that it was difficult to identify the relationships between the various pieces that make up GENI. Given a resource, it's still hard to determine what sliver it's a part of, and what slice that sliver is a part of, and what user is responsible for that slice. We used naming conventions to make this easier (e.g. all slices were named "plastic-" plus a number, all !OpenFlow slivers started with the hostname of the system running the !OpenFlow controller, all the slivers in a slice included the number of the slice they were a part of, etc), but real experimenters are unlikely to be so consistent (and certainly aren't required to be), so this solution won't scale well, especially to dozens or hundreds of experimenters.
    5353
    5454We also concluded that resources weren't as available as they should be in a production-quality environment. Aggregate managers were sometimes down unexpectedly, and the software that runs them is still undesirably buggy. (On the plus side, the software developers we worked with were very responsive to our bug reports.) Also, operators sometimes found it difficult to identify which versions of software they were running, or to determine who was running a newer version, as some software is still largely identified by Git commit hashes rather than by sequential release numbers.
     
    5656We identified some specific ideas for improvement:
    5757
    58  * We'd like to work with campuses to set and measure up-time targets for the aggregates they support. We understand that there will be variations from campus to campus, potentially fairly wide variations, and that's not fundamentally a problem, as long as the targets are well-documented and publicized, so that experimenters can set their expectations accurately.
     58 * We'd like to work with campuses to set and measure uptime targets for the aggregates they support. We understand that there will be variations from campus to campus, potentially fairly wide variations, and that's not fundamentally a problem, as long as the targets are well-documented and publicized, so that experimenters can set their expectations accurately.
    5959
    6060 * Operators should continue to give feedback and input to software developers on features and priorities.
     
    6666Is meso-scale GENI software ready to use in a production-quality environment? We concluded that it is, but again with some caveats.
    6767
    68 Much of the software we rely on is still new, and at an early stage of its life cycle. This is unavoidable to some extent, and improves with time. Similarly, GENI may be the first large-scale deployment for some software, whether for the software as a whole, or for new versions as they come out. It may be difficult for a developer to test their software adequately in their development environment, which is unlikely to be as large, geographically diverse, and otherwise heterogeneous, as the whole of GENI.
     68Much of the software we rely on is still new, and at an early stage of its life cycle. This is unavoidable to some extent, and improves with time. Similarly, GENI may be the first large-scale deployment for some software, whether for the software as a whole, or for new versions as they come out. It may be difficult for a developer to test their software adequately in their local development environment, which is unlikely to be as large, geographically diverse, and otherwise heterogeneous, as the whole of GENI.
    6969
    7070Some ideas for improvement:
     
    7272 * GENI racks (still on the horizon) will make production environments more similar, giving developers a somewhat smaller target, and reducing some of the difficulty in developing for a large heterogeneous environment.
    7373
    74  * InCNTRE, a Software Defined Networking initiative at Indiana University, will emphasize interoperability and commercial uses of OpenFlow, which should help to make OpenFlow in particular more mature.
     74 * InCNTRE, a Software Defined Networking initiative at Indiana University, will emphasize interoperability and commercial uses of !OpenFlow, which should help to improve maturity for !OpenFlow in particular.
    7575
    7676 * Software developers could use GENI slices and resources to test their software in ways that aren't feasible within their development environments, such as with hardware from multiple vendors, over long-distance network links, etc -- using GENI resources before the software is fully deployed to the entirety of GENI.
     
    9090 * Topology problems with one experiment can affect other experiments, even experiments which have a very simple topology themselves, e.g. by creating a broadcast storm, leaking traffic across VLANs unexpectedly, etc.
    9191
    92  * All of the bandwidth in GENI is shared, with no easy ways to set either hard limits on maximum use, or to request a minimum requirement.
     92 * All of the bandwidth in GENI is shared, with no easy ways for an experiment or slice either to set hard limits on its maximum usage, or to request a minimum requirement for its dedicated use.
    9393
    9494Improving isolation is already an active area of work within GENI, but we had some specific ideas for improvements:
     
    9898 * When that isn't enough, the GPO and GMOC should work to develop better procedures to handle communication between operators, and with experimenters, when there are issues related to isolation.
    9999
    100  * Additional hard limits on resource usage, like QoS in the OpenFlow protocol and in backbone hardware, which are planned for later releases, will help keep experiments more isolated from each other.
     100 * Additional hard limits on resource usage, like QoS in the !OpenFlow protocol and in backbone hardware (which are planned for later releases), will help keep experiments more isolated from each other.
    101101
    102102== 4.4. Ease of use ==
     
    104104Is meso-scale GENI easy for experimenters to use? We found that it is in some ways, but not in others.
    105105
    106 Getting started and doing very simple things is in fact fairly easy: There are few barriers to entry, and they're generally low. However, experimenter tools to enable more sophisticated experiments are only just now interoperating with GENI -- although this is improving rapidly, and in fact improved over the course of the project. There are also specific usability concerns with OpenFlow, where the Expedient Opt-In Manager requires manual approval from an operator at every OpenFlow aggregate, whenever any OpenFlow resource is allocated. This can be daunting to new experimenters, and can cause significant delays for anyone.
     106Getting started and doing very simple things is in fact fairly easy: There are few barriers to entry, and they're generally low. However, experimenter tools to enable more sophisticated experiments are only just now interoperating with GENI -- although this is improving rapidly, and in fact improved over the course of the project. There are also specific usability concerns with !OpenFlow, where the Expedient Opt-In Manager requires separate manual approval from an operator whenever any !OpenFlow sliver is created or changed. This can be daunting to new experimenters, and can cause significant delays for anyone.
    107107
    108108Usability is another area where work is already active in GENI (most of the Experimenter track at GEC 11 focused on tools), but we again have some ideas for additional improvements:
     
    110110 * We'd like to encourage experimenters to try out newly-integrated tools, and actively request bug fixes and features. Experimenter demand is part of a very positive feedback loop, encouraging developers to create better tools, which in turn makes GENI more usable to more experimenters.
    111111
    112  * Similar to the idea of using GENI slices and resources to test aggregate software, tool developers may be able to use GENI resources to test their tools on a larger scale than their own development environments.
    113 
    114  * Work on the general issue of GENI stitching may help improve the OpenFlow opt-in situation.
     112 * Similar to the idea of using GENI slices and resources to test aggregate software, tool developers may be able to use GENI resources to test their tools on a larger scale than their own local development environments.
     113
     114 * Ongoing work on the general issue of stitching in GENI may also help improve the !OpenFlow opt-in situation.
    115115
    116116= 5. Backbone resources =
    117117
    118 The Plastic Slices project used the GENI network core in Internet2 and NLR, which includes two OpenFlow-controlled VLANs (3715 and 3716) on a total of ten switches (five in each of I2 and NLR). The OpenFlow network in each provider was managed by an Expedient OpenFlow aggregate manager.
     118The Plastic Slices project used the GENI network core in Internet2 and NLR, which includes two !OpenFlow-controlled VLANs (3715 and 3716) on a total of ten switches (five in each of I2 and NLR). The !OpenFlow network in each provider was managed by an Expedient !OpenFlow aggregate manager.
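As a sketch, an experimenter could inspect what one of these aggregates offers with Omni's listresources command (the aggregate URL here is an illustrative placeholder for a provider's actual Expedient URL):

{{{
# Fetch the advertisement RSpec from one core OpenFlow aggregate;
# -o writes the results to a file. The URL is a placeholder.
omni.py -a https://expedient.example.net:1443/openflow/gapi/ listresources -o
}}}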
    119119
    120120http://groups.geni.net/geni/wiki/NetworkCore has more information about the GENI network core and how to use it.
     
    122122= 6. Campus resources =
    123123
    124 Each campus had a private OpenFlow-controlled VLAN (typically VLAN 1750), which was cross-connected to the two backbone VLANs (which were typically not OpenFlow-controlled on campuses), and managed by an Expedient OpenFlow aggregate manager. Each campus also had a MyPLC aggregate, with two plnodes (or in some cases more), each of which included a dataplane interface connected to VLAN 1750 (and others).
    125 
    126 This network topology is described in more detail at http://groups.geni.net/geni/wiki/NetworkCore, and http://groups.geni.net/geni/wiki/TangoGENI#ParticipatingAggregates has links to the aggregate info page for each aggregate.
     124Each campus had a private !OpenFlow-controlled VLAN (typically VLAN 1750), which was cross-connected to the two backbone VLANs (which were typically not !OpenFlow-controlled on campuses), and managed by an Expedient !OpenFlow aggregate manager. Each campus also had a MyPLC aggregate, with two plnodes (or in some cases more), each of which included a dataplane interface connected to VLAN 1750 (and others).
     125
     126This network topology is described in more detail at http://groups.geni.net/geni/wiki/OpenFlow/CampusTopology, and http://groups.geni.net/geni/wiki/TangoGENI#ParticipatingAggregates has links to the aggregate info page for each aggregate.
    127127
    128128= 7. Monitoring =
     
    130130|| [[ThumbImage(PlasticSlices/BaselineEvaluation/Baseline5Traffic:baseline5-txbytes.png, thumb=500)]] || [[ThumbImage(PlasticSlices/BaselineEvaluation/Baseline5Traffic:baseline5-rxbytes.png, thumb=500)]] ||
    131131
    132 (Graphs of bytes sent (first graph TX) and bytes received (second graph RX) in each slice during Baseline 5.)
    133 
    134 During the project, all mesoscale campuses were configured to send monitoring data to the GMOC. Some sites initially configured resources that didn't use NTP, but revised the configurations after starting monitoring, because NTP is essential for correlating data between sites. The GMOC offers an interface called SNAPP for browsing the data that they collect, visible at http://gmoc-db.grnoc.iu.edu/api-demo/ (Despite the name, this is a production GMOC web interface with a number of options for searching and displaying data). In addition, the GMOC offers an API which anyone can use to download raw GMOC-collected data to analyze, graph, etc.  The GPO used this API to create some useful Plastic Slices monitoring graphs (samples included in this report, and more available through the GENI wiki). The GPO data is of interest to both operators and experimenters, covers various levels of granularity, and presents some per-slice information.  The per-slice information relies on naming conventions to tie together slices and slivers in this implementation.
     132(Graphs of bytes sent (first graph) and bytes received (second graph) in each slice during Baseline 5.)
     133
     134During the project, all meso-scale campuses were configured to send monitoring data to the GMOC. Some sites initially configured resources that didn't use NTP, but revised the configurations after starting monitoring, because NTP is essential for correlating data between sites. The GMOC offers an interface called SNAPP for browsing the data that they collect, visible at http://gmoc-db.grnoc.iu.edu/api-demo/ (despite the word "demo" in the URL, this is a production GMOC web interface with a number of options for searching and displaying data). In addition, the GMOC offers an API which anyone can use to download raw GMOC-collected data to analyze, graph, etc. The GPO used this API to create some useful Plastic Slices monitoring graphs (such as the graphs above; more are available through the GENI wiki). The GPO data is of interest to both operators and experimenters, covers various levels of granularity, and presents some per-slice information (although the per-slice information relies on naming conventions to tie together slices and slivers in the current implementation).
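As a sketch of how one might pull data programmatically (the endpoint path and query parameters here are hypothetical; consult the GMOC's API documentation for the real interface):

{{{
# Download raw GMOC-collected measurement data for offline analysis.
# The endpoint path and query parameters below are hypothetical examples.
curl -o plastic-101-data.xml \
  'http://gmoc-db.grnoc.iu.edu/api-demo/data?slice=plastic-101&start=2011-06-01&end=2011-06-07'
}}}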
    135135
    136136http://groups.geni.net/geni/wiki/PlasticSlices/MonitoringRecommendations has links to a variety of monitoring sites and information.
     
    138138= 8. Slices =
    139139
    140 We created ten slices, named plastic-101 through plastic-110, to make them easy to identify, and to make it easy to identify the slivers within them. Each included a sliver on the MyPLC plnodes at each campus, and an OpenFlow sliver controlling an IP subnet throughout the network (10.42.X.0/24), each of which was controlled by a simple OpenFlow controller (the NOX 'switch' module). For simplicity's sake, we used VLAN 3715 for the odd-numbered slices and VLAN 3716 for the even-numbered ones. The slices included various subsets of the eight campuses: Two that included all eight sites, two at the endpoints of the core VLANs, etc.
    141 
    142 http://groups.geni.net/geni/wiki/PlasticSlices/SliceStatus has a table of which sites were in which slice during each baseline.
     140We created ten slices, named plastic-101 through plastic-110, to make them easy to identify, and to make it easy to identify the slivers within them. Each included a sliver on the MyPLC plnodes at each campus, and an !OpenFlow sliver at each campus and in I2 and NLR, with an IP subnet throughout the network (10.42.X.0/24), controlled by a simple !OpenFlow controller (the NOX 'switch' module). For simplicity's sake, we used VLAN 3715 for the odd-numbered slices and VLAN 3716 for the even-numbered ones. The slices included various subsets of the eight campuses: Two that included all eight sites, two at the endpoints of the core VLANs, etc.
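The NOX 'switch' module simply makes each slice's !OpenFlow network behave like a standard learning switch, so launching a slice's controller is roughly a one-liner (a sketch; the build directory and listening port shown are typical NOX defaults, not project-specific values):

{{{
# Run NOX with its stock 'switch' (learning switch) module, listening
# for connections from OpenFlow switches on TCP port 6633.
cd nox/build/src
./nox_core -v -i ptcp:6633 switch
}}}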
     141
     142http://groups.geni.net/geni/wiki/PlasticSlices/SliceStatus has a table of which sites' resources were included in which slice during each baseline.
    143143
    144144= 9. Experiments =
     
    150150= 10. Baselines =
    151151
    152 We ran a series of eight baselines using these slices and experiments. We first confirmed the basic functionality and stability of the environment, by sending 1 GB of data across each slice, then repeating that once a day for three days, and then repeating that once a day for six days. We then began sending continuous traffic, 24 hours a day; first 1 Mbit/sec for a full day, then 10 Mb/s for a day, and then 10 Mb/s for six days. The final two baselines tested GENI procedures at a larger scale: We performed an Emergency Stop test with BBN, and tried creating many slices very quickly (one per second, for 10, 100, and 1000 seconds).
     152We ran a series of eight baselines using these slices and experiments. We first confirmed the basic functionality and stability of the environment, by sending 1 GB of data across each slice, then repeating that once a day for three days, and then repeating that once a day for six days. We then began sending continuous traffic, 24 hours a day: First 1 Mb/s for a full day, then 10 Mb/s for a day, and then 10 Mb/s for six days. The final two baselines tested GENI procedures at a larger scale: We performed an Emergency Stop test with BBN, and tried creating many slices very quickly to simulate user load (one per second, for 10, 100, and 1000 seconds).
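To give a flavor of these baselines, the continuous-traffic and slice-creation steps looked roughly like the sketch below (iperf stands in for the actual experiment commands, and the addresses and slice names are illustrative; the Tools page linked below documents the real invocations):

{{{
# On a receiving plnode in a slice:
iperf -s -u

# On a sending plnode: a full day of continuous UDP traffic at 10 Mb/s
# (the 10.42.x.x dataplane address is illustrative).
iperf -c 10.42.105.102 -u -b 10M -t 86400

# The slice-creation test, roughly: one new slice per second
# (slice names here are illustrative).
for i in $(seq 1 100); do omni.py createslice load-test-$i & sleep 1; done
}}}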
    153153
    154154http://groups.geni.net/geni/wiki/PlasticSlices/BaselineEvaluation has a summary of the baselines, with links to pages with more details, which themselves have links to full logs.
     
    163163 * A directory of common user configuration files (dotfiles).
    164164
    165 We briefly investigated experimenter tools such as Gush and Raven, but at the time the project began, neither seemed sufficiently well-integrated with GENI to be easily used.  We expect to revisit experimenter tools in later projects.
     165We briefly investigated experimenter tools such as Gush and Raven, but at the time the project began, neither seemed sufficiently well-integrated with GENI to be easily used. We expect to revisit experimenter tools in later projects.
    166166
    167167http://groups.geni.net/geni/wiki/PlasticSlices/Tools has many more details about the tools we used and how we used them, all the configuration files, etc.
     
    173173We also found some results that would have been surprising on the regular Internet (or a similar traditional IP network), but which in fact demonstrate precisely the ways in which GENI is different from regular networks, often in advantageous ways.
    174174
    175 == 12.1. Packet loss and OpenFlow ==
    176 
    177 One of the results that would've been surprising on a regular network is packet loss, e.g. the 8% loss from BBN to Clemson with UDP in a 40-second test. This turns out to be related to our simplistic use of OpenFlow: As the first packet hits each OF switch in the path to the destination, across the entire country, each has to connect back to the slice's OF controller in Boston for instructions. This can take a few seconds to complete, but once the controller has installed rules in the switch's flowtable, subsequent packets flow at line speed, as expected. Thus, packet loss statistics like this typically reflect "the first 8% of packets failed", not "out of every hundred packets, eight of them failed".
    178 
    179 The logs from the client and server make this clear. On the server, all you see is the overall packet loss:
     175== 12.1. Packet loss and !OpenFlow ==
     176
     177One of the results that would've been surprising on a regular network is packet loss, e.g. nearly 8% loss from BBN to Clemson with UDP in a 40-second test. This turns out to be related to our simplistic use of !OpenFlow: As the first packet hits each OF switch in the path to the destination, across the entire country, each has to connect back to the slice's OF controller in Boston for instructions. This can take a few seconds to complete, but once the controller has installed rules in the switch's flowtable, subsequent packets flow at line speed, as expected. Thus, packet loss statistics like this typically reflect "the first 8% of packets failed", not "out of every hundred packets, eight of them failed".
     178
     179This becomes clear when the logs from the client and server are compared. On the client (summarizing what the server said it saw), all you see is the overall packet loss:
    180180
    181181{{{
     
    185185}}}
    186186
    187 On the client, however, you see 76% loss in the first second, and loss is minimal after that:
     187On the server (in the detailed second-by-second logs), however, you see 76% loss in the first second, and loss is minimal after that:
    188188
    189189{{{
     
    202202}}}
    203203
    204 Packet loss is generally not desirable, but it highlights the fact that OpenFlow allows you to control traffic in GENI in ways that aren't possible in a regular network. Using OpenFlow doesn't require packet loss, of course: For example, we could have used a smarter (experiment-specific) controller that added flowtable rules to the switches before we even began sending traffic. Or, if we didn't want to use a more complicated controller for other reasons, we could have sent some seed traffic to cause the simplistic controller to create the flows, before we began sending the traffic that we actually measured.  OpenFlow in GENI gives you a great deal of flexibility.
     204Packet loss is generally not desirable, but it highlights the fact that !OpenFlow allows you to control traffic in GENI in ways that aren't possible in a regular network. Using !OpenFlow doesn't require packet loss, of course: For example, we could have used a smarter (experiment-specific) controller that added flowtable rules to the switches before we even began sending traffic. Or, if we didn't want to use a more complicated controller for other reasons, we could have sent some seed traffic to cause the simplistic controller to create the flows, before we began sending the traffic that we actually measured. !OpenFlow in GENI gives experimenters a great deal of flexibility.
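The seed-traffic workaround, for example, is as simple as the sketch below (addresses illustrative): a few initial packets cause the controller to install flowtable rules along the entire path, so the measured traffic that follows sees no startup loss.

{{{
# Prime the path: the first few packets trigger flow setup at each
# OpenFlow switch along the way (and may be slow or lost).
ping -c 5 10.42.105.102

# Now measure: the flowtable rules are already in place, so traffic
# flows at line speed from the first packet.
iperf -c 10.42.105.102 -u -b 10M -t 40
}}}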
    205205
    206206== 12.2. Latency and topology ==
    207207
    208 Another result that would've been surprising on a regular network is low throughput, e.g. between two geographical nearby sites like BBN (in Boston) and Rutgers (in New Jersey). This turns out to be due to a very valuable feature of GENI: The ability to create and use different topologies in the core network. Not all GENI network paths are optimized for distance -- deliberately so, since some experiments specifically want long links with high latency. One of the paths available during this project took nearly ten thousand geographical miles to get from BBN to Rutgers, for example.
    209 
    210 The RTT results from a ping test between BBN and Rutgers, using each of four different paths, shows this clearly. BBN is a useful test case for this because we connect to both NLR and Internet2, thus giving us four possible paths to each other campus (two VLANs through each of the two providers).
     208Another result that would've been surprising on a regular network is low throughput, e.g. between two geographically nearby sites like BBN (in Boston) and Rutgers (in New Jersey). This turns out to be due to a very valuable feature of GENI: The ability to create and use different topologies in the core network. Not all GENI network paths are optimized for distance -- deliberately so, since some experiments specifically want long links with high latency. One of the paths available during this project took nearly ten thousand geographical miles to get from BBN to Rutgers, for example.
     209
     210The RTT results from a ping test between BBN and Rutgers, using each of four different paths, show this clearly. BBN is a useful test case for this because we connect to both NLR and Internet2, thus giving us four possible paths to each other campus (two VLANs through each of the two providers).
    211211
    212212When you connect via BBN's connection to NLR, on VLAN 3715, the traffic path is Boston - Chicago - Atlanta - DC - NJ, and the ping RTT is 74.3 ms:
     
    250250= 13. Future work =
    251251
    252 This report concludes the formal part of the Plastic Slices project, but we plan to continue using the meso-scale infrastructure to run experiments and tests. We'll publish additional plans and results on the GENI wiki. We intend to keep data flowing continuously for the next few months, to allow us to continue to develop and test monitoring and operational procedures and practices, and to integrate new software and hardware.  We intend to involve actual experimenters in future work, and to investigate some of the initial Plastic Slices results in more detail.
     252This report concludes the formal part of the Plastic Slices project, but we plan to continue using the meso-scale infrastructure to run experiments and tests. We'll publish additional plans and results on the GENI wiki, but in general terms, we intend to keep data flowing continuously for the next few months, to allow us to continue to develop and test monitoring and operational procedures and practices, and to integrate new software and hardware. We also expect to involve actual experimenters in future work, and to investigate some of the initial Plastic Slices results in more detail.
    253253
    254254Specific goals include:
     
    267267= 14. Thanks =
    268268
    269 This project would have gone nowhere without countless hours of work and support from all the participants: The campuses (Clemson, Georgia Tech, Indiana, Rutgers, Stanford, Washington, and Wisconsin), their regional networks (NoX, SoX, Indiana GigaPoP, MAGPI, CENIC, PNWGP, and WiscNet), the core backbones (Internet2 and NLR), the monitoring work done by the GMOC and GPO staff, and the software developers at Stanford (OpenFlow), Princeton (MyPLC), Utah (ProtoGENI), the GMOC, and the GPO. Our deep and heartfelt thanks to all of them.
     269This project would have gone nowhere without countless hours of work and support from all the participants: The campuses (Clemson, Georgia Tech, Indiana, Rutgers, Stanford, Washington, and Wisconsin), their regional networks (NoX, SoX, Indiana GigaPoP, MAGPI, CENIC, PNWGP, and !WiscNet), the core backbones (Internet2 and NLR), the monitoring work done by the GMOC and GPO staff, and the software developers at Stanford (!OpenFlow), Princeton (MyPLC), Utah (ProtoGENI), the GMOC, and the GPO. Our deep and heartfelt thanks to all of them.