     1[[PageOutline(1-3)]]
     2
     3= GENI Rack Aggregate Manager Acceptance Test Plan =
     4
This page captures the GENI Racks Acceptance Test Plan to be executed for the BBN GENI Rack Aggregate Manager (GRAM). This test plan is based on the [http://groups.geni.net/geni/wiki/GeniRacks GENI Racks Requirements] and outlines all features that are normally validated for GENI Racks. The goal of this effort is to capture the state of the current features and to generate a list of missing features that are required to meet all GENI Racks requirements.
     6
     7The BBN GRAM Acceptance Test effort will generate the following:
 - a [wiki:GENIRacksHome/AcceptanceTests/GRAMAcceptanceTestsPlan GRAM Acceptance Test Plan], this document, written on 05/16/2013.
 - tickets to track GRAM issues discovered in testing (internal only)
     10 - an '''[wiki:GENIRacksHome/GRAMRacks/AcceptanceTestStatusApr2014  Acceptance Test Status - April 2014]''' page where test status and logs are found.
     11 - an intermediate '''[wiki:GENIRacksHome/GRAMRacks/AcceptanceTestReportApr2014 Acceptance Test Report - April 2014]''' page to summarize state.
     12
     13== Assumptions and Dependencies ==
     14
     15The following assumptions are made for all tests described in this plan:
     16
 * GRAM Clearinghouse credentials will be used for all tests.
     18 * GRAM is the slice authority for all tests in this plan.
     19 * No tests using the GENI Clearinghouse and GPO ProtoGENI credentials are planned for the initial tests.
     20 * Resources for each test will be requested from the GRAM Aggregate Manager.
 * Compute resources are VMs unless otherwise stated; there are no dedicated devices available for this initial evaluation.
 * All Aggregate Manager requests are made via the Omni command line tool, which uses the GENI AM API; a minimal scripted invocation is sketched after this list.
     23 * In all scenarios, one experiment is always equal to one slice.
 * Currently there is only one GRAM rack, which has the following test implications: all scenarios that are meant to run on one rack will be run within one VM server, and all scenarios that are meant to span multiple racks will use VMs on multiple servers.
     25 * GRAM will be used as the interface to the rack !OpenFlow resources in the !OpenFlow test cases.
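
The following minimal sketch illustrates the kind of Omni invocation assumed above. It is not part of the plan itself: the aggregate URL is a placeholder, and it assumes a standard Omni installation (`omni.py` on the PATH with a configured `omni_config` holding the GRAM credentials).

{{{
#!python
# Minimal sketch of driving the Omni command line tool from a script.
# Assumptions: omni.py is installed and configured (omni_config pointing at
# the GRAM clearinghouse credentials); the AM URL below is a placeholder.
import subprocess

GRAM_AM_URL = "https://gram-am.example.net:5001"  # placeholder, not a real URL

def omni(*args):
    """Run an omni command against the GRAM AM and return its output."""
    cmd = ["omni.py", "-a", GRAM_AM_URL] + list(args)
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout + result.stderr

if __name__ == "__main__":
    # List the resources advertised by the GRAM aggregate manager.
    print(omni("listresources"))
}}}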
     26
It is expected that the GRAM Aggregate Manager will provide an interface into the VLAN-based Multiplexed !OpenFlow Controller (VMOC) aggregate. If the GRAM interface to VMOC is not available, tests will be executed by submitting requests directly to VMOC.
     28
If the GRAM solution does not provide support for experimenters uploading custom VM images to the rack, any test case using custom images will be modified to use available images for the rack. The ability to upload a custom VM image to the GRAM rack will be tested when it becomes available.
     30
     31 
     32Test Traffic Profile:
     33
     34 * Experiment traffic includes UDP and TCP data streams at low rates to ensure end-to-end delivery
     35 * Traffic exchange is used to verify that the appropriate data paths are used and that traffic is delivered successfully for each test described.
 * Performance measurements are not a goal of these acceptance tests, but some samples will be collected with iperf to characterize the default performance in some scenarios described in this plan; a sample invocation is sketched below.
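
The sample below sketches how an iperf sample might be collected. It assumes iperf (version 2) is installed on both endpoints and that an `iperf -s` server is already running on the far end; the destination address and durations are placeholders.

{{{
#!python
# Sketch of collecting low-rate TCP and UDP samples with iperf.
# Assumes an "iperf -s" server is already running on DEST (placeholder).
import subprocess

DEST = "10.0.0.2"   # placeholder data plane address of the far-end VM

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# TCP sample: default settings, 30 second run.
run(["iperf", "-c", DEST, "-t", "30"])

# UDP sample at a deliberately low rate (1 Mbit/s) to verify end-to-end delivery.
run(["iperf", "-c", DEST, "-u", "-b", "1M", "-t", "30"])
}}}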
     37
     38
     39= Acceptance Tests Descriptions =
     40
This section describes each acceptance test by defining its goals, topology, and outline test procedure. Test cases are listed by priority in the sections below. The cases that verify the largest number of requirement criteria are typically listed at a higher priority. The prerequisite tests are usually executed first to verify that baseline monitoring and administrative functions are available; this allows the execution of the experimenter test cases. Additional monitoring and administrative tests, described in later sections, are also run before the completion of the acceptance test effort.
     42
For the GRAM Acceptance Test evaluation some of these administrative and monitoring features may not be available, but the tests are still planned in order to capture the availability of expected features.
     44
     45== Administration Prerequisite Tests ==
     46
Administrative Acceptance tests verify support of administrative management tasks and focus on verifying priority functions for each of the rack components. The set of administrative features described in this section is verified initially. Additional administrative tests are described in a later section and are executed before the acceptance test completion.
     48
     49
     50=== GR-ADM-1: Rack Receipt and Inventory Test ===
     51
This acceptance test uses BBN as an example site because it requires physical access to the rack. The goal of this test is to verify that administrators can integrate the rack into the standard local procedures for systems hosted by the site.
     53
     54==== Procedure ====
     55
     56Outline:
     57
     58 * Power and wire the BBN rack
 * Administrator configures the gramm.gpolab.bbn.com DNS namespace and 192.1.242.128/25 IP space, and enters all public IP addresses used by the rack into DNS (a record-generation sketch follows this list).
 * Administrator requests and receives administrator accounts on the rack and receives read access to all GRAM monitoring of the rack.
     61 * Administrator inventories the physical rack contents, network connections and VLAN configuration, and power connectivity, using standard operational inventories.
 * Administrator, GRAM team, and GMOC exchange contact information and change control procedures, and GRAM operators subscribe to GENI operations mailing lists and submit their contact information to GMOC.
     63 * Administrator reviews the documented parts list, power requirements, physical and logical network connectivity requirements, and site administrator community requirements, verifying that these documents should be sufficient for a new site to use when setting up a rack.
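
The sketch below shows one way the DNS portion of this procedure could be scripted: it enumerates the 192.1.242.128/25 block named above and emits forward and reverse record stubs. The host naming scheme is purely hypothetical and would be replaced by the real rack inventory.

{{{
#!python
# Sketch of generating DNS record stubs for the rack's public IP space.
# The 192.1.242.128/25 block and the gramm.gpolab.bbn.com namespace come from
# the procedure above; the host naming scheme is a placeholder.
import ipaddress

RACK_DOMAIN = "gramm.gpolab.bbn.com"
RACK_NET = ipaddress.ip_network("192.1.242.128/25")

for i, addr in enumerate(RACK_NET.hosts(), start=1):
    name = "host%03d.%s" % (i, RACK_DOMAIN)   # placeholder naming scheme
    print("%s.  IN A    %s" % (name, addr))
    print("%s.  IN PTR  %s." % (addr.reverse_pointer, name))
}}}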
     64
     65
     66=== GR-ADM-2: Rack Administrator Access Test ===
     67
     68This test verifies local and remote administrative access to rack devices.
     69
     70==== Procedure ====
     71
     72Outline:
     73
 1. For each type of rack infrastructure node, including VM server hosts and any VMs running infrastructure support services, use a site administrator account to test the following (a scripted version of these checks follows the list):
     75   * Login to the node using public-key SSH.
     76   * Verify that you cannot login to the node using password-based SSH, nor via any unencrypted login protocol.
     77   * When logged in, run a command via sudo to verify root privileges.
     78 2. For each rack infrastructure device (switches, remote PDUs if any), use a site administrator account to test:
     79   * Login via SSH.
     80   * Login via a serial console (if the device has one).
     81   * Verify that you cannot login to the device via an unencrypted login protocol.
     82   * Use the "enable" command or equivalent to verify privileged access.
 3. Verify that the GRAM remote console solution for rack hosts can be used to access the consoles of all server hosts and experimental hosts:
     84   * Login via SSH or other encrypted protocol.
     85   * Verify that you cannot login via an unencrypted login protocol.
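
A scripted sketch of the node-access checks in step 1 is shown below. The node list and administrator account are placeholders, and it assumes the administrator's SSH key is loaded and that the account has sudo rights; it is illustrative only, not the required procedure.

{{{
#!python
# Sketch of the per-node access checks: key-based SSH must work, sshd must
# not offer password authentication, and sudo must grant root privileges.
# ADMIN_USER and NODES are placeholders for the real rack inventory.
import subprocess

ADMIN_USER = "gramadmin"                                  # placeholder
NODES = ["control.example.net", "compute1.example.net"]   # placeholders

def ssh(node, *remote_cmd, extra_opts=()):
    cmd = ["ssh", "-o", "BatchMode=yes", *extra_opts,
           "%s@%s" % (ADMIN_USER, node), *remote_cmd]
    return subprocess.run(cmd, capture_output=True, text=True)

for node in NODES:
    # 1. Public-key login works and sudo yields root (assumes NOPASSWD sudo).
    assert ssh(node, "sudo", "-n", "id", "-u").stdout.strip() == "0", node
    # 2. Offering no client auth methods makes sshd list what it would accept;
    #    "password" must not appear in that list.
    probe = ssh(node, extra_opts=("-o", "PreferredAuthentications=none"))
    assert "password" not in probe.stderr, "password auth offered on " + node
}}}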
     86
     87
     88== Monitoring Rack Inspection Prerequisite Tests ==
     89
     90These tests verify the availability of information needed to determine rack state, and needed to debug problems during experimental testing. Also verified is the ability to determine the rack components' test-readiness.  Additional monitoring tests are defined in a later section to complete the validation in this section.
     91
     92=== GR-MON-1: Control Network Software and VLAN Inspection Test ===
     93
     94This test inspects the state of the rack control network, infrastructure nodes, and system software.
     95
     96==== Procedure ====
     97
 * A site administrator enumerates the processes on each of the server host, the Control VM, the VMOC VM, etc., and on an experimental node configured for !OpenStack, that listen for network connections from other nodes; identifies which version of which software package each belongs to; and verifies that we know the source of each piece of software and could get access to its source code (a listening-service sketch follows this list).
     99 * A site administrator reviews the configuration of the rack control plane switch and verifies that each experimental node's control and console access interfaces are on the expected VLANs.
     100 * A site administrator reviews the MAC address table on the control plane switch, and verifies that all entries are identifiable and expected.
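
A sketch of the listening-service enumeration is below. It assumes a Linux infrastructure node with the iproute2 `ss` tool and a Debian/Ubuntu package manager (an rpm-based node would use `rpm -qf` instead), and the PID shown is a placeholder.

{{{
#!python
# Sketch of enumerating listening services on an infrastructure node and
# mapping a listening process back to the software package that provides it.
import subprocess

def sh(cmd):
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

# All TCP/UDP sockets in the listening state, with owning PID/process name.
print(sh("ss -tulpn"))

# For a given PID (placeholder), find its binary and the package providing it.
pid = "1234"   # placeholder: substitute a PID reported by ss above
exe = sh("readlink -f /proc/%s/exe" % pid).strip()
print(exe, "->", sh("dpkg -S %s" % exe).strip())
}}}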
     101
     102=== GR-MON-2: GENI Software Configuration Inspection Test ===
     103
     104This test inspects the state of the GENI AM software in use on the rack.
     105
     106==== Procedure ====
     107
 * A site administrator uses available system data sources (process listings, monitoring output, system logs, etc.) and/or AM administrative interfaces to determine the configuration of GRAM resources (an RSpec-parsing sketch follows this list):
     109   * How many experimental nodes are available for bare metal use, how many are configured as !OpenStack containers, and how many are configured as !PlanetLab containers.
     110   * What operating system each !OpenStack container makes available for experimental VMs.
     111   * How many unbound VLANs are in the rack's available pool.
   * Whether the GRAM and !OpenFlow AMs trust the pgeni.gpolab.bbn.com slice authority. Note that the pgeni.gpolab.bbn.com slice authority is not used in this test; a local slice authority is used for the initial evaluation.
     113 * A site administrator uses available system data sources to determine the configuration of !OpenFlow resources according to VMOC and GRAM.
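
The sketch below illustrates pulling node counts out of a saved advertisement RSpec (for example, the output of `omni.py -o listresources`). The file name is a placeholder and the element names follow the public GENI v3 schema; GRAM-specific extensions may differ.

{{{
#!python
# Sketch: count advertised nodes and sliver types in a GENI v3 advertisement
# RSpec saved to disk.  File name and schema assumptions are noted above.
import xml.etree.ElementTree as ET
from collections import Counter

NS = {"rspec": "http://www.geni.net/resources/rspec/3"}
tree = ET.parse("gram-advertisement.xml")        # placeholder file name

nodes = tree.findall(".//rspec:node", NS)
types = Counter(st.get("name")
                for n in nodes
                for st in n.findall("rspec:sliver_type", NS))

print("advertised nodes:", len(nodes))
print("sliver types:", dict(types))
}}}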
     114
     115=== GR-MON-3: GENI Active Experiment Inspection Test ===
     116
     117This test inspects the state of the rack data plane and control networks when experiments are running, and verifies that a site administrator can find information about running experiments.
     118
     119==== Procedure ====
     120
     121 * An experimenter starts up experiments to ensure there is data to look at:
     122   * An experimenter runs an experiment containing at least one rack !OpenStack VM, and terminates it.
     123   * An experimenter runs an experiment containing at least one rack !OpenStack VM, and leaves it running.
     124 * A site administrator uses available system and experiment data sources to determine current experimental state, including:
     125   * How many VMs are running and which experimenters own them
     126   * How many physical hosts are in use by experiments, and which experimenters own them
     127   * How many VMs were terminated within the past day, and which experimenters owned them
     128   * What !OpenFlow controllers the data plane switch and the rack VMOC are communicating with
     129 * A site administrator examines the switches and other rack data sources, and determines:
     130   * What MAC addresses are currently visible on the data plane switch and what experiments do they belong to?
     131   * For some experiment which was terminated within the past day, what data plane and control MAC and IP addresses did the experiment use?
   * For some experimental data path which is actively sending traffic on the data plane switch, do changes in interface counters show approximately the expected amount of traffic into and out of the switch? (A counter-sampling sketch follows this list.)
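
A counter-sampling sketch for the last check is shown below. It reads Linux `/sys` counters on a VM server data plane interface; on the data plane switch itself the equivalent numbers would come from the switch CLI or SNMP. The interface name and interval are placeholders.

{{{
#!python
# Sketch: sample data plane interface counters twice and report the rate,
# to compare against the traffic an active experiment is expected to send.
import time

IFACE = "eth1"          # placeholder: data plane interface to watch
INTERVAL = 10           # seconds between samples

def read_counter(name):
    with open("/sys/class/net/%s/statistics/%s" % (IFACE, name)) as f:
        return int(f.read())

rx0, tx0 = read_counter("rx_bytes"), read_counter("tx_bytes")
time.sleep(INTERVAL)
rx1, tx1 = read_counter("rx_bytes"), read_counter("tx_bytes")

print("rx %.1f kB/s, tx %.1f kB/s" %
      ((rx1 - rx0) / 1024.0 / INTERVAL, (tx1 - tx0) / 1024.0 / INTERVAL))
}}}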
     133
     134
     135== Experimenter Acceptance Tests ==
     136
For the GRAM Acceptance Test evaluation some of the topologies normally validated in GENI Racks are not possible, because this effort has only one rack available. Each test case is described as originally intended, with additional details to show how the test case is modified for the initial GRAM evaluation. Some topologies will not be available, but the tests are still planned as intended in order to capture the availability of expected features.
     138
     139=== GR-EXP-1: Bare Metal Support Acceptance Test ===
     140
     141Bare metal nodes are exclusive dedicated physical nodes that are used throughout the experimenter test cases.  This section outlines features to be verified which are not explicitly validated in other scenarios:
     142
     143 1. Determine which nodes can be used as exclusive nodes.
     144 2. Obtain 2 licensed recent Microsoft OS images for physical nodes from the site (BBN).
     145 3. Reserve and boot 2 physical nodes using Microsoft image.
     146 4. Obtain a recent Linux OS image for physical nodes from the GRAM list.
     147 5. Reserve and boot a physical node using this Linux OS image.
     148 6. Release physical node resource.
     149 7. Modify Aggregate resource allocation for the rack to add 1 additional physical node.
     150
     151
''Evaluation Note:'' 1) No bare metal nodes are available. 2) MS Windows and custom Linux images can be loaded by the administrator only. 3) There is no custom image support for experimenters. 4) There is no way to modify a resource from VM to bare metal.
     153
     154
     155
     156=== GR-EXP-2: GRAM Single Site Acceptance Test ===
     157
This one-site test is run on the BBN GRAM rack and includes two experiments. Each experiment requests local compute resources, which generate bidirectional traffic over a Layer 2 data plane network connection. The goals of this test are to verify basic operations of VMs and data flows within one rack; to verify the ability to request a publicly routable IP address and public TCP/UDP port mapping for a control interface on a compute resource; and to verify the ability to add a customized image for the rack.
     159
     160
     161==== Test Topology ====
     162
     163This test uses this topology:
     164
     165[[Image(GRAMSingleSiteAcceptanceTest.jpg)]]
     166
     167''Note:''  The diagram shows the logical end-points for each experiment traffic exchange. The VMs may or may not be on different experiment nodes.
     168
     169For the initial evaluation there are no bare metal nodes, so the test case is modified to have only VMs. Here is the actual topology run:
     170
     171[[Image(GRAMSingleSiteAcceptanceTest-actual.jpg)]]
     172
''Evaluation Note:'' The test case is described as originally intended; the actual procedure will be captured as part of the test details available from the [https://superior.bbn.com/trac/bbn-rack/wiki/AcceptanceTestStatus Acceptance Test Status] page.
     174
     175==== Prerequisites ====
     176
     177This test has these prerequisites:
     178
 * GRAM makes available at least two Linux distributions and a FreeBSD image. If these are not available, the test will be run with available images.
 * Two GPO-customized Ubuntu image snapshots are available and have been manually uploaded by the rack administrator using available GRAM documentation. One Ubuntu image is for the VM and one Ubuntu image is for the physical node in this test. Physical nodes are not available, so VMs will be used.
 * Traffic generation tools may be part of the image or may be installed at experiment runtime.
     182 * Administrative accounts have been created for GPO staff on the BBN GRAM rack.
     183 * GENI Experimenter1 and Experimenter2 accounts exist at the GPO PG Clearinghouse.
     184 * If available, use baseline Monitoring to ensure that any problems are quickly identified.
     185
     186
     187==== Procedure ====
     188
     189Do the following:
     190
     191 1. As Experimenter1, request !ListResources from BBN GRAM.
     192 2. Review advertisement RSpec for a list of OS images which can be loaded, and identify available resources.
     193 3. Verify that the GPO Ubuntu customized image is available in the advertisement RSpec.
 4. Define a request RSpec for two VMs, each with a GPO Ubuntu image. Request a publicly routable IP address and public TCP/UDP port mapping for the control interface on each node.
     195 5. Create the first slice.
     196 6. Create a sliver in the first slice, using the RSpec defined in step 4.
     197 7. Log in to each of the systems, and send traffic to the other system sharing a VLAN.
 8. Using root privileges on one of the VMs, load a kernel module. If this is not supported on !OpenStack nodes, testing will proceed past this step.
 9. Run a netcat listener and bind to port XYZ on each of the VMs in the BBN GRAM rack (a socket-based sketch follows this procedure).
     200 10. Send traffic to port XYZ on each of the VMs in the GRAM rack over the control network from any commodity Internet host.
     201 11. As Experimenter2, request !ListResources from Site2 GRAM.
     202 12. Define a request RSpec for two physical nodes, both using the uploaded GPO Ubuntu images. If not available, VMs and other images will be used.
     203 13. Create the second slice.
     204 14. Create a sliver in the second slice, using the RSpec defined in step 12.
     205 15. Log in to each of the systems, and send traffic to the other system.
     206 16. Verify that experimenters 1 and 2 cannot use the control plane to access each other's resources (e.g. via unauthenticated SSH, shared writable file system mount)
     207 17. Review system statistics and VM isolation and network isolation on data plane.
     208 18. Verify that each VM has a distinct MAC address for that interface.
     209 19. Verify that VMs' MAC addresses are learned on the data plane switch.
     210 20. Stop traffic and delete slivers.
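
The sketch below stands in for the netcat listener and control plane traffic of steps 9-10: a simple TCP listener on each VM plus a client run from a commodity Internet host. The plan leaves port "XYZ" unspecified, so the port number here is an arbitrary placeholder.

{{{
#!python
# Sketch of the control plane reachability check: run "listener" on each VM,
# and "send <public-ip>" from an external host.  PORT stands in for the
# unspecified port "XYZ".
import socket
import sys

PORT = 12345   # placeholder for the unspecified port "XYZ"

def listen():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", PORT))
    srv.listen(1)
    conn, peer = srv.accept()
    print("connection from", peer, "data:", conn.recv(1024))
    conn.close()

def send(public_ip):
    with socket.create_connection((public_ip, PORT), timeout=10) as s:
        s.sendall(b"hello from the commodity Internet\n")

if __name__ == "__main__":
    listen() if sys.argv[1] == "listener" else send(sys.argv[2])
}}}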
     211
     212=== GR-EXP-3: GRAM Single Site 100 VM Test ===
     213
     214This one site test runs on the BBN GRAM rack and includes various scenarios to validate compute resource requirements for VMs. The goal of this test is not to validate the GRAM limits, but simply to verify that the GRAM rack can provide 100 VMs with its experiment nodes under various scenarios, including:
     215
     216 * Scenario 5: 100 Slices with 1 VM each
     217 * Scenario 4: 50 Slices with 2 VMs each
 * Scenario 3: 4 Slices with 25 VMs each
     219 * Scenario 2: 2 Slices with 50 VMs each
     220 * Scenario 1: 1 Slice with 100 VMs
     221
Scenarios will be executed in the order above. It is expected that one slice may not be able to support 100 VMs; if so, tests will be run to determine the maximum number of VMs allowed in one slice and in multiple slices.
     223
     224==== Test Topology ====
     225
     226This test uses this topology:
     227
     228[[Image(GRAMSingleSiteLimitsTest.jpg)]]
     229
     230==== Prerequisites ====
     231This test has these prerequisites:
     232
 * Traffic generation tools may be part of the image or installed at experiment runtime.
     234 * Administrative accounts exist for GPO staff on the BBN GRAM rack.
     235 * GENI Experimenter1 account exists at GPO PG Clearinghouse.
     236 * If available, baseline Monitoring is used to ensure that any problems are quickly identified.
     237
     238
     239==== Procedure ====
     240Do the following:
     241
     242 1. As Experimenter1, request !ListResources from BBN GRAM.
     243 2. Review !ListResources output, and identify available resources.
 3. Write the Scenario 1 RSpec that requests 100 VMs evenly distributed across the experiment nodes using the default image (a generator sketch follows this procedure).
     245 4. Create a slice.
     246 5. Create a sliver in the slice, using the RSpec defined in step 3.
     247 6. Log into several of the VMs, and send traffic to several other systems.
     248 7. Step up traffic rates to verify VMs continue to operate with realistic traffic loads.
     249 8. Review system statistics and VM isolation (does not include network isolation)
     250 9. Verify that several VMs running on the same experiment node have a distinct MAC address for their interface.
     251 10. Verify for several VMs running on the same experiment node, that their MAC addresses are learned on the data plane switch.
     252 11. Review monitoring statistics and check for resource status for CPU, disk, memory utilization, interface counters, uptime, process counts, and active user counts.
     253 12. Stop traffic and delete sliver.
     254 13. Re-execute the procedure described in steps 1-12 with changes required for Scenario 2 (2 Slices with 50 VMs each).
 14. Re-execute the procedure described in steps 1-12 with changes required for Scenario 3 (4 Slices with 25 VMs each).
     256 15. Re-execute the procedure described in steps 1-12 with changes required for Scenario 4 (50 Slices with 2 VMs each).
     257 16. Re-execute the procedure described in steps 1-12 with changes required for Scenario 5 (100 Slices with 1 VM each).
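
A generator sketch for the Scenario 1 request RSpec is shown below. Element and attribute names follow the public GENI v3 request schema; the sliver type is a placeholder that would be taken from the GRAM advertisement, and no claim is made that this exact RSpec is accepted by GRAM unmodified.

{{{
#!python
# Sketch: programmatically generate a request RSpec asking for 100 VMs.
# SLIVER_TYPE is a placeholder; real values come from the advertisement.
import xml.etree.ElementTree as ET

NS = "http://www.geni.net/resources/rspec/3"
NUM_VMS = 100
SLIVER_TYPE = "m1.small"     # placeholder sliver type / flavor

rspec = ET.Element("rspec", {"type": "request", "xmlns": NS})
for i in range(NUM_VMS):
    node = ET.SubElement(rspec, "node", {"client_id": "vm%03d" % i,
                                         "exclusive": "false"})
    ET.SubElement(node, "sliver_type", {"name": SLIVER_TYPE})

ET.ElementTree(rspec).write("scenario1-100vm-request.xml",
                            xml_declaration=True, encoding="utf-8")
}}}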
     258
     259
     260=== GR-EXP-4: GRAM Multi-site Acceptance Test ===
     261
This test normally includes two sites and two experiments. Only one rack is available, so the test case will be modified to run within one rack, but VMs will be requested on separate servers. Each of the compute resources will exchange traffic. In addition, the VMs in Experiment2 will use multiple data interfaces. Normally, all site-to-site experiments take place over a wide-area Layer 2 data plane network connection via Internet2 or NLR using VLANs allocated by the AM; that is not the case for the initial test evaluation, where all connections will be within the rack. The goal of this test is to verify basic operations of VMs and data flows between rack resources.
     263
     264==== Test Topology ====
     265
     266This test uses this topology:
     267
     268[[Image(GRAMMultiSiteAcceptanceTest.jpg)]]
     269
     270
     271For the initial evaluation there is only one rack, so the test case is modified to have VMs on different servers rather than different racks. Here is the actual topology run:
     272
     273[[Image(GRAMMultiSiteAcceptanceTest-actual.jpg)]]
     274
''Evaluation Note:'' The test case is described as originally intended; the actual procedure will be captured as part of the test details available from the [https://superior.bbn.com/trac/bbn-rack/wiki/AcceptanceTestStatus Acceptance Test Status] page.
     276
     277
     278==== Prerequisites ====
     279
     280This test has these prerequisites:
     281
     282 * If available, BBN GRAM connectivity statistics will be monitored.
     283 * Administrative accounts have been created for GPO staff at the BBN GRAM rack.
     284 * The VLANs used will be allocated by the rack AM.
     285 * If available, baseline Monitoring is used to ensure that any problems are quickly identified.
     286 * GRAM manages private address allocation for the endpoints in this test.
 * The normal network aggregate requirement for the availability of the ION AM does not apply to the current evaluation.
     288
     289==== Procedure ====
     290
     291Do the following:
     292
 1. As Experimenter1, request !ListResources from BBN GRAM.
 2. Request !ListResources from the second GRAM AM (a second GRAM AM does not exist, so this step is skipped).
 3. Review !ListResources output from both AMs (only one AM is used in this initial evaluation).
     296 4. Define a request RSpec for VMs at BBN GRAM to be on separate VM servers.
 5. Define a request RSpec for a VM at the remote GRAM and an unbound exclusive non-!OpenFlow VLAN to connect the 2 endpoints.
     298 6. Create the first slice.
     299 7. Create a sliver at each GRAM aggregate using the RSpecs defined above.
     300 8. Log in to each of the systems, and send traffic to the other system, leave traffic running.
 9. As Experimenter2, request !ListResources from BBN GRAM (skipping the second remote GRAM).
 10. Define a request RSpec for one VM and one bare metal node in the BBN GRAM rack. Each resource should have two logical interfaces and a 3rd VLAN for the local connection.
     303 11. Define a request RSpec to add two VMs at Site2 and two VLANs to connect the BBN GRAM to the Site2 GRAM. (Modified for one aggregate)
     304 12. Create a second slice.
     305 13. In the second slice, create a sliver at each GRAM aggregate using the RSpecs defined above. (Modified for one aggregate)
     306 14. Log in to each of the end-point systems, and send traffic to the other end-point system which shares the same VLAN.
     307 15. Verify traffic handling per experiment, VM isolation, and MAC address assignment.
 16. Construct and send a non-IP Ethernet packet over the data plane interface; the pingplus tool will be used (a raw-socket sketch follows this procedure).
     309 17. Review baseline monitoring statistics.
 18. Run the test for at least 1 hour.
     311 19. Review baseline monitoring statistics.
     312 20. Stop traffic and delete slivers.
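
The raw-socket sketch below stands in for the pingplus tool in step 16: it builds and sends a single non-IP Ethernet frame on the data plane interface. It requires Linux and root privileges; the interface name, destination MAC, and EtherType are placeholders (0x88B5 is the IEEE local experimental EtherType).

{{{
#!python
# Sketch: construct and send one non-IP Ethernet frame on the data plane
# interface (Linux only, needs root).  Values below are placeholders.
import socket

IFACE = "eth1"                                   # placeholder data plane interface
DST_MAC = bytes.fromhex("ffffffffffff")          # broadcast, for illustration
ETHERTYPE = (0x88B5).to_bytes(2, "big")          # local experimental EtherType

s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
s.bind((IFACE, 0))
src_mac = s.getsockname()[4][:6]                 # MAC of the bound interface
payload = b"GRAM data plane test frame"
s.send(DST_MAC + src_mac + ETHERTYPE + payload)
s.close()
}}}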
     313
     314
     315
     316=== GR-EXP-5: GRAM Network Resources Acceptance Test ===
     317
This is a three-site experiment where the only GRAM resources used are !OpenFlow network resources. All compute resources are outside the GRAM rack. The experiment will use the GRAM Aggregate Manager to request the rack data plane resources. The GRAM AM configures the GRAM site !OpenFlow switch. The goal of this test is to verify !OpenFlow operations and integration with meso-scale compute resources and other compute resources external to the GRAM rack.
     319
     320==== Test Topology ====
     321
     322[[Image(GRAMOFNetworkResourceAcceptanceTest.jpg)]]
     323
     324''Note:'' The NLR and Internet2 !OpenFlow VLANs are the [wiki:NetworkCore GENI Network Core] static VLANs.
     325
     326For the initial evaluation there is only one rack, so the test case is modified to have VMs on different servers rather than different racks. Here is the actual topology run:
     327
     328[[Image(GRAMOFNetworkResourceAcceptanceTest-actual.jpg)]]
     329
     330
     331==== Prerequisites ====
     332
     333 - A GPO site network is connected to the GRAM !OpenFlow switch. 
     334 - GRAM VMOC is running and can manage the GRAM !OpenFlow switch
     335 - An !OpenFlow controller is run by the experimenter and is accessible via DNS hostname (or IP address) and TCP port.
     336 - Two meso-scale remote sites make compute resources and !OpenFlow meso-scale resources available for this test.
     337 - GMOC data collection for the meso-scale and GRAM rack resources is functioning for the !OpenFlow and traffic measurements required in this test.
     338
     339''Evaluation Note:'' GMOC data collection is not available for the initial evaluation.  Remote meso-scale sites are not possible for the initial evaluation and will be replaced by local rack nodes.
     340
     341==== Procedure ====
     342The following operations are to be executed:
     343 1. As Experimenter1, Determine BBN compute resources and define RSpec.
     344 2. Determine remote meso-scale compute resources and define RSpec.  (Modified for one aggregate and no meso-scale)
     345 3. Define a request RSpec for !OpenFlow network resources at the BBN GRAM AM.
     346 4. Define a request RSpec for !OpenFlow network resources at the remote I2 Meso-scale site. (Rack nodes will replace remote meso-scale.)
     347 5. Define a request RSpec for the !OpenFlow Core resources
     348 6. Create the first slice
     349 7. Create a sliver for the BBN compute resources.
 8. Create a sliver at the I2 meso-scale site using VMOC at the site. (Modified for one aggregate and no meso-scale)
 9. Create a sliver at the BBN GRAM AM.
 10. Create a sliver for the !OpenFlow resources in the core. (Modified for one aggregate and no meso-scale)
 11. Create a sliver for the meso-scale compute resources. (Modified for one aggregate and no meso-scale)
 12. Log in to each of the compute resources and send traffic to the other end-point.
 13. Verify that traffic is delivered to target.
 14. Review baseline, GMOC, and meso-scale monitoring statistics. (Not possible in current version.)
 15. As Experimenter2, determine BBN compute resources and define RSpec.
 16. Determine remote meso-scale compute resources and define RSpec.
 17. Define a request RSpec for !OpenFlow network resources at the BBN GRAM AM.
 18. Define a request RSpec for !OpenFlow network resources at the remote NLR Meso-scale site. (Rack nodes will replace remote meso-scale.)
 19. Define a request RSpec for the !OpenFlow Core resources. (No core resources will be used in the initial evaluation.)
 20. Create the second slice.
 21. Create a sliver for the BBN compute resources.
 22. Create a sliver at the meso-scale site using FOAM at the site.
 23. Create a sliver at the BBN GRAM AM.
 24. Create a sliver for the !OpenFlow resources in the core.
 25. Create a sliver for the meso-scale compute resources.
 26. Log in to each of the compute resources and send traffic to the other endpoint.
 27. As Experimenter2, insert flowmods and send packet-outs only for traffic assigned to the slivers.
 28. Verify that traffic is delivered to target according to the flowmods settings.
 29. Review baseline, GMOC, and monitoring statistics. (Not possible in current version.)
 30. Stop traffic and delete slivers.
     373
     374=== GR-EXP-6: GRAM and Meso-scale Multi-site !OpenFlow Acceptance Test ===
     375
     376This test case normally includes three sites and three experiments, using resources in the BBN and Site2 GRAM racks as well as meso-scale resources, where the network resources are the core !OpenFlow-controlled VLANs. Each of the compute resources will exchange traffic with the others in its slice, over a wide-area Layer 2 data plane network connection, using Internet2 and NLR VLANs. In particular, the following slices will be set up for this test:
     377  * Slice 1: One GRAM VM at each of BBN and Site2.
     378  * Slice 2: Two GRAM VMs at Site2 and one VM and one bare metal node at BBN.
  * Slice 3: A GRAM VM at BBN, a PG node at BBN, and a meso-scale Wide-Area ProtoGENI (WAPG) node.
     380
     381The above topology will be requested within one rack.
     382
     383==== Test Topology ====
     384
     385This test uses this topology:
     386
     387[[Image(GRAMMultiSiteOpenFlowAcceptanceTest.jpg)]]
     388
     389Note: The two Site2 VMs in Experiment2 must be on the same experiment node. This is not the case for other experiments.
     390
     391
     392For the initial evaluation there is only one rack, so the test case is modified to have VMs on different servers rather than different racks. Here is the actual topology run:
     393
     394[[Image(GRAMMultiSiteOpenFlowAcceptanceTest-actual.jpg)]]
     395
''Evaluation Note:'' The test case is described as originally intended; the actual procedure will be captured as part of the test details available from the [https://superior.bbn.com/trac/bbn-rack/wiki/AcceptanceTestStatus Acceptance Test Status] page.
     397
     398
     399==== Prerequisites ====
     400This test has these prerequisites:
     401
     402 * Meso-scale sites are available for testing
     403 * BBN GRAM connectivity statistics are monitored at the GPO GRAM Monitoring site.
     404 * GENI Experimenter1, Experimenter2 and Experimenter3 accounts exist.
     405 * This test will be scheduled at a time when site contacts are available to address any problems.
     406 * Both GRAM aggregates can link to static VLANs.  (Modified for one aggregate)
     407 * Site's !OpenFlow VLAN is implemented and is known for this test. (Use VMOC allocated OF VLANs)
     408 * If available, baseline Monitoring is in place at each site, to ensure that any problems are quickly identified.
     409 * GMOC data collection for the meso-scale and GRAM rack resources is functioning for the !OpenFlow and traffic measurements required in this test.
     410 * An !OpenFlow controller is run by the experimenter and is accessible via DNS hostname (or IP address) and TCP port.
 * A PG !OpenFlow site is also added to the setup described in the diagram.
     412
     413
''Evaluation Note:'' There is no GMOC data collection and no PG site for the initial GRAM evaluation.
     415
     416==== Procedure ====
     417
     418Do the following:
     419
     420 1. As Experimenter1, request !ListResources from BBN GRAM, Site2 GRAM, and from VMOC at I2 and NLR Site.
     421 2. Review !ListResources output from all AMs.
     422 3. Define a request RSpec for a VM at the BBN GRAM.
     423 4. Define a request RSpec for a VM at the Site2 GRAM. (only one site used)
     424 5. Define request RSpecs for !OpenFlow resources from BBN FOAM to access GENI !OpenFlow core resources.  (only one site used)
     425 6. Define request RSpecs for !OpenFlow core resources at I2 FOAM (only one site used)
     426 7. Define request RSpecs for !OpenFlow core resources at NLR FOAM.  (only one site used)
     427 8. Create the first slice.
     428 9. Create a sliver in the first slice at each AM, using the RSpecs defined above.
     429 10. Log in to each of the systems, verify IP address assignment. Send traffic to the other system, leave traffic running.
     430 11. As Experimenter2, define a request RSpec for one VM and one physical node at BBN GRAM.
     431 12. Define a request RSpec for two VMs on the same experiment node at Site2 GRAM.  (only one site used)
 13. Define request RSpecs for !OpenFlow resources from BBN FOAM to access GENI !OpenFlow core resources. (only one site used)
     433 14. Define request RSpecs for !OpenFlow core resources at I2 FOAM. (only one site used)
     434 15. Define request RSpecs for !OpenFlow core resources at NLR FOAM.  (only one site used)
     435 16. Create a second slice.
     436 17. Create a sliver in the second slice at each AM, using the RSpecs defined above.
 18. Log in to each of the systems in the slice, and send traffic to the other systems; leave traffic running.
     438 19. As Experimenter3, request !ListResources from BBN GRAM, BBN meso-scale FOAM, and FOAM at Meso-scale Site (Internet2 Site BBN and NLR site).  (only one site used)
     439 20. Review !ListResources output from all AMs.
     440 21. Define a request RSpec for a VM at the BBN GRAM.
     441 22. Define a request RSpec for a compute resource at the BBN meso-scale site.  (only one site used)
     442 23. Define a request RSpec for a compute resource at a meso-scale site. (only one site used)
 24. Define request RSpecs for !OpenFlow resources to allow connection from !OpenFlow BBN GRAM to Meso-scale !OpenFlow sites (BBN and second site TBD) (I2 and NLR). (only one site used)
     444 25. If PG access to !OpenFlow is available, define a request RSpec for the PG !OpenFlow resource. (only one site used)
     445 26. Create a third slice.
 27. Create slivers that connect the Internet2 meso-scale !OpenFlow site to the BBN GRAM site and the BBN meso-scale site, and, if available, to the PG node.
 28. Log in to each of the compute resources in the slice, configure data plane network interfaces on any non-GRAM resources as necessary, and send traffic to the other systems; leave traffic running.
 29. Verify that all three experiments continue to run without impacting each other's traffic, and that data is exchanged over the path along which data is supposed to flow.
     449 30. Review baseline monitoring statistics and checks.
     450 31. As site administrator, identify all controllers that the BBN GRAM !OpenFlow switch is connected to.
     451 32. As Experimenter3, verify that traffic only flows on the network resources assigned to slivers as specified by the controller.
     452 33. Verify that no default controller, switch fail-open behavior, or other resource other than experimenters' controllers, can control how traffic flows on network resources assigned to experimenters' slivers.
     453 34. Set the hard and soft timeout of flowtable entries
 35. Get switch statistics and flowtable entries for slivers from the !OpenFlow switch (see the sketch after this procedure).
     455 36. Get layer 2 topology information about slivers in each slice.
     456 37. Install flows that match only on layer 2 fields, and confirm whether the matching is done in hardware.
     457 38. If supported, install flows that match only on layer 3 fields, and confirm whether the matching is done in hardware.
     458 39. Run test for at least 4 hours.
     459 40. Review monitoring statistics and checks as above.
     460 41. Delete slivers.
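
For step 35, the sketch below shows one way to pull flow table entries and port statistics, assuming the data plane switch is an Open vSwitch instance or exposes an `ovs-ofctl`-compatible management interface; a hardware switch would instead use its own CLI or an !OpenFlow statistics request from the experimenter's controller. The bridge name is a placeholder.

{{{
#!python
# Sketch: dump flow table entries and port statistics from the data plane
# switch, assuming an ovs-ofctl-compatible interface (see note above).
import subprocess

BRIDGE = "br-dataplane"   # placeholder bridge/switch name

for sub in ("dump-flows", "dump-ports"):
    out = subprocess.run(["ovs-ofctl", sub, BRIDGE],
                         capture_output=True, text=True).stdout
    print("###", sub)
    print(out)
}}}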
     461
     462Documentation:
     463 1. Verify access to documentation about which !OpenFlow actions can be performed in hardware.
     464
     465=== GR-EXP-7: Click Router Experiment Acceptance Test ===
     466
     467This test case uses a [http://read.cs.ucla.edu/click/click Click] modular router experiment with GRAM VM nodes. The scenario uses 2 VMs as hosts and 4 VMs as Click Routers and is based on the following [http://groups.geni.net/geni/wiki/ClickExampleExperiment Click example] experiment, although unlike the example, this test case uses VMs and it runs the Click router module in user space. 
     468
     469==== Test Topology ====
     470
     471This test uses this topology:
     472
     473[[Image(GRAMClickRouterAcceptanceTest.jpg)]]
     474
     475Note: Two VMs will be requested on the same physical worker node at each rack site for the user-level Click Router .
     476
     477For the initial evaluation there is only one rack, so the test case is modified to have VMs on different servers rather than different racks. Here is the actual topology run:
     478
     479[[Image(GRAMClickRouterAcceptanceTest-actual.jpg)]]
     480
''Evaluation Note:'' The test case is described as originally intended; the actual procedure will be captured as part of the test details available from the [https://superior.bbn.com/trac/bbn-rack/wiki/AcceptanceTestStatus Acceptance Test Status] page. The test case will be run within one rack.
     482
     483
     484==== Prerequisites ====
     485This test has these prerequisites:
     486 
     487 * TBD
     488
     489==== Procedure ====
     490
     491Do the following:
     492
 1. As Experimenter1, request !ListResources from BBN GRAM
 2. Review !ListResources
 3. Define a request RSpec for six VMs at BBN GRAM
 4. Create slice
 5. Create a sliver
 6. Install Click router
 7. Determine Click router settings
 8. Run the user-level Click router
 9. Log in to Host1 and send traffic to Host2
 10. Review Click logs on each Click router
 11. Delete slivers
     504
     505== Additional Administration Acceptance Tests ==
     506
     507These tests will be performed as needed after the administration baseline tests complete successfully.  For example, the Software Update Test will be performed at least once when the rack team provides new software for testing.  We expect these tests to be interspersed with other tests in this plan at times that are agreeable to the GPO and the participants, not just run in a block at the end of testing.  The goal of these tests is to verify that sites have adequate documentation, procedures, and tools to satisfy all GENI site requirements.
     508
     509=== GR-ADM-3: Full Rack Reboot Test ===
     510
     511In this test, a full rack reboot is performed as a drill of a procedure which a site administrator may need to perform for site maintenance.
     512
     513''Note: this test must be run using the BBN rack because it requires physical access.''
     514
     515''Evaluation note:'' Can this be executed for the BBN GRAM rack?
     516
     517==== Procedure ====
     518
     519 1. Review relevant rack documentation about shutdown options and make a plan for the order in which to shutdown each component.
     520 2. Cleanly shutdown and/or hard-power-off all devices in the rack, and verify that everything in the rack is powered down.
     521 3. Power on all devices, bring all logical components back online, and use monitoring and comprehensive health tests to verify that the rack is healthy again.
     522
     523=== GR-ADM-4: Emergency Stop Test ===
     524
     525In this test, an Emergency Stop drill is performed on a sliver in the rack.
     526
     527==== Prerequisites ====
     528
     529 * GMOC's updated Emergency Stop procedure is approved and published on a public wiki.
 * GRAM's procedure for performing a shutdown operation on any type of sliver in a GRAM rack is published on a public wiki or on a protected wiki that all GRAM site administrators (including GPO) can access.
     531 * An Emergency Stop test is scheduled at a convenient time for all participants and documented in GMOC ticket(s).
     532 * A test experiment is running that involves a slice with connections to at least one GRAM rack compute resource.
     533
     534''Evaluation note:'' Emergency stop is not expected to be supported for the initial evaluation.
     535
     536
     537==== Procedure ====
     538
     539 * A site administrator reviews the Emergency Stop and sliver shutdown procedures, and verifies that these two documents combined fully document the campus side of the Emergency Stop procedure.
     540 * A second administrator (or the GPO) submits an Emergency Stop request to GMOC, referencing activity from a public IP address assigned to a compute sliver in the rack that is part of the test experiment.
     541 * GMOC and the first site administrator perform an Emergency Stop drill in which the site administrator successfully shuts down the sliver in coordination with GMOC.
     542 * GMOC completes the Emergency Stop workflow, including updating/closing GMOC tickets.
     543
     544=== GR-ADM-5: Software Update Test ===
     545
     546In this test, we update software on the rack as a test of the software update procedure.
     547
     548==== Prerequisites ====
     549
     550Minor updates of system packages for all infrastructure OSes, GRAM local AM software, and VMOC are available to be installed on the rack.  This test may need to be scheduled to take advantage of a time when these updates are available.
     551
     552==== Procedure ====
     553
     554 * A BBN site administrator reviews the procedure for performing software updates of GENI and non-GENI software on the rack.  If there is a procedure for updating any version tracking documentation (e.g. a wiki page) or checking any version tracking tools, the administrator reviews that as well.
     555 * Following that procedure, the administrator performs minor software updates on rack components, including as many as possible of the following (depending on availability of updates):
   * At least one update of a standard (non-GENI) package on each of the control and compute nodes. (GPO will look for a package which has a security vulnerability listed in [http://www.freebsd.org/ports/portaudit/ the portaudit database].)
     557   * At least one update of a standard (non-GENI) system package on the VMOC VM.
     558   * At least one update of a standard (non-GENI) system package on the VM server host OS.
     559   * An update of GRAM local AM software on control node.
     560   * An update of VMOC software
     561 * The admin confirms that the software updates completed successfully
     562 * The admin updates any appropriate version tracking documentation or runs appropriate tool checks indicated by the version tracking procedure.
     563
     564=== GR-ADM-6: Control Network Disconnection Test ===
     565
     566In this test, we disconnect parts of the rack control network or its dependencies to test partial rack functionality in an outage situation.
     567
     568''Note: this test must be performed on the BBN rack because GPO will modify configuration on the control plane router and switch upstream from the rack in order to perform the test.''
     569
     570==== Procedure ====
     571
     572 * Simulate an outage of ???? by inserting a firewall rule on the BBN router blocking the rack from reaching it.  Verify that an administrator can still access the rack, that rack monitoring to GMOC continues through the outage, and that some experimenter operations still succeed.
 * Simulate an outage of each of the rack server host and the control plane switch by disabling their respective interfaces on BBN's control network switch. Verify that GPO, GRAM, and GMOC monitoring all see the outage.
     574
''Evaluation Note:'' The simulated outage does not apply to the initial evaluation, as there will be no monitoring by GMOC. Also, there is no GRAM SNMP polling.
     576
     577
     578=== GR-ADM-7: Documentation Review Test ===
     579
Although this is not a single test ''per se'', this section lists required documents that the rack teams will write. Draft documents should be delivered prior to testing of the functional areas to which they apply. Final documents must be delivered to be made available for non-developer sites. Final documents will be public, unless there is some specific reason a particular document cannot be public (e.g. a security concern from a GENI rack site).
     581
     582==== Procedure ====
     583
     584Review each required document listed below, and verify that:
     585 * The document has been provided in a public location (e.g. the GENI wiki, or any other public website)
     586 * The document contains the required information.
     587 * The documented information appears to be accurate.
     588
     589''Note: this tests only the documentation, not the rack behavior which is documented.  Rack behavior related to any or all of these documents may be tested elsewhere in this plan.''
     590
     591Documents to review:
     592 * Pre-installation document that lists specific minimum requirements for all site-provided services for potential rack sites (e.g. space, number and type of power plugs, number and type of power circuits, cooling load, public addresses, NLR or Internet2 layer2 connections, etc.).  This document should also list all standard expected rack interfaces (e.g. 10GBE links to at least one research network).
     593 * Summary GENI rack parts list, including vendor part numbers for "standard" equipment intended for all sites (e.g. a VM server) and per-site equipment options (e.g. transceivers, PDUs etc.), if any.  This document should also indicate approximately how much headroom, if any, remains in the standard rack PDUs' power budget to support other equipment that sites may add to the rack.
     594 * Procedure for identifying the software versions and system file configurations running on a rack, and how to get information about recent changes to the rack software and configuration.
     595 * Explanation of how and when software and OS updates can be performed on a rack, including plans for notification and update if important security vulnerabilities in rack software are discovered.
     596 * Description of the GENI software running on a standard rack, and explanation of how to get access to the source code of each piece of standard GENI software.
     597 * Description of all the GENI experimental resources within the rack, and what policy options exist for each, including: how to configure rack nodes as bare metal vs. VM server, what options exist for configuring automated approval of compute and network resource requests and how to set them, how to configure rack aggregates to trust additional GENI slice authorities, and whether it is possible to trust local users within the rack.
     598 * Description of the expected state of all the GENI experimental resources in the rack, including how to determine the state of an experimental resource and what state is expected for an unallocated bare metal node.
     599 * Procedure for creating new site administrator and operator accounts.
     600 * Procedure for changing IP addresses for all rack components.
     601 * Procedure for cleanly shutting down an entire rack in case of a scheduled site outage.
     602 * Procedure for performing a shutdown operation on any type of sliver on a rack, in support of an Emergency Stop request.
     603 * Procedure for performing comprehensive health checks for a rack (or, if those health checks are being run automatically, how to view the current/recent results).
     604 * Technical plan for handing off primary rack operations to site operators at all sites.
     605 * Per-site documentation.  This documentation should be prepared before sites are installed and kept updated after installation to reflect any changes or upgrades after delivery.  Text, network diagrams, wiring diagrams and labeled photos are all acceptable for site documents.  Per-site documentation should include the following items for each site:
     606   1.  Part numbers and quantities of PDUs, with NEMA input power connector types, and an inventory of which equipment connects to which PDU.
     607   2.  Physical network interfaces for each control and data plane port that connects to the site's existing network(s), including type, part numbers, maximum speed etc. (eg. 10-GB-SR fiber)
   3.  Public IP addresses allocated to the rack, including: number of distinct IP ranges and size of each range, hostname-to-IP mappings which should be placed in site DNS, whether the last-hop routers for public IP ranges sit within the rack or elsewhere on the site, and what firewall configuration is desired for the control network.
     609   4.  Data plane network connectivity and procedures for each rack, including core backbone connectivity and documentation, switch configuration options to set for compatibility with the L2 core, and the site and rack procedures for connecting non-rack-controlled VLANs and resources to the rack data plane.  A network diagram is highly recommended (See existing !OpenFlow meso-scale network diagrams on the GENI wiki for examples.)
     610
     611== Additional Monitoring Acceptance Tests ==
     612
     613These tests will be performed as needed after the monitoring baseline tests complete successfully.  For example, the GMOC data collection test will be performed during the GRAM Network Resources Acceptance test, where we already use the GMOC for meso-scale !OpenFlow monitoring.  We expect these tests to be interspersed with other tests in this plan at times that are agreeable to the GPO and the participants, not just run in a block at the end of testing.  The goal of these tests is to verify that sites have adequate tools to view and share GENI rack data that satisfies all GENI monitoring requirements.
     614
     615=== GR-MON-4: Infrastructure Device Performance Test ===
     616
     617This test verifies that the rack head node performs well enough to run all the services it needs to run.
     618
     619==== Procedure ====
     620
     621While experiments involving GRAM-controlled !OpenFlow slivers and compute slivers are running:
     622 * View !OpenFlow control monitoring at GMOC and verify that no monitoring data is missing
     623 * View VLAN 1750 data plane monitoring, which pings the rack's interface on VLAN 1750, and verify that packets are not being dropped
 * Verify that the CPU idle percentage is nonzero on both the server host and the !OpenFlow Controller VMs.
     625
''Evaluation note:'' There will be no GMOC monitoring, but system data will be gathered for the infrastructure hosts.
     627
     628
     629=== GR-MON-5: GMOC Data Collection Test ===
     630
     631This test verifies the rack's submission of monitoring data to GMOC.
     632
''Evaluation note:'' There will be no GMOC monitoring.
     634
     635==== Procedure ====
     636
     637View the dataset collected at GMOC for the BBN and Site2 GRAM racks.  For each piece of required data, attempt to verify that:
     638 * The data is being collected and accepted by GMOC and can be viewed at gmoc-db.grnoc.iu.edu
     639 * The data's "site" tag indicates that it is being reported for the GRAM rack located at the `gpolab` or GRAM site2 site (as appropriate for that rack).
     640 * The data has been reported within the past 10 minutes.
     641 * For each piece of data, either verify that it is being collected at least once a minute, or verify that it requires more complicated processing than a simple file read to collect, and thus can be collected less often.
     642
     643Verify that the following pieces of data are being reported:
 * Is each of the rack GRAM and VMOC AMs reachable via the GENI AM API right now? (A probe sketch follows this list.)
     645 * Is each compute or unbound VLAN resource at each rack AM online?  Is it available or in use?
     646 * Sliver count and percentage of rack compute and unbound VLAN resources in use.
     647 * Identities of current slivers on each rack AM, including creation time for each.
     648 * Per-sliver interface counters for compute and VLAN resources (where these values can be easily collected).
     649 * Is the rack data plane switch online?
     650 * Interface counters and VLAN memberships for each rack data plane switch interface
     651 * MAC address table contents for shared VLANs which appear on rack data plane switches
     652 * Is each rack experimental node online?
     653 * For each rack experimental node configured as an OpenStack VM server, overall CPU, disk, and memory utilization for the host, current VM count and total VM capacity of the host.
     654 * For each rack experimental node configured as an OpenStack VM server, interface counters for each data plane interface.
     655 * Results of at least one end-to-end health check which simulates an experimenter reserving and using at least one resource in the rack.
     656
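A reachability probe for the first item above is sketched here: it calls the GENI AM API GetVersion method on each rack AM over XML-RPC. The AM URLs and certificate/key paths are placeholders, and disabling server certificate verification is an assumption made for self-signed rack certificates.

{{{
#!python
# Sketch: probe each rack AM with the GENI AM API GetVersion call.
# URLs, certificate, and key are placeholders.
import ssl
import xmlrpc.client

AMS = {"GRAM": "https://gram-am.example.net:5001",    # placeholder
       "VMOC": "https://vmoc-am.example.net:3626"}    # placeholder
CERT, KEY = "experimenter-cert.pem", "experimenter-key.pem"   # placeholders

ctx = ssl.create_default_context()
ctx.load_cert_chain(certfile=CERT, keyfile=KEY)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE       # assumption: self-signed AM certificates

for name, url in AMS.items():
    try:
        version = xmlrpc.client.ServerProxy(url, context=ctx).GetVersion()
        print(name, "reachable, geni_api =", version.get("geni_api"))
    except Exception as exc:
        print(name, "NOT reachable:", exc)
}}}
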
     657Verify that per-rack or per-aggregate summaries are collected of the count of distinct users who have been active on the rack, either by providing raw sliver data containing sliver users to GMOC, or by collecting data locally and producing trending summaries on demand.
     658
     659= Test Methodology and Reporting =
     660
     661== Test Case Execution ==
     662 1. All test procedure steps will be executed until there is a blocking issue.
     663 2. If a blocking issue is found for a test case, testing will be stopped for that test case.
     664 3. Testing focus will shift to another test case while waiting for a solution to a blocking issue.
     665 4. If a non-blocking issue is found, testing will continue toward completion of the procedure.
 5. When a software resolution or workaround is available for a blocking issue, the test impacted by the issue is re-executed until it can be completed successfully.
 6. Supporting documentation will be used whenever available.
 7. Questions that were not answered by existing documentation are to be gathered during the acceptance testing and published, as we did for the rack design.
     669
     670== Issue Tracking ==
     671 1. All issues discovered in acceptance testing regardless of priority are to be tracked in a bug tracking system.
     672 2. The bug tracking system to be used is the [https://superior.bbn.com/trac/bbn-rack/query?status=accepted&status=assigned&status=new&status=reopened&component=test  GRAM trac] using the "test" component.
 3. All types of issues encountered (documentation errors, software bugs, missing features, missing documentation, etc.) are to be tracked.
     674 4. All unresolved issues will be reviewed and published at the end of the acceptance test as part of the acceptance test report.
     675
     676== Status Updates and Reporting ==
 1. A periodic status update will be generated as the acceptance test plan is being executed, or as needed.
 2. A periodic (once per day) status update will be posted to the rack team mail list (gram-dev@bbn.com).
 3. Upon acceptance test completion, all findings and unresolved issues will be captured in an [https://superior.bbn.com/trac/bbn-rack/wiki/AcceptanceTestReport Acceptance Test Report].
 4. Supporting configuration and RSpecs used in testing will be part of the acceptance test report or checked into a specified repository.
     681
     682== Test Case Naming ==
     683
The test cases in this plan follow a naming convention of the form ''GR-XXX-Y'', where ''GR'' is GRAM and ''XXX'' may be any of the following: ''ADM'' for Administrative, ''EXP'' for Experimenter, or ''MON'' for Monitoring. The final component of the test case name is ''Y'', the test case number.
     685
     686= Requirements Validation =
     687
This acceptance test plan verifies Integration (C), Monitoring (D), Experimenter (G) and Local Aggregate (F) requirements. As part of the test planning process, the GPO Infrastructure group mapped each of the [wiki:GeniRacks GENI Racks Requirements] to a set of validation criteria. For a detailed look at the validation criteria see the GENI Racks [wiki:GENIRacksProjects/AcceptanceTestsCriteria Acceptance Criteria] page.
     689
     690This plan does not validate any Software (B) requirements, as they are validated by the GPO Software team's [http://trac.gpolab.bbn.com/gcf/wiki/AmApiAcceptanceTests GENI AM API Acceptance tests] suite.
     691
     692Some requirements are not verified in this test plan:
     693 * C.2.a "Support at least 100 simultaneous active (e.g. actually passing data) layer 2 Ethernet VLAN connections to the rack. For this purpose, VLAN paths must terminate on separate rack VMs, not on the rack switch."
 * Production Aggregate Requirements (E)
     695
     696= Glossary =
     697
Following is a glossary of terminology used in this plan; for additional terminology definitions see the [wiki:GeniGlossary GENI Glossary] page.
     699
     700  * People:
     701   * Experimenter: A person accessing the rack using a GENI credential and the GENI AM API.
     702   * Administrator: A person who has fully-privileged access to, and responsibility for, the rack infrastructure (servers, network devices, etc) at a given location.
     703   * Operator: A person who has unprivileged/partially-privileged access to the rack infrastructure at a given location, and has responsibility for one or a few particular functions.
     704
     705 * Baseline Monitoring: Set of monitoring functions which show aggregate health for VMs and switches and their interface status, traffic counts for interfaces and VLANs. Includes resource availability and utilization.
     706
     707 * Experimental compute resources:
     708   * VM: An experimental compute resource which is a virtual machine located on a physical machine in the rack.
     709   * Bare metal Node: An experimental exclusive compute resource which is a physical machine usable by experimenters without virtualization.
     710   * Compute Resource: Either a VM or a bare metal node. 
     711 * Experimental compute resource components:
     712   * logical interface: A network interface seen by a compute resource (e.g. a distinct listing in `ifconfig` output).  May be provided by a physical interface, or by virtualization of an interface.
     713
     714 * Experimental network resources:
     715   * VLAN: A data plane VLAN, which may or may not be !OpenFlow-controlled.
     716   * Bound VLAN: A VLAN which an experimenter requests by specifying the desired VLAN ID.  (If the aggregate is unable to provide access to that numbered VLAN or to another VLAN which is bridged to the numbered VLAN, the experimenter's request will fail.)
     717   * Unbound VLAN: A VLAN which an experimenter requests without specifying a VLAN ID.  (The aggregate may provide any available VLAN to the experimenter.)
     718
     719   * Exclusive VLAN: A VLAN which is provided for the exclusive use of one experimenter.
     720   * Shared VLAN: A VLAN which is shared among multiple experimenters.
     721
     722We make the following assumptions about experimental network resources:
     723 * Unbound VLANs are always exclusive.
     724 * Bound VLANs may be either exclusive or shared, and this is determined on a per-VLAN basis and configured by operators.
     725 * Shared VLANs are always !OpenFlow-controlled, with !OpenFlow providing the slicing between experimenters who have access to the VLAN.
     726 * If a VLAN provides an end-to-end path between multiple aggregates or organizations, it is considered "shared" if it is shared anywhere along its length --- even if only one experimenter can access the VLAN at some particular aggregate or organization (for whatever reason), a VLAN which is shared anywhere along its L2 path is called "shared".
     727 
     728{{{
     729#!html
     730Email <a href="mailto:help@geni.net"> help@geni.net </a> for GENI support or email <a href="mailto:luisa.nevers@bbn.com">me</a> with feedback on this page!
     731}}}