Changes between Version 2 and Version 3 of GENIRacksHome/AcceptanceTests/ExogeniAcceptanceTestsPlan


Timestamp:
03/14/12 08:44:09
Author:
lnevers@bbn.com
Comment:

--

  • GENIRacksHome/AcceptanceTests/ExogeniAcceptanceTestsPlan

    1 
    2 
    3 The ExoGENI System Acceptance Test plan is not yet defined, but will be placed here when available...
     1[[PageOutline(1-2)]]
     2
     3= ExoGENI Acceptance Test Plan =
     4
     5This page captures the GENI Racks Acceptance Test Plan to be executed for the ExoGENI project. Tests in this plan are based on the [wiki:GENIRacksProjects/ExogeniUseCases ExoGENI Use Cases] and each test provides a mapping to the [http://groups.geni.net/geni/wiki/GeniRacks GENI Rack Requirements] that it validates.  This plan defines tests that cover the following types of requirements: Integration, Monitoring and Experimenter requirements.  The GPO Software team has implemented the [http://groups.geni.net/syseng/wiki/SwAcceptanceAMAPIv1 GENI AM API Acceptance tests] suite to verify Software requirements which are not covered in this plan.  The GPO Infrastructure team is responsible for performing the tests described in this page.  Some tests on this page are placeholders, and will be more fully defined soon.  The tests found here are all high-priority functional tests.  The ExoGENI racks support more functions than the priority ones in these system tests.
     6
     7== Assumptions ==
     8
     9The following environment prerequisites are assumed for all tests described in this plan:
     10
     11 * GPO ProtoGENI credentials from https://pgeni.gpolab.bbn.com are used for all tests.
     12 * GPO ProtoGENI is the Slice Authority for all tests.
     13 * Resources for each test will be requested from the local broker whenever possible.
     14 * The ORCA Actor Registry is functional, and allows ExoGENI racks to communicate.
     15 * ORCA RSpec/NDL conversion service is available to convert GENI requests.
     16
     17The following runtime assumptions are made for all test cases:
     18
     19 * Compute resources are VMs unless otherwise stated.
     20 * All Aggregate Manager (AM) requests are made via the omni command line tool, which is part of the GENI Control Framework (GCF) package.
     21 * In all scenarios, one experiment is always equal to one slice.
     22 * FOAM is used as an aggregate manager in each of the !OpenFlow test cases.
     23 * NLR is used for Layer 2 Static VLANs, although, if available, it is preferable to run this test over Internet2.
     24
     25Test Traffic Profile:
     26
     27 * Experiment traffic includes UDP and TCP data streams at low rates to ensure end-to-end delivery.
     28 * Traffic exchanged is used to verify that the appropriate channel is used for the test described.
     29 * Performance is not a goal of these acceptance tests.
     30 
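The traffic profile above can be exercised with a small verification helper. The following Python sketch is illustrative only (the echo setup, hosts, and ports are assumptions, not part of the plan's tooling); it sends a low-rate TCP payload and a UDP datagram and confirms end-to-end delivery rather than measuring performance:

```python
import socket
import threading

def tcp_echo_once(listen_sock):
    """Accept one TCP connection and echo the received bytes back."""
    conn, _ = listen_sock.accept()
    with conn:
        conn.sendall(conn.recv(4096))

def verify_tcp_delivery(host, port, payload):
    """Send one low-rate TCP payload and confirm end-to-end delivery."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(payload)
        return s.recv(4096) == payload

def verify_udp_delivery(host, port, payload):
    """Send one UDP datagram to an echo endpoint and confirm receipt."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(payload, (host, port))
        data, _ = s.recvfrom(4096)
        return data == payload
```

In the acceptance tests the endpoints would be the dataplane addresses of the reserved compute resources; here a local echo server stands in for the far end.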
     31= Acceptance Tests Descriptions =
     32
     33This section describes each acceptance test by defining its goals and topology and outlining its procedure.  Test cases are listed by priority in the sections below; the cases that verify the largest number of requirement criteria are typically listed at a higher priority. The baseline tests are executed first to verify that monitoring and administrative functions are available; this allows the execution of the use-case-based test topologies. Additional monitoring and administrative tests, described in later sections, will be run before the completion of the acceptance test effort.
     34
     35== Administration Baseline Acceptance Test ==
     36
     37Administrative Acceptance tests will verify support of administrative management tasks and focus on verifying priority functions for each of the rack components. The set of administrative features described in this section will be verified initially. Additional [wiki:LuisaSandbox/ExogeniAcceptanceTestPlan#AdditionalAdministrationAcceptanceTests administrative] tests are described in a later section and will be executed before the acceptance test completion.
     38
     39=== Prerequisite ===
     40 - Administrative accounts are available for GPO staff on the GPO ExoGENI rack.
     41 - Procedures or tools are available for changing IP addresses for all addressable rack components.
     42
     43=== Procedure ===
     44
     45Administrative tasks:
     46 1. As Administrator, execute root/admin commands and create accounts for supported user types (admin, oper, user).
     47 2. As Operator (just created), execute root/admin commands and create two user accounts.
     48 3. As User (just created), make sure that the following are not allowed: admin/operator commands and administrative tools.
     49 4. As Administrator, identify all software versions and view all configurations running on all GENI rack components.
     50 5. Verify that ssh password access to rack components is not allowed, nor access via any other unencrypted login protocol.
     51 6. Verify ssh public-key access to all rack components that support ssh.
     52 7. As Administrator, delete the user and operator accounts created in the steps above.
     53 8. Verify site administrator has local and remote console access to each rack component.
     54 9. Verify remote power cycling for each rack component.
     55 10. Verify ability to determine MAC address for all physical host interfaces, all network device interfaces, all active experimental VMs, and all recently-terminated experimental VMs.
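Step 10 can be spot-checked on Linux-based rack hosts by parsing `ip link show` output, as in the sketch below. The sample output, interface names, and MAC values are hypothetical; switches, PDUs, and hypervisors each report MACs their own way, so this is only one illustrative path:

```python
import re

# Hypothetical `ip link show` output from a rack host; in step 10 this would
# come from the actual component (host shell, switch CLI, or hypervisor API).
SAMPLE_IP_LINK = """\
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
    link/ether 00:1b:21:3a:4f:10 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP
    link/ether 00:1b:21:3a:4f:11 brd ff:ff:ff:ff:ff:ff
"""

def interface_macs(ip_link_output):
    """Map interface name -> MAC address from `ip link show` text."""
    macs = {}
    current = None
    for line in ip_link_output.splitlines():
        m = re.match(r"\d+:\s+([^:@]+)[:@]", line)
        if m:
            current = m.group(1)
        m = re.search(r"link/ether\s+([0-9a-f:]{17})", line)
        if m and current:
            macs[current] = m.group(1)
    return macs
```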
     56
     57Validate bare-metal support using available procedures:
     58 1. Use available administrative function to determine which nodes can be used as bare-metal.
     59 2. Verify ability to reserve and boot a bare-metal node running Microsoft Windows.
     60 3. Release bare-metal resource.
     61 4. View a list of OS images which can be loaded on bare-metal nodes.
     62 5. Modify resource allocation to add one additional bare-metal node (2 total) for use in the acceptance tests.
     63
     64Validate Baseline Policy:
     65 1. Verify ability for site administrator to set a policy for a Network Resource (not !OpenFlow) to allow anyone to access the resource.
     66 2. Verify ability for site administrator to set a policy for a Network Resource (not !OpenFlow) to ban a user from accessing resources.
     67
     68Validate Logging features:
     69 1. Review log and reports generated during user creation tests to verify user counts reflect expectations.
     70 2. Verify log rolling is available.
     71 3. Verify remote logging is available.
     72 4. Verify slice and sliver events (AM transactions) can be part of remote logging.
     73 5. Verify remote logging is possible before rolled-over logs are deleted.
     74 6. Verify logging shows all account types (admin, oper, and user) logging into the rack.
     75 7. Verify logging captures sudo access.
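The checks that logging shows account logins and captures sudo access can be automated by scanning the collected logs. The sketch below assumes syslog-style `sshd` and `sudo` entries; the sample lines, hostnames, and account names are hypothetical, and the real log format depends on the rack's logging configuration:

```python
import re

# Hypothetical auth log excerpt; the actual source would be the rack's
# local or remote syslog collection (e.g. a forwarded auth.log).
SAMPLE_AUTH_LOG = """\
Mar 14 08:01:02 bbn-hn sshd[2100]: Accepted publickey for admin1 from 192.1.242.3 port 51514 ssh2
Mar 14 08:05:10 bbn-hn sshd[2188]: Accepted publickey for oper1 from 192.1.242.4 port 51620 ssh2
Mar 14 08:07:33 bbn-hn sudo:    oper1 : TTY=pts/0 ; PWD=/home/oper1 ; COMMAND=/sbin/reboot
Mar 14 08:09:41 bbn-hn sshd[2301]: Accepted publickey for user1 from 192.1.242.5 port 51702 ssh2
"""

def logins_and_sudo(log_text):
    """Return (accounts that logged in, accounts that used sudo)."""
    logins = set(re.findall(r"Accepted \S+ for (\S+)", log_text))
    sudo = set(re.findall(r"sudo:\s+(\S+) :", log_text))
    return logins, sudo
```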
     76
     77Validate Security:
     78 1. Monitor for software updates and patches that occur as part of the testing by reviewing logging and configuration management.
     79 2. Record turnaround time for !OpenFlow, Compute Resources and Network Resources updates.
     80 3. Monitor vulnerability alerts from Rack Team.
     81
     82== Monitoring Baseline Acceptance Test ==
     83The Monitoring Baseline acceptance test covers the minimum monitoring required to observe rack components. Baseline monitoring is executed within each of the tests that are based on use cases. Additional [wiki:LuisaSandbox/ExogeniAcceptanceTestPlan#AdditionalMonitoringAcceptanceTests monitoring] tests are defined in a later section to complete the validation in this section.
     84
     85
     86=== Prerequisite ===
     87 - Access to Nagios statistics in the rack is available to GPO staff.
     88
     89=== Procedure ===
     90Verify the following monitoring features are available by accessing Nagios or whichever source is gathering statistics.  Note that this is an initial approach; as evaluation progresses, it will transition to reviewing this data via monitoring systems at the GPO and/or at the GMOC.
     91
     92 1. Aggregate availability via the AM API.
     93 2. State of available resources (in use, available, down/unknown).
     94 3. Overall sliver count.
     95 4. Resource utilization level on the aggregate.
     96 5. Status and utilization of each sliver active on the aggregate (minimum: sliver uptime, sliver resource utilization, performance data as available).
     97 6. Network device status, by reviewing device liveness.
     98 7. Interface traffic counters (including types of traffic, e.g. broadcast/multicast).
     99 8. List of VLANs defined on interfaces, MAC address tables on data plane VLANs.
     100 9. At each available ExoGENI site, access the static (always-available) interface to verify the Meso-scale connection.
     101
     102== ExoGENI Single Site Acceptance Test - Use Case 1  ==
     103
     104This is a one-site test run on the GPO ExoGENI rack; it includes three experiments. Each experiment requests local compute resources which generate bidirectional traffic over a Layer 2 dataplane network connection. The test is executed in two parts: Part 1 sets up two concurrent experiments, and Part 2 sets up an experiment that validates compute resource limits.
     105
     106
     107=== Test Topology ===
     108
     109This test uses this topology:
     110
     111[[Image(ExoGENISingleSiteAcceptanceTest.jpg)]]
     112
     113=== Prerequisites ===
     114
     115This test has these prerequisites:
     116
     117 * A GPO Ubuntu image has been uploaded to the [http://geni-images.renci.org/images/ RENCI image repository] for use in this test.
     118 * Traffic generation tools may be part of image or installed at experiment runtime.
     119 * Administrative accounts have been created for GPO staff on the GPO rack.
     120 * An experimenter account has been created to be used in the test.
     121 * Baseline Monitoring is in place for the entire GPO site, to ensure that any problems are quickly identified.
     122
     123=== Procedure ===
     124
     125For Part 1, do the following:
     126
     127 1. As Experimenter1, request !ListResources from GPO ExoGENI.
     128 2. Review !ListResources output, and identify available resources.
     129 3. Review images available at [http://geni-images.renci.org/images/ RENCI image repository].
     130 4. Define a request RSpec for two VMs, each with a CentOS image.
     131 5. Create the first slice.
     132 6. Create a sliver in the first slice, using the RSpec defined in step 4.
     133 7. Log in to each of the systems, and send traffic to the other system.
     134 8. As Experimenter2, request !ListResources from GPO ExoGENI.
     135 9. Define a request RSpec for two bare-metal nodes, both using the image uploaded by GPO.
     136 10. Create the second slice.
     137 11. Create a sliver in the second slice, using the RSpec defined in step 9.
     138 12. Log in to each of the systems, and send traffic to the other system.
     139 13. Verify that experimenters 1 and 2 cannot use the control plane to access each other's resources (e.g. via unauthenticated SSH, shared writable filesystem mount)
     140 14. Delete slivers.
     141
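A request RSpec such as the one defined in step 4 of Part 1 can be built programmatically. The sketch below targets the GENI v3 request schema; the `client_id` values, the sliver type name, and the image name are illustrative assumptions (the real image would be chosen from the RENCI image repository listing, and the sliver type from the aggregate's advertisement):

```python
import xml.etree.ElementTree as ET

RSPEC_NS = "http://www.geni.net/resources/rspec/3"

def two_vm_request_rspec(image_name="centos"):
    """Build a GENI v3 request RSpec for two VMs joined by a dataplane link.

    client_id values, the sliver type, and image_name are illustrative.
    """
    ET.register_namespace("", RSPEC_NS)
    rspec = ET.Element("{%s}rspec" % RSPEC_NS, type="request")
    for i in (1, 2):
        node = ET.SubElement(rspec, "{%s}node" % RSPEC_NS,
                             client_id="vm%d" % i)
        sliver = ET.SubElement(node, "{%s}sliver_type" % RSPEC_NS,
                               name="emulab-xen")  # assumed VM sliver type
        ET.SubElement(sliver, "{%s}disk_image" % RSPEC_NS, name=image_name)
        ET.SubElement(node, "{%s}interface" % RSPEC_NS,
                      client_id="vm%d:if0" % i)
    link = ET.SubElement(rspec, "{%s}link" % RSPEC_NS, client_id="lan0")
    for i in (1, 2):
        ET.SubElement(link, "{%s}interface_ref" % RSPEC_NS,
                      client_id="vm%d:if0" % i)
    return ET.tostring(rspec, encoding="unicode")
```

The resulting XML would then be passed to the aggregate via the omni tool's sliver-creation step.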
     142For Part 2, do the following:
     143
     144 1. Request !ListResources from GPO ExoGENI.
     145 2. Write an RSpec that requests 100 VMs.
     146 3. Create a new slice.
     147 4. Create a sliver in the new slice, using the RSpec defined in step 2.
     148 5. Log into several of the VMs, and send traffic to several other systems.
     149 6. Step up traffic rates to verify VMs continue to operate with realistic traffic loads.
     150 7. Review system statistics and VM isolation (does not include network isolation).
     151 8. Review monitoring statistics and check for resource status for CPU, disk, memory utilization, interface counters, uptime, process counts, and active user counts.
     152 9. Verify that each VM has a distinct MAC address on its dataplane interface.
     153 10. Verify that the VMs' MAC addresses are learned on the dataplane switch.
     154 11. Delete sliver.
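The MAC-address verification steps near the end of Part 2 amount to two simple checks, sketched here with hypothetical sample data; real inputs would come from the VMs' interfaces and the dataplane switch's MAC address table:

```python
def macs_are_distinct(vm_macs):
    """Return True when every VM was assigned a unique dataplane MAC.

    vm_macs: dict of VM name -> MAC address string.
    """
    macs = [m.lower() for m in vm_macs.values()]
    return len(macs) == len(set(macs))

def macs_learned(vm_macs, switch_mac_table):
    """Map each VM to whether its MAC appears in the switch's MAC table."""
    table = {m.lower() for m in switch_mac_table}
    return {name: mac.lower() in table for name, mac in vm_macs.items()}
```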
     155
     156== ExoGENI Multi-site Acceptance Test - Use Case 2 == 
     157
     158This test includes two sites and two experiments, using resources in the GPO and RENCI ExoGENI racks. Each of the compute resources will exchange traffic with the others in its slice, over a wide-area Layer 2 dataplane network connection, using NLR and/or Internet2 Static VLANs.
     159
     160=== Test Topology ===
     161
     162This test uses this topology:
     163
     164[[Image(ExoGENIMultiSiteAcceptanceTest.jpg)]]
     165
     166=== Prerequisites ===
     167
     168This test has these prerequisites:
     169
     170 * GPO ExoGENI connectivity statistics will be monitored at the [http://monitor.gpolab.bbn.com/connectivity/exogeni.html GPO ExoGENI Monitoring] site.
     171 * This test will be scheduled at a time when site contacts are available to address any problems.
     172 * Administrative accounts have been created for GPO staff at each rack.
     173 * The VLANs used will be selected from the pool of VLANs pre-allocated to ExoGENI by NLR and/or Internet2.
     174 * Baseline Monitoring is in place at each site, to ensure that any problems are quickly identified.
     175 * ExoGENI manages private address allocation for the !OpenFlow endpoints in this test, except for those endpoints that are not part of ExoGENI.
     176
     177
     178=== Procedure ===
     179
     180Do the following:
     181
     182 1. Request !ListResources from GPO ExoGENI.
     183 2. Request !ListResources from RENCI ExoGENI.
     184 3. Review !ListResources output from both AMs.
     185 4. Define a request RSpec for a VM at GPO ExoGENI.
     186 5. Define a request RSpec for a VM at RENCI ExoGENI.
     187 6. Create the first slice.
     188 7. Create a sliver at each ExoGENI aggregate using the RSpecs defined above.
     189 8. Log in to each of the systems, and send traffic to the other system; leave traffic running.
     190 9. Request !ListResources from GPO ExoGENI and RENCI ExoGENI.
     191 10. Define a request RSpec for one VM and one bare-metal node running at GPO ExoGENI.
     192 11. Define a request RSpec for two VMs at RENCI ExoGENI.
     193 12. Create a second slice.
     194 13. In the second slice, create a sliver at each ExoGENI aggregate using the RSpecs defined above.
     195 14. Log in to each of the systems, and send traffic to the other system.
     196 15. Verify traffic handling, VM isolation, and address assignment.
     197 16. Review baseline monitoring statistics.
     198 17. Run test for at least 4 hours.
     199 18. Review baseline monitoring statistics.
     200 19. Delete slivers.
     201
     202Note: After an initial successful test run, this test will be revisited and re-run as a longevity test for a minimum of 24 hours.
     203
     204== ExoGENI Multi-site !OpenFlow Acceptance Test - Use Case 5 ==
     205
     206This test includes two sites and two experiments, using resources in the GPO and RENCI ExoGENI racks, where the network resources are the core !OpenFlow-controlled VLANs. Each of the compute resources will exchange traffic with the others in its slice, over a wide-area Layer 2 dataplane network connection, using NLR and/or Internet2 VLANs.
     207
     208
     209=== Test Topology ===
     210
     211This test uses this topology:
     212
     213[[Image(ExoGENIMultiSiteOpenFlowAcceptanceTest.jpg)]]
     214
     215
     216=== Prerequisites ===
     217
     218This test has these prerequisites:
     219
     220 * GPO ExoGENI connectivity statistics will be monitored at the [http://monitor.gpolab.bbn.com/connectivity/exogeni.html GPO ExoGENI Monitoring] site.
     221 * This test will be scheduled at a time when site contacts are available to address any problems.
     222 * The VLANs used will be selected from the pool of VLANs pre-allocated to ExoGENI by NLR and/or Internet2.
     223 * The site has delegated the VLAN range needed to run this test.
     224 * Baseline Monitoring is in place at each site, to ensure that any problems are quickly identified.
     225 * A Meso-scale site is available for testing. (If PG access to OF is available, PG will be used in place of a Meso-scale OF site.)
     226 * If a Meso-scale site is used, then GMOC must be able to get monitoring data for the experiment.
     227
     228=== Procedure ===
     229
     230Do the following:
     231
     232 1. Request !ListResources from GPO ExoGENI, RENCI ExoGENI, and FOAM at NLR and/or Internet2.
     233 2. Review !ListResources output from all AMs.
     234 3. Define a request RSpec for a VM at the GPO ExoGENI.
     235 4. Define a request RSpec for a VM at the RENCI ExoGENI.
     236 5. Define request RSpecs for !OpenFlow resources from NLR and/or Internet2.
     237 6. Create the first slice.
     238 7. Create a sliver in the first slice at each AM, using the RSpecs defined above.
     239 8. Log in to each of the systems, and send traffic to the other system; leave traffic running.
     240 9. Define a request RSpec for two VMs at GPO ExoGENI.
     241 10. Define a request RSpec for two VMs at RENCI ExoGENI.
     242 11. Create a second slice.
     243 12. Create a sliver in the second slice at each AM, using the RSpecs defined above.
     244 13. Log in to each of the systems in the slice, and send traffic to each of the other systems; leave traffic running.
     245 14. Request !ListResources from GPO ExoGENI and FOAM at Meso-scale site.
     246 15. Review !ListResources output from all AMs.
     247 16. Define a request RSpec for a VM at the GPO ExoGENI.
     248 17. Define a request RSpec for a compute resource at a Meso-scale site.
     249 18. Define request RSpecs for !OpenFlow resources to allow connection from OF GPO ExoGENI to Meso-scale OF site.
     250 19. Create a third slice
     251 20. Create a sliver that connects the Meso-scale !OpenFlow site to the GPO ExoGENI Site.
     252 21. Log in to each of the systems in the slice, and send traffic to each of the other systems; leave traffic running.
     253 22. Verify that all three experiments continue to run without impacting traffic, and that data is exchanged over the !OpenFlow datapath defined in the RSpec.
     254 23. Review baseline monitoring statistics and checks.
     255 24. As admin, verify the ability to map all controllers associated with the !OpenFlow switch.
     256 25. Run test for at least 4 hours.
     257 26. Review monitoring statistics and checks as above.
     258 27. Delete all slivers at each ExoGENI AM and at each FOAM AM.
     259
     260 
     261== ExoGENI and Meso-scale !OpenFlow Interoperability Acceptance Test - Use Case 4 ==
     262
     263This is a two-site experiment including !OpenFlow resources and compute resources at the GPO ExoGENI rack and a Meso-scale site (TBD).  The experiment will request a compute resource (VM) at each site to exchange bidirectional traffic. Requests will be made for network resources at each FOAM aggregate to allow traffic exchange over !OpenFlow VLANs. This is a low-priority test that will be executed if time permits.
     264
     265
     266=== Test Topology ===
     267
     268[[Image(ExoGENIMesoscaleOpenFlowAcceptanceTest.jpg)]]
     269
     270=== Prerequisites ===
     271 - Baseline Monitoring is in place at each of the sites to ensure that any potential problems are quickly identified.
     272 - GPO ExoGENI connectivity statistics will be monitored at the [http://monitor.gpolab.bbn.com/connectivity/exogeni.html GPO ExoGENI Monitoring] site.
     273 - GMOC is receiving monitoring data for ExoGENI Racks.
     274 - This test will be scheduled to ensure that resources are available and that site contacts are available in case of potential problems.
     275 - VLANs to be used are from the pre-allocated pool of ExoGENI VLANs.
     276
     277=== Procedure ===
     278The following operations are to be executed:
     279 1. Request !ListResources at the GPO ExoGENI AM.
     280 2. Request !ListResources at the GPO FOAM AM.
     281 3. Request !ListResources at the Meso-scale AM.
     282 4. Request !ListResources at the Meso-scale FOAM AM.
     283 5. Review !ListResources output from all aggregates.
     284 6. Define a request RSpec for one Compute Resource at the GPO ExoGENI AM.
     285 7. Define a request RSpec for one Compute Resource at the Meso-scale AM.
     286 8. Define a request RSpec for Network Resource at the GPO FOAM AM.
     287 9. Define a request RSpec for Network Resource at the Meso-scale FOAM AM.
     288 10. Create a slice.
     289 11. Create a sliver at each of the ExoGENI and Meso-scale AMs using the RSpecs defined above.
     290 12. Create a sliver at each FOAM AM using the RSpecs defined above.
     291 13. Log in to each of the Compute Resources and send traffic to the other host.
     292 14. Verify data is exchanged over the !OpenFlow channel.
     293 15. Delete the slivers at each ExoGENI AM and FOAM AM.
     294 16. Delete slice (if supported).
     295
     296
     297== Additional Administration Acceptance Tests ==
     298
     299The following operations are to be executed:
     300 1. Review tools or procedures for changing the IP address of each rack component type.
     301 2. Change IP addresses for each switch, VM, bare-metal node, etc.
     302 3. View source code for any software covered by the GENI Intellectual Property Agreement that runs on the rack. The location of such source code should be determinable from the Rack Development Team's public site documentation.
     303
     304== Additional Monitoring Acceptance Tests ==
     305
     306Upon completion of the acceptance phase, the GPO will verify that all information available in the Baseline Monitoring Acceptance tests is available via the Operational monitoring portal at gmoc-db.grnoc.iu.edu. In addition to the baseline monitoring tests, the following features will be verified:
     307
     308 1. Operational monitoring data for the rack is available at gmoc-db.grnoc.iu.edu.
     309 2. The rack data's "site" tag in the GMOC database indicates the physical location (e.g. host campus) of the rack.
     310 3. Whenever the rack is operational, GMOC's database contains site data which is at most 10 minutes old.
     311 4. Any site variable which can be collected by reading a counter (i.e. which does not require system or network processing beyond a file read) is collected at least once a minute.
     312 5. All hosts which submit data to gmoc-db have system clocks which agree with gmoc-db's clock to within 45 seconds.  (GMOC is responsible for ensuring that gmoc-db's own clock is synchronized to an accurate time source.)
     313 6. The GMOC database contains data about whether each site AM has recently been reachable via the GENI AM API.
     314 7. The GMOC database contains data about the recent uptime and availability of each compute or pool VLAN resource at each rack AM.
     315 8. The GMOC database contains the sliver count and percentage of resources in use at each rack AM.
     316 9. The GMOC database contains the creation time of each sliver on each rack AM.
     317 10. If possible, the GMOC database contains per-sliver interface counters for each rack AM.
     318 11. The GMOC database contains data about whether each rack dataplane switch has recently been online.
     319 12. The GMOC database contains recent traffic counters and VLAN memberships for each rack dataplane switch interface.
     320 13. The GMOC database contains recent MAC address table contents for static VLANs which appear on rack dataplane switches.
     321 14. The GMOC database contains data about whether each experimental VM server has recently been online.
     322 15. The GMOC database contains overall CPU, disk, and memory utilization, and VM count and capacity, for each experimental VM server.
     323 16. The GMOC database contains overall interface counters for experimental VM server dataplane interfaces.
     324 17. The GMOC database contains recent results of at least one end-to-end health check which simulates an experimenter reserving and using at least one resource in the rack.
     325 18. A site administrator can locate current and recent CPU and memory utilization for each rack network device, and can find recent changes or errors in a log.
     326 19. A site administrator can locate current configuration of flowvisor, FOAM, and any other !OpenFlow services, and find logs of recent activity and changes.
     327 20. For each infrastructure and experimental host, a site administrator can locate current and recent uptime, CPU, disk, and memory utilization, interface traffic counters, process counts, and active user counts.
     328 21. A site administrator can locate recent syslogs for all infrastructure and experimental hosts.
     329 22. A site administrator can locate information about the network reachability of all rack infrastructure which should live on the control network, and can get alerts when any rack infrastructure control IP becomes unavailable from the rack server host, or when the rack server host cannot reach the commodity internet.
     330 23. A site administrator can get information about the power utilization of rack PDUs.
     331 24. Given a public IP address and port, a pool VLAN, or a sliver name, a site administrator or GMOC staffer can identify the email address of the experimenter who controlled that resource at a particular time.
     332 25. For trending purposes, per-rack or per-aggregate summaries are collected of the count of distinct users who have been active on a given rack.  Racks may provide raw sliver/user data to GMOC, or may produce their own trending summaries on demand.
     333 26. Meso-scale reachability testing can report on the recent liveness of the rack static VLANs by pinging a per-rack IP in each Meso-scale monitoring subnet.
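The data-freshness and clock-agreement criteria above (data at most 10 minutes old; submitting hosts within 45 seconds of gmoc-db's clock) reduce to simple time comparisons. This illustrative sketch encodes those two thresholds; the timestamps in the usage are hypothetical:

```python
from datetime import datetime, timedelta

MAX_DATA_AGE = timedelta(minutes=10)    # site data in gmoc-db must be this recent
MAX_CLOCK_SKEW = timedelta(seconds=45)  # submitting hosts vs. gmoc-db's clock

def data_is_fresh(last_update, now):
    """True when the gmoc-db site data is at most 10 minutes old."""
    return timedelta(0) <= now - last_update <= MAX_DATA_AGE

def clocks_agree(host_clock, gmocdb_clock):
    """True when a submitting host's clock is within 45 s of gmoc-db's."""
    return abs(host_clock - gmocdb_clock) <= MAX_CLOCK_SKEW
```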
     334
     335== Emergency Stop Acceptance test ==
     336
     337=== Prerequisite ===
     338 
     339 - GMOC delivers updated Emergency Stop procedure document. [add_link_when_available]
     340 - GPO writes a generic Emergency Stop procedure, based on the updated GMOC procedure. [add_link_when_available]
     341 - Emergency stop coordination has taken place to make sure that all sites are aware of their role and that all expected steps are documented.
     342 
     343=== Procedure ===
     344
     345GPO executes the generic Emergency Stop procedure.
     346
     347
     348= Requirements Validation =
     349This section maps test cases to individual acceptance test criteria. It also documents the requirements that are not validated.
     350
     351== Validated Requirements Mappings ==
     352
     353=== Administration Acceptance Test Requirements Mapping ===
     354
     355 - (C.1.d) Ability to support Microsoft Windows on bare-metal node.
     356 - (C.3.a) Ability of Administrator account type to add/delete/modify 2 or more accounts.
     357 - (C.3.a) Ability of Operator account type to add/delete/modify 2 or more accounts.
     358 - (C.3.a) Ability of User account type to be added/deleted/modified for 2 or more accounts.
     359 - (C.3.a) Ability of Administrator accounts to have super-user privileges on all rack node types (network, compute, misc.).
     360 - (C.3.a) Ability of Operator accounts to have privileges on all rack node types (network, compute, misc.) for access to common operator functions such as: debug tools, emergency stop, <TBD>.
     361 - (C.3.a) User account types do not have access to Administrative functions.
     362 - (C.3.a) User accounts do not have access to Operator functions such as: debug tools, emergency stop, <TBD>.
     363 - (C.3.a) Ability of Administrator, Operator, and User account types to use ssh keys rather than passwords.
     364 - (C.3.b) Account access via username/password is not allowed.
     365 - (C.3.b) Ability to support account access via ssh keys.
     366 - (C.3.c) Ability to remote power cycle rack components.
     367 - (C.3.e) Verify procedures and/or tools provided for changing IP addresses for all rack components types.
     368 - (C.3.f) Verify ability to remotely determine MAC addresses for all rack resources (including VMs)
     369 - (F.5) Site support staff (and GENI operations) must be able to identify all software versions and view all configurations running on all GENI rack components once they are deployed. The rack users' experimental software running on the rack is exempt from this requirement.
     370 - (F.6) Site support staff (and GENI operations) must be able to view source code for any software covered by the GENI Intellectual Property Agreement that runs on the rack. Rack teams should document the location of such source code in their public site documentation (e.g. on the GENI wiki).
     372
     373=== Monitoring Acceptance Test Requirements Mapping ===
     374 - (D.5.a) Ability to provide aggregate health via AM API.
     375 - (D.5.a) Ability to get resource types, counts, states and utilization.
     376 - (D.5.a) Ability to get the following for active slivers:  uptime, resource utilization, performance data.
     377 - (D.5.b) Ability to get network device status, traffic counters, traffic type.
     378 - (D.5.b) Ability to get network device VLANs by interface, MAC address tables for VLANs.
     379 - (D.8) Each rack has always-available interface on Meso-scale VLAN.
     380
     381
     382=== ExoGENI Single Site Acceptance Test - Use Case 1 ===
     383
     384 * (C.1.a) Ability to operate the advertised minimum number of hosts in a rack simultaneously in multiple experiments.
     385 * (C.1.b) Ability to support at least one bare-metal node using a supported Linux OS.
     386 * (C.1.c) Ability to support multiple VMs simultaneously in a single rack.
     387 * (C.1.c) Ability to support multiple VMs and bare-metal nodes simultaneously in a single rack.
     388 * (C.1.c) Ability to support multiple bare-metal nodes simultaneously in a single rack.
     389 * (C.3.b) Ability to support account access via SSH keys.
     390 * (D.5.c) Ability to monitor VM status for CPU, disk, memory utilization.
     391 * (D.5.c) Ability to monitor VM interface counters.
     392 * (D.6.b) Ability to monitor VMs: CPU, disk, and memory utilization, interface traffic counters, uptime, process counts, and active user counts.
     393 * (D.6.b) Ability to monitor bare-metal nodes: CPU, disk, and memory utilization, interface traffic counters, uptime, process counts, and active user counts.
     394 * (D.7) Ability of logging and reporting to capture active user counts per rack.
     395 * (G.1) Ability to get VMs with root/admin capabilities.
     396 * (G.1) Ability to get bare-metal nodes with root/admin capabilities.
     397 * (B.5) Support at least two different operating systems for compute resources 
     398 * (B.5.a) Provide images for experimenters 
     399 * (B.5.b) Advertise image availability in the advertisement RSpec   
     400
     401=== ExoGENI Multi-site Acceptance Test - Use Case 2 ===
     402
     403 * (C.1.a) Ability to isolate simultaneously used resources in an experiment.
     404 * (C.2.b) Ability to connect a single external VLAN to multiple VMs in the rack
     405 * (C.2.b) Ability to connect multiple external VLANs to multiple VMs in the rack
     406 * (C.2.c) Ability to simultaneously connect a single external VLAN to multiple VMs and a bare-metal node.
     407 * (C.2.d) Ability to have unique addressing when multiple experiments are running.
     408 * (G.2) Ability to provision VM compute resources with multiple dataplane interfaces
     409 * (G.3) Ability to get layer two network access to send/receive traffic.
     410 * (G.4) Ability to run single experiments using single VMs accessing a single interface to access remote racks.
     411 * (G.4) Ability to run single experiments using multiple VMs accessing a single interface (one, several, and all VM at once) to access multiple remote racks.
     412 * (G.4) Ability to run multiple experiments using multiple VMs accessing a single interface (one, several, and all VM at once) to access multiple remote racks.
     413 * (G.4) Ability to handle traffic as expected across VLANs for each scenario.
     414 * (D.6.a) Ability to monitor network devices CPU and memory utilization.
     415 * (D.6.c) Ability to view status of power utilization.
     416 * (D.6.c) Ability to view control network reachability for all infrastructure devices (KVMs, PDUs, etc)
     417 * (D.6.c) Ability to view control network reachability for commodity internet
     418 * (D.6.c) Ability to view control network reachability for GENI data plane
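One way to spot-check the unique-addressing requirement (C.2.d) is to collect the dataplane IP assignments from every running sliver and flag collisions across slices. The sketch below runs over an invented in-memory listing; in a real test the addresses would come from the manifest RSpecs of each experiment, and the slice names and IPs here are placeholders.

```python
from collections import defaultdict

def find_address_collisions(assignments):
    """assignments: iterable of (slice_name, interface, ip) tuples.
    Return {ip: sorted slice names} for IPs used by more than one slice."""
    users = defaultdict(set)
    for slice_name, _iface, ip in assignments:
        users[ip].add(slice_name)
    return {ip: sorted(s) for ip, s in users.items() if len(s) > 1}

# Invented sample data: two experiments sharing the rack dataplane.
sample = [
    ("exp1", "eth1", "10.10.1.1"),
    ("exp1", "eth2", "10.10.1.2"),
    ("exp2", "eth1", "10.10.1.1"),  # collides with exp1
    ("exp2", "eth2", "10.10.2.2"),
]
collisions = find_address_collisions(sample)
assert collisions == {"10.10.1.1": ["exp1", "exp2"]}
```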

=== ExoGENI Multi-site !OpenFlow Acceptance Test - Use Case 5 ===

 * (D.6.a) Ability to monitor !OpenFlow configurations.
 * (D.6.a) Ability to monitor !OpenFlow status.
 * (D.5.d) Ability to monitor !OpenFlow health checks that are run at least hourly.
 * (C.2.f) Ability to run multiple !OpenFlow controllers to control multiple network resources in one rack.
 * (C.2.f) Ability to manage OF rack resources in a multi-site scenario.
 * (C.2.f) Ability of administrative functions to show which controllers are associated with the OF switch.
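The last C.2.f item (showing which controllers are associated with the OF switch) can be modeled as a simple inversion of a slice-to-controller configuration, in the style of a FlowVisor setup. The slice names, controller URLs, and datapath IDs below are invented; a real check would query the rack's !OpenFlow administrative interface rather than a hard-coded dict.

```python
def controllers_by_switch(slices):
    """slices: {slice_name: {"controller": url, "switches": [dpids]}}.
    Return {dpid: sorted list of controller URLs attached to that switch}."""
    result = {}
    for conf in slices.values():
        for dpid in conf["switches"]:
            result.setdefault(dpid, set()).add(conf["controller"])
    return {dpid: sorted(ctrls) for dpid, ctrls in result.items()}

# Invented example: two experiment controllers sharing one rack switch.
slices = {
    "exp1": {"controller": "tcp:ctrl1.example:6633", "switches": ["00:00:0a"]},
    "exp2": {"controller": "tcp:ctrl2.example:6633", "switches": ["00:00:0a"]},
}
view = controllers_by_switch(slices)
assert view["00:00:0a"] == ["tcp:ctrl1.example:6633", "tcp:ctrl2.example:6633"]
```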

=== Emergency Stop Acceptance Test Requirements Mapping ===

 - (C.3.d) Verify Emergency Stop participants and stakeholders are identified.
 - (C.3.d) Verify Emergency Stop triggers have been identified and implemented.
 - (C.3.d) Verify existence of GENI Operational Contact Lists (these may differ by rack type).
 - (C.3.d) Verify the security implications of an Emergency Stop.
 - (C.3.d) Verify that a correlation exists between requests and aggregates.
 - (C.3.d) Verify that response time expectations are defined for severe and urgent cases.
 - (C.3.d) Verify that an escalation path is defined for initial escalation and quarantine.
 - (C.3.d) Verify that response time expectations are defined for the issue reporter.
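As a worked example of the response-time items above, the sketch below checks a reported handling time against a per-severity deadline. The deadline values are placeholders for illustration, not figures taken from any GENI Emergency Stop policy document.

```python
# Placeholder deadlines (hours) per severity; real values would come from
# the Emergency Stop procedure that this plan verifies exists.
RESPONSE_DEADLINES_H = {"severe": 2, "urgent": 8, "routine": 72}

def response_within_sla(severity, elapsed_hours):
    """True if the response met the (placeholder) deadline for its severity."""
    return elapsed_hours <= RESPONSE_DEADLINES_H[severity]

assert response_within_sla("severe", 1.5)
assert not response_within_sla("urgent", 9)
```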

== Requirements not verified ==

The following requirement is not verified in this plan. No plan exists for validating it at this time; when a plan is created, it will be executed in the GPO Lab.

 - (C.2.a) "Support at least 100 simultaneous active (e.g. actually passing data) layer 2 Ethernet VLAN connections to the rack. For this purpose, VLAN paths must terminate on separate rack VMs, not on the rack switch."

= Glossary =
Following is a glossary of terminology used in this plan; for additional terminology definitions, see the [http://groups.geni.net/geni/wiki/GeniGlossary GENI Glossary] page.
 * Account type:
   * Experimenter: a person accessing the rack using a GENI credential and the GENI AM API.
   * Administrator: a person who has fully-privileged access to, and responsibility for, the rack infrastructure (servers, network devices, etc.) at a given location.
   * Operator: a person who has unprivileged or partially-privileged access to the rack infrastructure at a given location, and has responsibility for one or a few particular functions.

 * Baseline Monitoring: a set of monitoring functions that shows aggregate health for VMs and switches, their interface status, and traffic counts for interfaces and VLANs. Includes resource availability and utilization.

 * Experimental compute resources:
   * VM: an experimental compute resource which is a virtual machine located on a physical machine in the rack.
   * bare-metal node: an experimental compute resource which is a physical machine usable by experimenters without virtualization.
   * compute resource: either a VM or a bare-metal node.

 * Experimental network resources:
   * Static VLAN: The VLAN is provisioned entirely out of band. Admins set up the VLAN manually; experimenters must know the VLAN ID and request connection to it from the rack AM(s).
   * Pool VLAN: The VLAN is provisioned dynamically from a pool of manually pre-allocated VLANs. Admins set up the pool and configure the VLAN IDs into the rack AM(s); experimenters do not specify a VLAN ID in their requests.
   * Dynamic VLAN: The VLAN is provisioned dynamically everywhere that it exists. Admins do not do any out-of-band setup work; experimenters do not specify a VLAN ID in their requests.
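The three provisioning styles above differ mainly in who chooses the VLAN ID. A Pool VLAN, for instance, behaves like a small allocator over admin-configured IDs: experimenters receive a free ID without naming one. The sketch below models that behavior; the VLAN IDs are invented, and real rack AMs manage pools internally rather than through a class like this.

```python
class VlanPool:
    """Toy model of Pool VLAN provisioning: admins pre-load the IDs;
    experimenters get an arbitrary free one without specifying it."""
    def __init__(self, vlan_ids):
        self.free = list(vlan_ids)   # admin-configured pool
        self.in_use = {}             # slice name -> allocated VLAN ID

    def allocate(self, slice_name):
        if not self.free:
            raise RuntimeError("VLAN pool exhausted")
        vlan = self.free.pop(0)
        self.in_use[slice_name] = vlan
        return vlan

    def release(self, slice_name):
        self.free.append(self.in_use.pop(slice_name))

pool = VlanPool([1001, 1002])        # invented pool of two IDs
assert pool.allocate("exp1") == 1001 # experimenter never named an ID
assert pool.allocate("exp2") == 1002
pool.release("exp1")
assert pool.allocate("exp3") == 1001 # a freed ID becomes reusable
```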