
ExoGENI Acceptance Test Plan

This page captures the GENI Racks Acceptance Test Plan to be executed for the ExoGENI project. Tests in this plan are based on the ExoGENI Use Cases, and each test provides a mapping to the GENI Rack Requirements that it validates. This plan defines tests that cover the following types of requirements: Integration, Monitoring, and Experimenter requirements. Software requirements are verified by the GENI AM API acceptance test suite implemented by the GPO Software team, and are therefore not covered in this plan. The GPO Infrastructure team is responsible for performing the tests described in this page. Some tests on this page are placeholders and will be more fully defined soon. The tests found here are all high-priority functional tests; the ExoGENI racks support more functions than the priority ones in these system tests.

Assumptions

The following environment prerequisites are assumed for all tests described in this plan:

  • GPO ProtoGENI credentials from https://pgeni.gpolab.bbn.com are used for all tests.
  • GPO ProtoGENI is the Slice Authority for all tests.
  • Resources for each test will be requested from the local broker whenever possible.
  • The ORCA Actor Registry is functional, and allows ExoGENI racks to communicate.
  • ORCA RSpec/NDL conversion service is available to convert GENI requests.

The following runtime assumptions are made for all test cases:

  • Compute resources are VMs unless otherwise stated.
  • All Aggregate Manager (AM) requests are made via the omni command line tool, which is part of the GENI Control Framework (GCF) package (a sample invocation follows this list).
  • In all scenarios, one experiment is always equal to one slice.
  • FOAM is used as an aggregate manager in each of the OpenFlow test cases.
  • NLR is used for Layer 2 Static VLANs, although, if available, it is preferable to run this test over Internet2.
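
A minimal sketch of an AM request with omni, assuming a working omni_config for the pgeni.gpolab.bbn.com slice authority; the aggregate URL shown is illustrative, not authoritative:

    # Query the GPO ExoGENI aggregate for its advertisement RSpec:
    omni.py -a https://bbn-hn.exogeni.net:11443/orca/xmlrpc listresources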

Test Traffic Profile:

  • Experiment traffic includes UDP and TCP data streams at low rates to ensure end-to-end delivery (a sample exchange follows this list).
  • The traffic exchanged is used to verify that the appropriate channel is used for the test described.
  • Performance is not a goal of these acceptance tests.
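
A sample low-rate exchange matching this profile, using iperf (assumed to be installed on the compute resources; the address below is a hypothetical dataplane IP):

    # On the receiving node, start UDP and TCP servers:
    iperf -s -u -p 5001 &
    iperf -s -p 5002 &
    # On the sending node, send a 1 Mbit/s UDP stream for 60 seconds:
    iperf -c 10.42.1.2 -u -b 1M -t 60 -p 5001
    # ... then a TCP stream for 60 seconds:
    iperf -c 10.42.1.2 -t 60 -p 5002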

Acceptance Tests Descriptions

This section describes each acceptance test by defining its goals, topology, and test procedure. Test cases are listed by priority in the sections below; the cases that verify the largest number of requirement criteria are typically listed at a higher priority. The baseline tests are executed first to verify that monitoring and administrative functions are available; this allows the execution of the use-case-based test topologies. Additional monitoring and administrative tests are described in later sections and will be run before the completion of the acceptance test effort.

Administration Baseline Acceptance Test

Administrative Acceptance tests will verify support of administrative management tasks and focus on verifying priority functions for each of the rack components. The set of administrative features described in this section will be verified initially. Additional administrative tests, described in a later section, will be executed before acceptance test completion.

Prerequisites

  • Administrative accounts are available for GPO staff on the GPO ExoGENI rack.
  • Procedures or tools are available for changing IP addresses for all addressable rack components.

Procedure

Administrative tasks:

  1. As Administrator, execute root/admin commands and create accounts for supported user types (admin, oper, user).
  2. As Operator (just created), execute root/admin commands and create two user accounts.
  3. As User (just created), make sure that the following are not allowed: admin/operator commands and administrative tools.
  4. As Administrator, identify all software versions and view all configurations running on all GENI rack components.
  5. Verify that password-based ssh access to rack components is not allowed, nor access via any unencrypted login protocol (steps 5, 6, and 10 are sketched after this list).
  6. Verify ssh public-key access to all rack components that support ssh.
  7. As Administrator, delete the user and operator accounts created in the steps above.
  8. Verify site administrator has local and remote console access to each rack component.
  9. Verify remote power cycling for each rack component.
  10. Verify ability to determine MAC address for all physical host interfaces, all network device interfaces, all active experimental VMs, and all recently-terminated experimental VMs.
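
A minimal sketch for steps 5, 6, and 10, using only standard OpenSSH and iproute2 commands; the component hostname and account name are hypothetical:

    # Step 5: password authentication should be refused:
    ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no admin@component.rack.example.net
    # Step 6: public-key authentication should succeed:
    ssh -o PreferredAuthentications=publickey admin@component.rack.example.net hostname
    # Step 10: list interfaces and their MAC addresses on a Linux host:
    ip -o link show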

Validate bare-metal support using available procedures:

  1. Use available administrative functions to determine which nodes can be used as bare-metal nodes.
  2. Verify ability to reserve and boot a bare-metal node running Microsoft Windows.
  3. Release bare-metal resource.
  4. View a list of OS images which can be loaded on bare-metal nodes.
  5. Modify resource allocation to add one additional bare-metal node (two total) for use in acceptance tests.

Validate Baseline Policy:

  1. Verify ability for site administrator to set a policy for a Network Resource (not OpenFlow) that allows anyone to access the resource.
  2. Verify ability for site administrator to set a policy for a Network Resource (not OpenFlow) that bans a user from accessing resources.

Validate Logging features:

  1. Review log and reports generated during user creation tests to verify user counts reflect expectations.
  2. Verify log rolling is available.
  3. Verify remote logging is available (a remote-logging sketch follows this list).
  4. Verify slice and sliver (AM transactions) can be part of remote logging.
  5. Verify logs can be captured remotely before rollover deletes them locally.
  6. Verify logging shows all account types (admin, oper, and user) logging into the rack.
  7. Verify logging captures sudo access.
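
A minimal sketch of enabling remote logging (step 3) with rsyslog, assuming the rack hosts run rsyslog and a hypothetical collector at loghost.example.net:

    # Forward all syslog messages to the remote collector over TCP:
    echo '*.* @@loghost.example.net:514' > /etc/rsyslog.d/50-remote.conf
    service rsyslog restart
    # Step 7: confirm sudo use appears in the auth log (path varies by distribution):
    grep sudo /var/log/auth.log | tail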

Validate Security:

  1. Monitor for software updates and patches that occur as part of the testing by reviewing logging and configuration management.
  2. Record turnaround time for OpenFlow, Compute Resources and Network Resources updates.
  3. Monitor vulnerability alerts from Rack Team.

Monitoring Baseline Acceptance Test

The Monitoring Baseline acceptance test covers the minimum monitoring required to observe rack components. Baseline monitoring is exercised within each of the tests that are based on use cases. Additional monitoring tests are defined in a later section to complete the validation in this section.

Prerequisite

  • Access to Nagios statistics in the rack is available to GPO staff.

Procedure

Verify the following monitoring features are available by accessing Nagios or whatever source is gathering statistics. Note that this is an initial approach; as evaluation progresses, it will transition to reviewing this data via monitoring systems at the GPO and/or at the GMOC.

  1. Aggregate availability via the AM API.
  2. State of available resources (in use, available, down/unknown).
  3. Overall sliver count.
  4. Resource utilization level on the aggregate.
  5. Status and utilization of each sliver active on the aggregate (minimum: sliver uptime, sliver resource utilization, performance data as available).
  6. Network device health, by reviewing device liveness.
  7. Interface traffic counters, including types of traffic (e.g. broadcast/multicast); items 6-8 are sketched after this list.
  8. List of VLANs defined on interfaces, MAC address tables on data plane VLANs
  9. At each available ExoGENI site, access the static (always-available) interface to verify Meso-scale connection.
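
A minimal sketch for items 6-8, assuming SNMP is enabled on the dataplane switch, with a hypothetical switch hostname and a read-only community string:

    # Item 6: device liveness:
    ping -c 3 rack-switch.example.net
    # Item 7: interface traffic counters:
    snmpwalk -v2c -c public rack-switch.example.net IF-MIB::ifHCInOctets
    # Item 8: VLAN definitions and MAC address tables (Q-BRIDGE-MIB):
    snmpwalk -v2c -c public rack-switch.example.net Q-BRIDGE-MIB::dot1qVlanStaticName
    snmpwalk -v2c -c public rack-switch.example.net Q-BRIDGE-MIB::dot1qTpFdbPort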

ExoGENI Single Site Acceptance Test - Use Case 1

This is a one-site test run on the GPO ExoGENI rack, and it includes three experiments. Each experiment requests local compute resources which generate bidirectional traffic over a Layer 2 dataplane network connection. The test is executed in two parts: Part 1 sets up two concurrent experiments, and Part 2 sets up an experiment that validates compute resource limits.

Test Topology

This test uses this topology:

Prerequisites

This test has these prerequisites:

  • A GPO Ubuntu image has been uploaded to the RENCI image repository for use in this test.
  • Traffic generation tools may be part of image or installed at experiment runtime.
  • Administrative accounts have been created for GPO staff on the GPO rack.
  • An experimenter account has been created for use in the test.
  • Baseline Monitoring is in place for the entire GPO site, to ensure that any problems are quickly identified.

Procedure

For Part 1, do the following:

  1. As Experimenter1, request ListResources from GPO ExoGENI.
  2. Review ListResources output, and identify available resources.
  3. Review images available at RENCI image repository.
  4. Define a request RSpec for two VMs, each with a CentOS image.
  5. Create the first slice.
  6. Create a sliver in the first slice, using the RSpec defined in step 4 (an omni sketch for steps 5-7 follows this list).
  7. Log in to each of the systems, and send traffic to the other system.
  8. As Experimenter2, request ListResources from GPO ExoGENI.
  9. Define a request RSpec for two bare-metal nodes, both using the image uploaded by GPO.
  10. Create the second slice.
  11. Create a sliver in the second slice, using the RSpec defined in step 9.
  12. Log in to each of the systems, and send traffic to the other system.
  13. Verify that experimenters 1 and 2 cannot use the control plane to access each other's resources (e.g. via unauthenticated SSH or a shared writable filesystem mount).
  14. Delete slivers.
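
A minimal omni sketch for steps 5-7 of Part 1, assuming a hypothetical RSpec file name and aggregate URL, with omni_config pointing at the pgeni.gpolab.bbn.com slice authority:

    # Step 5: create the first slice:
    omni.py createslice experiment1
    # Step 6: create the sliver from the request RSpec:
    omni.py -a https://bbn-hn.exogeni.net:11443/orca/xmlrpc createsliver experiment1 two-vm-request.rspec
    # Step 7: check sliver status, then log in to the nodes named in the manifest:
    omni.py -a https://bbn-hn.exogeni.net:11443/orca/xmlrpc sliverstatus experiment1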

For Part 2, do the following:

  1. Request ListResources from GPO ExoGENI.
  2. Write an RSpec that requests 100 VMs (a generation sketch follows this list).
  3. Create a new slice.
  4. Create a sliver in the new slice, using the RSpec defined in step 2.
  5. Log into several of the VMs, and send traffic to several other systems.
  6. Step up traffic rates to verify VMs continue to operate with realistic traffic loads.
  7. Review system statistics and VM isolation (network isolation is not included).
  8. Review monitoring statistics and check for resource status for CPU, disk, memory utilization, interface counters, uptime, process counts, and active user counts.
  9. Verify that each VM has a distinct MAC address on its dataplane interface.
  10. Verify that the VMs' MAC addresses are learned on the dataplane switch.
  11. Delete the sliver.
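
A sketch of generating the 100-VM request RSpec of step 2 with a shell loop. The GENI v3 request schema is used; the sliver type name is an assumption about what the ExoGENI advertisement offers:

    # Generate a request RSpec asking for 100 VMs (step 2):
    {
      echo '<?xml version="1.0" encoding="UTF-8"?>'
      echo '<rspec type="request" xmlns="http://www.geni.net/resources/rspec/3">'
      for i in $(seq 1 100); do
        echo "  <node client_id=\"vm-$i\"><sliver_type name=\"XOSmall\"/></node>"
      done
      echo '</rspec>'
    } > 100-vm-request.rspec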

ExoGENI Multi-site Acceptance Test - Use Case 2

This test includes two sites and two experiments, using resources in the GPO and RENCI ExoGENI racks. Each of the compute resources will exchange traffic with the others in its slice, over a wide-area Layer 2 dataplane network connection, using NLR and/or Internet2 Static VLANs.

Test Topology

This test uses this topology:

Prerequisites

This test has these prerequisites:

  • GPO ExoGENI connectivity statistics will be monitored at the GPO ExoGENI Monitoring site.
  • This test will be scheduled at a time when site contacts are available to address any problems.
  • Administrative accounts have been created for GPO staff at each rack.
  • The VLANs used will be selected from the pool of VLANs pre-allocated to ExoGENI by NLR and/or Internet2.
  • Baseline Monitoring is in place at each site, to ensure that any problems are quickly identified.
  • ExoGENI manages private address allocation for the OpenFlow endpoints in this test, except for endpoints that are not ExoGENI resources.

Procedure

Do the following:

  1. Request ListResources from GPO ExoGENI.
  2. Request ListResources from RENCI ExoGENI.
  3. Review ListResources output from both AMs.
  4. Define a request RSpec for a VM at GPO ExoGENI.
  5. Define a request RSpec for a VM at RENCI ExoGENI.
  6. Create the first slice.
  7. Create a sliver at each ExoGENI aggregate using the RSpecs defined above (an omni sketch follows this list).
  8. Log in to each of the systems, and send traffic to the other system; leave traffic running.
  9. Request ListResources from GPO ExoGENI and RENCI ExoGENI.
  10. Define a request RSpec for one VM and one bare-metal node running at GPO ExoGENI.
  11. Define a request RSpec for two VMs at RENCI ExoGENI.
  12. Create a second slice.
  13. In the second slice, create a sliver at each ExoGENI aggregate using the RSpecs defined above.
  14. Log in to each of the systems, and send traffic to the other system.
  15. Verify traffic handling, VM isolation, and address assignment.
  16. Review baseline monitoring statistics.
  17. Run test for at least 4 hours.
  18. Review baseline monitoring statistics.
  19. Delete slivers.
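
A minimal omni sketch for steps 6, 7, and 19, assuming hypothetical aggregate URLs for the GPO and RENCI racks and hypothetical RSpec file names:

    # Steps 6-7: create the slice, then a sliver at each aggregate:
    omni.py createslice multisite1
    omni.py -a https://bbn-hn.exogeni.net:11443/orca/xmlrpc createsliver multisite1 gpo-vm.rspec
    omni.py -a https://rci-hn.exogeni.net:11443/orca/xmlrpc createsliver multisite1 renci-vm.rspec
    # Step 19: delete the slivers at both aggregates:
    omni.py -a https://bbn-hn.exogeni.net:11443/orca/xmlrpc deletesliver multisite1
    omni.py -a https://rci-hn.exogeni.net:11443/orca/xmlrpc deletesliver multisite1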

Note: After initial successful test run, this test will be revisited and will be re-run as a longevity test for a minimum of 24 hours.

ExoGENI Multi-site OpenFlow Acceptance Test - Use Case 5

This test includes two sites and two experiments, using resources in the GPO and RENCI ExoGENI racks, where the network resources are the core OpenFlow-controlled VLANs. Each of the compute resources will exchange traffic with the others in its slice, over a wide-area Layer 2 dataplane network connection, using NLR and/or Internet2 VLANs.

Test Topology

This test uses this topology:

Prerequisites

This test has these prerequisites:

  • GPO ExoGENI connectivity statistics will be monitored at the GPO ExoGENI Monitoring site.
  • This test will be scheduled at a time when site contacts are available to address any problems.
  • The VLANs used will be selected from the pool of VLANs pre-allocated to ExoGENI by NLR and/or Internet2.
  • Site has delegated VLAN Range needed to run this test.
  • Baseline Monitoring is in place at each site, to ensure that any problems are quickly identified.
  • Meso-scale site is available for testing (If PG access to OF is available, PG will be used in place of a Meso-scale OF site.)
  • If Meso-scale site is used, then GMOC must be able to get monitoring data for experiment.

Procedure

Do the following:

  1. Request ListResources from GPO ExoGENI, RENCI ExoGENI, and FOAM at NLR and/or Internet2.
  2. Review ListResources output from all AMs.
  3. Define a request RSpec for a VM at the GPO ExoGENI.
  4. Define a request RSpec for a VM at the RENCI ExoGENI.
  5. Define request RSpecs for OpenFlow resources from NLR and/or Internet2.
  6. Create the first slice.
  7. Create a sliver in the first slice at each AM, using the RSpecs defined above (an omni sketch follows this list).
  8. Log in to each of the systems, and send traffic to the other system; leave traffic running.
  9. Define a request RSpec for two VMs at GPO ExoGENI.
  10. Define a request RSpec for two VMs at RENCI ExoGENI.
  11. Create a second slice.
  12. Create a sliver in the second slice at each AM, using the RSpecs defined above.
  13. Log in to each of the systems in the slice, and send traffic to each of the other systems; leave traffic running.
  14. Request ListResources from GPO ExoGENI and FOAM at Meso-scale site.
  15. Review ListResources output from all AMs.
  16. Define a request RSpec for a VM at the GPO ExoGENI.
  17. Define a request RSpec for a compute resource at a Meso-scale site.
  18. Define request RSpecs for OpenFlow resources to allow a connection from the GPO ExoGENI OF resources to the Meso-scale OF site.
  19. Create a third slice.
  20. Create a sliver that connects the Meso-scale OpenFlow site to the GPO ExoGENI Site.
  21. Log in to each of the systems in the slice, and send traffic to each of the other systems; leave traffic running.
  22. Verify that all three experiments continue to run without impacting each other's traffic, and that data is exchanged over the OpenFlow datapath defined in the RSpec.
  23. Review baseline monitoring statistics and checks.
  24. As admin, verify ability to map all controllers associated with the OpenFlow switch.
  25. Run test for at least 4 hours.
  26. Review monitoring statistics and checks as above.
  27. Delete all slivers at each ExoGENI AM and at the FOAM AMs.
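
A minimal omni sketch for steps 6-7, adding a FOAM aggregate alongside the two ExoGENI AMs; every URL and RSpec file name here is an illustrative assumption:

    omni.py createslice ofslice1
    omni.py -a https://bbn-hn.exogeni.net:11443/orca/xmlrpc createsliver ofslice1 gpo-vm.rspec
    omni.py -a https://rci-hn.exogeni.net:11443/orca/xmlrpc createsliver ofslice1 renci-vm.rspec
    # OpenFlow network resources are reserved through the FOAM AM:
    omni.py -a https://foam.nlr.example.net:3626/foam/gapi/1 createsliver ofslice1 of-path.rspec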

ExoGENI and Meso-scale OpenFlow Interoperability Acceptance Test - Use Case 4

This is a two-site experiment including OpenFlow resources and compute resources at the GPO ExoGENI rack and a Meso-scale site (TBD). The experiment will request a compute resource (VM) at each site to exchange bidirectional traffic. Requests will be made for network resources at each FOAM aggregate to allow traffic exchange over OpenFlow VLANs. This is a low-priority test that will be executed if time permits.

Test Topology

(Topology diagram: ExoGENIMesoscaleOpenFlowAcceptanceTest.jpg; attachment not available.)

Prerequisites

  • Baseline Monitoring is in place at each of the sites to ensure that any potential problems are quickly identified.
  • GPO ExoGENI connectivity statistics will be monitored at the GPO ExoGENI Monitoring site.
  • GMOC is receiving monitoring data for ExoGENI Racks.
  • This test will be scheduled to ensure that resources are available and that site contacts are available in case of potential problems.
  • VLANs to be used are from the pre-allocated pool of ExoGENI VLANs.

Procedure

The following operations are to be executed:

  1. Request ListResources at the GPO ExoGENI AM.
  2. Request ListResources at the GPO FOAM AM.
  3. Request ListResources at the Meso-scale AM.
  4. Request ListResources at the Meso-scale FOAM AM.
  5. Review ListResources output from all aggregates.
  6. Define a request RSpec for one Compute Resource at the GPO ExoGENI AM.
  7. Define a request RSpec for one Compute Resource at the Meso-scale AM.
  8. Define a request RSpec for a Network Resource at the GPO FOAM AM.
  9. Define a request RSpec for a Network Resource at the Meso-scale FOAM AM.
  10. Create a slice.
  11. Create a sliver at the ExoGENI and Meso-scale AMs using the RSpecs defined above.
  12. Create a sliver at each FOAM AM using the RSpecs defined above.
  13. Log in to each of the Compute Resources and send traffic to the other host.
  14. Verify data is exchanged over the OpenFlow channel (a verification sketch follows this list).
  15. Delete slivers at each ExoGENI AM and at the FOAM AMs.
  16. Delete slice (if supported).
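
A minimal sketch for steps 13-14, assuming hypothetical dataplane addresses and interface names on the reserved compute resources:

    # From the ExoGENI compute resource, send traffic to the Meso-scale host:
    ping -c 10 10.42.17.2
    # On the Meso-scale host, confirm the traffic arrives on the OpenFlow VLAN interface:
    tcpdump -n -i eth1 icmp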

Additional Administration Acceptance Tests

The following operations are to be executed:

  1. Review tools or procedures for changing the IP address of each rack component type.
  2. Change IP addresses for each switch, VM, bare-metal node, etc.
  3. View source code for any software covered by the GENI Intellectual Property Agreement that runs on the rack. It should be possible to determine the location of such source code from the Rack Development Team's public site documentation.

Additional Monitoring Acceptance Tests

Upon completion of the acceptance phase, the GPO will verify that all information available in the Baseline Monitoring Acceptance tests is available via the Operational monitoring portal at gmoc-db.grnoc.iu.edu. In addition to the baseline monitoring tests, the following features will be verified:

  1. Operational monitoring data for the rack is available at gmoc-db.grnoc.iu.edu.
  2. The rack data's "site" tag in the GMOC database indicates the physical location (e.g. host campus) of the rack.
  3. Whenever the rack is operational, GMOC's database contains site data which is at most 10 minutes old.
  4. Any site variable which can be collected by reading a counter (i.e. which does not require system or network processing beyond a file read) is collected at least once a minute.
  5. All hosts which submit data to gmoc-db have system clocks which agree with gmoc-db's clock to within 45 seconds; GMOC is responsible for ensuring that gmoc-db's own clock is synchronized to an accurate time source (a clock-check sketch follows this list).
  6. The GMOC database contains data about whether each site AM has recently been reachable via the GENI AM API.
  7. The GMOC database contains data about the recent uptime and availability of each compute or pool VLAN resource at each rack AM.
  8. The GMOC database contains the sliver count and percentage of resources in use at each rack AM.
  9. The GMOC database contains the creation time of each sliver on each rack AM.
  10. If possible, the GMOC database contains per-sliver interface counters for each rack AM.
  11. The GMOC database contains data about whether each rack dataplane switch has recently been online.
  12. The GMOC database contains recent traffic counters and VLAN memberships for each rack dataplane switch interface.
  13. The GMOC database contains recent MAC address table contents for static VLANs which appear on rack dataplane switches.
  14. The GMOC database contains data about whether each experimental VM server has recently been online.
  15. The GMOC database contains overall CPU, disk, and memory utilization, and VM count and capacity, for each experimental VM server.
  16. The GMOC database contains overall interface counters for experimental VM server dataplane interfaces.
  17. The GMOC database contains recent results of at least one end-to-end health check which simulates an experimenter reserving and using at least one resource in the rack.
  18. A site administrator can locate current and recent CPU and memory utilization for each rack network device, and can find recent changes or errors in a log.
  19. A site administrator can locate current configuration of flowvisor, FOAM, and any other OpenFlow services, and find logs of recent activity and changes.
  20. For each infrastructure and experimental host, a site administrator can locate current and recent uptime, CPU, disk, and memory utilization, interface traffic counters, process counts, and active user counts.
  21. A site administrator can locate recent syslogs for all infrastructure and experimental hosts.
  22. A site administrator can locate information about the network reachability of all rack infrastructure which should live on the control network, and can get alerts when any rack infrastructure control IP becomes unavailable from the rack server host, or when the rack server host cannot reach the commodity internet.
  23. A site administrator can get information about the power utilization of rack PDUs.
  24. Given a public IP address and port, a pool VLAN, or a sliver name, a site administrator or GMOC staffer can identify the email address of the experimenter who controlled that resource at a particular time.
  25. For trending purposes, per-rack or per-aggregate summaries are collected of the count of distinct users who have been active on a given rack. Racks may provide raw sliver/user data to GMOC, or may produce their own trending summaries on demand.
  26. Meso-scale reachability testing can report on the recent liveness of the rack static VLANs by pinging a per-rack IP in each Meso-scale monitoring subnet.
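
A minimal sketch for item 5, assuming ntpdate is available on the submitting host; query mode reports the clock offset without changing anything:

    # Report this host's offset relative to gmoc-db (no clock change is made):
    ntpdate -q gmoc-db.grnoc.iu.edu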

Emergency Stop Acceptance Test

Prerequisites

  • GMOC delivers updated Emergency Stop procedure document. [add_link_when_available]
  • GPO writes a generic Emergency Stop procedure, based on the updated GMOC procedure. [add_link_when_available]
  • Emergency stop coordination has taken place to make sure that all sites are aware of their role and that all expected steps are documented.

Procedure

GPO executes the generic Emergency Stop procedure.

Requirements Validation

This section maps test cases to individual acceptance test criteria. It also documents the requirements that are not validated.

Validated Requirements Mappings

Administration Acceptance Test Requirements Mapping

  • (C.1.d) Ability to support Microsoft Windows on a bare-metal node.
  • (C.3.a) Ability of the Administrator account type to add/delete/modify 2 or more accounts.
  • (C.3.a) Ability of the Operator account type to add/delete/modify 2 or more accounts.
  • (C.3.a) Ability of the User account type to be added/deleted/modified for 2 or more accounts.
  • (C.3.a) Administrator accounts have super-user privileges on all rack node types (network, compute, misc.).
  • (C.3.a) Operator accounts have privileges on all rack node types (network, compute, misc.) to access common operator functions such as debug tools, emergency stop, <TBD>.
  • (C.3.a) User account types do not have access to Administrative functions.
  • (C.3.a) User accounts do not have access to Operator functions such as debug tools, emergency stop, <TBD>.
  • (C.3.a) Ability of Administrator, Operator, and User account types to use secure ssh keys rather than passwords.
  • (C.3.b) Account access via username/password is not allowed.
  • (C.3.b) Ability to support account access via ssh keys.
  • (C.3.c) Ability to remotely power cycle rack components.
  • (C.3.e) Procedures and/or tools are provided for changing IP addresses for all rack component types.
  • (C.3.f) Ability to remotely determine MAC addresses for all rack resources (including VMs).
  • (F.5) Site support staff (and GENI operations) must be able to identify all software versions and view all configurations running on all GENI rack components once they are deployed. Rack users' experimental software running on the rack is exempt from this requirement.
  • (F.6) Site support staff (and GENI operations) must be able to view source code for any software covered by the GENI Intellectual Property Agreement that runs on the rack. Rack teams should document the location of such source code in their public site documentation (e.g. on the GENI wiki).

Monitoring Acceptance Test Requirements Mapping

  • (D.5.a) Ability to provide aggregate health via AM API.
  • (D.5.a) Ability to get resource types, counts, states and utilization.
  • (D.5.a) Ability to get the following for active slivers: uptime, resource utilization, performance data.
  • (D.5.b) Ability to get network device status, traffic counters, traffic type.
  • (D.5.b) Ability to get network device VLANs by interface, MAC address tables for VLANs.
  • (D.8) Each rack has an always-available interface on the Meso-scale VLAN.

ExoGENI Single Site Acceptance Test - Use Case 1

  • (C.1.a) Ability to operate the advertised minimum number of hosts in a rack simultaneously in multiple experiments.
  • (C.1.b) Ability to support at least one bare-metal node using a supported Linux OS.
  • (C.1.c) Ability to support multiple VMs simultaneously in a single rack.
  • (C.1.c) Ability to support multiple VMs and bare-metal nodes simultaneously in a single rack.
  • (C.1.c) Ability to support multiple bare-metal nodes simultaneously in a single rack.
  • (C.3.b) Ability to support account access via SSH keys.
  • (D.5.c) Ability to monitor VM status for CPU, disk, memory utilization.
  • (D.5.c) Ability to monitor VM interface counters.
  • (D.6.b) Ability to monitor VMs: CPU, disk, and memory utilization, interface traffic counters, uptime, process counts, and active user counts.
  • (D.6.b) Ability to monitor bare-metal nodes: CPU, disk, and memory utilization, interface traffic counters, uptime, process counts, and active user counts.
  • (D.7) Ability of logging and reporting to capture active user counts per rack.
  • (G.1) Ability to get VMs with root/admin capabilities.
  • (G.1) Ability to get bare-metal nodes with root/admin capabilities.
  • (B.5) Support at least two different operating systems for compute resources.
  • (B.5.a) Provide images for experimenters.
  • (B.5.b) Advertise image availability in the advertisement RSpec.

ExoGENI Multi-site Acceptance Test - Use Case 2

  • (C.1.a) Ability to isolate simultaneously used resources in an experiment.
  • (C.2.b) Ability to connect a single external VLAN to multiple VMs in the rack.
  • (C.2.b) Ability to connect multiple external VLANs to multiple VMs in the rack.
  • (C.2.c) Ability to simultaneously connect a single external VLAN to multiple VMs and a bare-metal node.
  • (C.2.d) Ability to have unique addressing when multiple experiments are running.
  • (G.2) Ability to provision VM compute resources with multiple dataplane interfaces.
  • (G.3) Ability to get layer two network access to send/receive traffic.
  • (G.4) Ability to run single experiments using single VMs accessing a single interface to access remote racks.
  • (G.4) Ability to run single experiments using multiple VMs accessing a single interface (one, several, and all VMs at once) to access multiple remote racks.
  • (G.4) Ability to run multiple experiments using multiple VMs accessing a single interface (one, several, and all VMs at once) to access multiple remote racks.
  • (G.4) Ability to handle traffic as expected across VLANs for each scenario.
  • (D.6.a) Ability to monitor network devices CPU and memory utilization.
  • (D.6.c) Ability to view status of power utilization.
  • (D.6.c) Ability to view control network reachability for all infrastructure devices (KVMs, PDUs, etc.).
  • (D.6.c) Ability to view control network reachability for the commodity internet.
  • (D.6.c) Ability to view control network reachability for the GENI data plane.

ExoGENI Multi-site OpenFlow Acceptance Test - Use Case 5

  • (D.6.a) Ability to monitor OpenFlow configurations.
  • (D.6.a) Ability to monitor OpenFlow Status.
  • (D.5.d) Ability to monitor OpenFlow health checks that are minimally run hourly.
  • (C.2.f) Ability to run multiple OpenFlow controllers to control multiple Network Resources in one rack.
  • (C.2.f) Ability to manage OF rack resources in a multi-site scenario.
  • (C.2.f) Ability of administrative functions to show which controllers are associated with the OF Switch.

Emergency Stop Acceptance Test Requirements Mapping

  • (C.3.d) Verify Emergency Stop participants and stakeholders are identified.
  • (C.3.d) Verify Emergency Stop triggers have been identified and implemented.
  • (C.3.d) Verify existence of GENI Operational Contact Lists (these may differ by rack type).
  • (C.3.d) Verify security implications of the emergency stop have been considered.
  • (C.3.d) Verify that a correlation exists between requests and aggregates.
  • (C.3.d) Verify that response time expectations are defined for severe and urgent cases.
  • (C.3.d) Verify that an escalation path is defined for initial escalation and quarantine.
  • (C.3.d) Verify that response time expectations are defined for the issue reporter.

Requirements not verified

The following requirement is not verified in this plan. No plan exists for validating it at this time; when a plan is created, it will be executed in the GPO Lab.

  • (C.2.a) "Support at least 100 simultaneous active (e.g. actually passing data) layer 2 Ethernet VLAN connections to the rack. For this purpose, VLAN paths must terminate on separate rack VMs, not on the rack switch."

Glossary

The following is a glossary of terminology used in this plan; for additional terminology definitions, see the GENI Glossary page.

  • Account type:
    • Experimenter: a person accessing the rack using a GENI credential and the GENI AM API.
    • Administrator: a person who has fully-privileged access to, and responsibility for, the rack infrastructure (servers, network devices, etc.) at a given location.
    • Operator: a person who has unprivileged/partially-privileged access to the rack infrastructure at a given location, and has responsibility for one or a few particular functions.
  • Baseline Monitoring: Set of monitoring functions which show aggregate health for VMs and switches, their interface status, and traffic counts for interfaces and VLANs. Includes resource availability and utilization.
  • Experimental compute resources:
    • VM: an experimental compute resource which is a virtual machine located on a physical machine in the rack.
    • bare-metal node: an experimental compute resource which is a physical machine usable by experimenters without virtualization.
    • compute resource: either a VM or a bare metal node.
  • Experimental network resources:
    • Static VLAN: The VLAN is provisioned entirely out of band. Admins set up the VLAN manually; experimenters must know the VLAN ID and request connection to it from the rack AM(s).
    • Pool VLAN: The VLAN is provisioned dynamically from a pool of manually pre-allocated VLANs. Admins set up the pool, and configure the VLAN IDs into the rack AM(s). Experimenters do not specify a VLAN ID in their requests.
    • Dynamic VLAN: The VLAN is provisioned dynamically everywhere that it exists. Admins don't do any out of band setup work; experimenters do not specify a VLAN ID in their requests.
