Changes between Version 34 and Version 35 of GENIRacksHome/InstageniRacks/AcceptanceTestStatus/IG-ADM-1
- Timestamp: 12/20/12 16:55:41
v34 v35
  ''This page is GPO's working page for performing IG-ADM-1. It is public for informational purposes, but it is not an official status report. See [wiki:GENIRacksHome/InstageniRacks/AcceptanceTestStatus] for the current status of InstaGENI acceptance tests.''

- ''Last substantive edit of this page: 2012-11-12''
+ ''Last substantive edit of this page: 2012-12-20''

- == Page format ==
+ = Page format =

  * The status chart summarizes the state of this test
  * The high-level description from test plan contains text copied exactly from the public test plan and acceptance criteria pages.
- * The steps contain things i will actually do/verify:
+ * The steps contain things I will actually do/verify:
- * Steps may be composed of related substeps where i find this useful for clarity
+ * Steps may be composed of related substeps where I find this useful for clarity
  * Each step is identified as either "(prep)" or "(verify)":
  * Prep steps are just things we have to do. They're not tests of the rack, but are prerequisites for subsequent verification steps
  * Verify steps are steps in which we will actually look at rack output and make sure it is as expected. They contain a '''Using:''' block, which lists the steps to run the verification, and an '''Expect:''' block which lists what outcome is expected for the test to pass.

- == Status of test ==
-
- See [wiki:GENIRacksHome/InstageniRacks/AcceptanceTestStatus] for the meanings of test states.
-
- ''Note: all steps of this test are blocked on the arrival of the BBN rack at BBN. However, we plan to do preliminary testing of some steps using the rack at Utah.
For the time being, we need to differentiate between steps which are blocked until the BBN rack arrives, and steps which may be blocked from testing at Utah by some shorter-term requirement, as follows:''
- * [[Color(yellow,Blocked-site)]]: A step which will not be tested on the Utah rack, and is blocked until the BBN site rack arrives.
- * [[Color(orange,Blocked-Utah)]]: A step which will be tested on the Utah rack, and is blocked on a requirement for access or configuration of the Utah rack.
-
- || '''Step''' || '''State''' || '''Date completed''' || '''Open Tickets''' || '''Closed Tickets/Comments''' ||
- || 1 || [[Color(yellow,Blocked-site)]] || || || blocked on purchase and shipping of BBN rack ||
- || 2A || [[Color(yellow,Blocked-site)]] || || [instaticket:23] || blocked on 1; i've done no testing of this yet, but opened a ticket anyway based on a DNS mismatch i found on the Utah rack ||
- || 2B || [[Color(yellow,Blocked-site)]] || || || blocked on 2A ||
- || 2C || [[Color(yellow,Blocked-site)]] || || || blocked on 2B ||
- || 3A || [[Color(yellow,Complete)]] || || || requires re-testing after 1 is completed ||
- || 3B || [[Color(yellow,Complete)]] || || || requires re-testing after 1 is completed ||
- || 3C || [[Color(yellow,Complete)]] || || || ([instaticket:42]) requires re-testing after 1 is completed ||
- || 3D || [[Color(yellow,Complete)]] || || || requires re-testing after 1 is completed ||
- || 3E || [[Color(yellow,Complete)]] || || || requires re-testing after 1 is completed ||
- || 3F || [[Color(yellow,Complete)]] || || || requires re-testing after 1 is completed ||
- || 3G || [[Color(yellow,Complete)]] || || || ([instaticket:21]) the SSL (issuer,serial) collision problem described in [instaticket:21] will be a problem for site admins who use Firefox; requires re-testing after 1 is completed ||
- || 3H || [[Color(yellow,Complete)]] || || || requires re-testing after 1 is completed ||
- || 4A || [[Color(yellow,Blocked-site)]] || || || blocked on 1 ||
- || 4B || [[Color(yellow,Blocked-site)]] || || || blocked on 1 ||
- || 4C || [[Color(yellow,Blocked-site)]] || || || blocked on 1 ||
- || 4D || [[Color(yellow,Blocked-site)]] || || || blocked on 1 ||
- || 5A || [[Color(yellow,Complete)]] || || [instaticket:23] || may need to be revisited pending resolution of control host's hostname; blocked on 1 for testing of private switch IPs ||
- || 5B || [[Color(yellow,Complete)]] || || || requires re-testing after Utah mesoscale dataplane is connected via UEN; requires re-testing after 1 is completed ||
- || 5C || [[Color(red,Fail)]] || 2012-05-27 || || the InstaGENI model does not include a "rack health" high-level UI ||
- || 5D || [[Color(yellow,Complete)]] || || || requires re-testing after 1 is completed ||
- || 6A || [[Color(green,Pass)]] || || || ([instaticket:43]) ||
- || 6B || [[Color(green,Pass)]] || || || ||
- || 6C || [[Color(yellow,Complete)]] || || || interim notifications during the test period have been agreed on; revisit longer-term plan for notifications later ||
- == High-level description from test plan ==

+ = Status of test =
+
+ ''Note: As of Version 35 of this page, this status chart refers only to the BBN rack. We ran some of these tests on the Utah rack; those results are still documented below, but they are no longer part of this table.''
+
+ See [wiki:GENIRacksHome/InstageniRacks/AcceptanceTestStatus#Legend] for the meanings of test states.
+ || '''Step''' || '''State''' || '''Date completed''' || '''Open Tickets''' || '''Closed Tickets/Comments''' ||
+ || 1 || [[Color(green,Pass)]] || || || ||
+ || 2A || [[Color(green,Pass)]] || || || ||
+ || 2B || [[Color(green,Pass)]] || || || ||
+ || 2C || [[Color(green,Pass)]] || || || ||
+ || 3A || [[Color(green,Pass)]] || || || ||
+ || 3B || [[Color(green,Pass)]] || || || ||
+ || 3C || [[Color(green,Pass)]] || || || ||
+ || 3D || [[Color(green,Pass)]] || || || ||
+ || 3E || [[Color(green,Pass)]] || || || ||
+ || 3F || [[Color(green,Pass)]] || || || ||
+ || 3G || [[Color(#63B8FF,In Progress)]] || || || ||
+ || 3H || [[Color(#63B8FF,In Progress)]] || || || ||
+ || 4A || [[Color(#63B8FF,In Progress)]] || || || ||
+ || 4B || [[Color(#63B8FF,In Progress)]] || || || ||
+ || 4C || [[Color(#63B8FF,In Progress)]] || || || ||
+ || 4D || [[Color(#63B8FF,In Progress)]] || || || ||
+ || 5A || [[Color(#63B8FF,In Progress)]] || || || ||
+ || 5B || [[Color(#63B8FF,In Progress)]] || || || ||
+ || 5C || [[Color(#63B8FF,In Progress)]] || || || ||
+ || 5D || [[Color(#63B8FF,In Progress)]] || || || ||
+ || 6A || [[Color(#63B8FF,In Progress)]] || || || ||
+ || 6B || [[Color(#63B8FF,In Progress)]] || || || ||
+ || 6C || [[Color(#63B8FF,In Progress)]] || || || ||

+ = High-level description from test plan =

  This "test" uses BBN as an example site by verifying that we can do all the things we need to do to integrate the rack into our standard local procedures for systems we host.

- === Procedure ===
+ == Procedure ==

  * InstaGENI and GPO power and wire the BBN rack
  … …
  * GPO reviews the documented parts list, power requirements, physical and logical network connectivity requirements, and site administrator community requirements, verifying that these documents should be sufficient for a new site to use when setting up a rack.
- === Criteria to verify as part of this test ===
+ == Criteria to verify as part of this test ==

  * VI.02. A public document contains a parts list for each rack. (F.1)
  … …
  * VII.02. Site administrators can understand the physical power, console, and network wiring of components inside their rack and document this in their preferred per-site way. (F.1)

- == Step 1 (prep): InstaGENI and GPO power and wire the BBN rack ==
+ = Step 1 (prep): InstaGENI and GPO power and wire the BBN rack =

  This step covers the physical delivery of the rack to BBN, the transport of the rack inside BBN to the GPO lab, and the cabling, powering, and initial configuration of the rack.

- == Step 2: Configure and verify DNS ==
- === Step 2A (verify): Find out what IP-to-hostname mapping to use ===
+ = Step 2: Configure and verify DNS =
+ == Step 2A (verify): Find out what IP-to-hostname mapping to use ==

  '''Using:'''
  … …
  * Reasonable IP-to-hostname mappings for 126 valid IPs allocated for InstaGENI use in `192.1.242.128/25`

- === Step 2B (prep): Insert IP-to-hostname mapping in DNS ===
+ === Results of testing step 2A: 2012-12-20 ===
+
+ We discussed this via e-mail, and concluded that we should create these DNS entries in gpolab.bbn.com:
+
+ {{{
+ ;; 192.1.242.128/25: InstaGENI rack
+
+ ; Delegate the whole subdomain to boss.instageni.gpolab.bbn.com, with
+ ; ns.emulab.net as a secondary.
+ ns.instageni  IN  A   192.1.242.132
+ instageni     IN  NS  ns.instageni
+ instageni     IN  NS  ns.emulab.net.
+ }}}
+
+ And these in 242.1.192.in-addr.arpa:
+
+ {{{
+ ;; 192.1.242.129/25: instageni.gpolab.bbn.com (InstaGENI rack control network)
+
+ ; Delegate a subdomain to boss.instageni.gpolab.bbn.com, and generate
+ ; CNAMEs pointing to it.
+ 129/25  IN  NS  ns.instageni.gpolab.bbn.com.
+ 129/25  IN  NS  ns.emulab.net.
+ $GENERATE 129-255 $ IN CNAME $.129/25.242.1.192.in-addr.arpa.
116 }}} 117 118 == Step 2B (prep): Insert IP-to-hostname mapping in DNS == 95 119 96 120 * Fully populate `192.1.242.128/25` PTR entries in GPO lab DNS 97 121 * Fully populate `instageni.gpolab.bbn.com` A entries in GPO lab DNS 98 122 99 == = Step 2C (verify): Test all PTR records ===123 == Step 2C (verify): Test all PTR records == 100 124 101 125 '''Using:''' … … 117 141 }}} 118 142 119 == Step 3: GPO requests and receives administrator accounts == 120 121 === Step 3A: GPO requests access to boss and ops nodes === 143 === Results of testing step 2C: 2012-12-20 === 144 145 Many addresses aren't defined: 146 147 {{{ 148 [13:46:15] jbs@anubis:/home/jbs 149 +$ for lastoct in {129..255} ; do host 192.1.242.$lastoct ; done 150 Host 129.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 151 130.242.1.192.in-addr.arpa is an alias for 130.129/25.242.1.192.in-addr.arpa. 152 130.129/25.242.1.192.in-addr.arpa domain name pointer control.instageni.gpolab.bbn.com. 153 131.242.1.192.in-addr.arpa is an alias for 131.129/25.242.1.192.in-addr.arpa. 154 131.129/25.242.1.192.in-addr.arpa domain name pointer control-ilo.instageni.gpolab.bbn.com. 155 132.242.1.192.in-addr.arpa is an alias for 132.129/25.242.1.192.in-addr.arpa. 156 132.129/25.242.1.192.in-addr.arpa domain name pointer boss.instageni.gpolab.bbn.com. 157 133.242.1.192.in-addr.arpa is an alias for 133.129/25.242.1.192.in-addr.arpa. 158 133.129/25.242.1.192.in-addr.arpa domain name pointer ops.instageni.gpolab.bbn.com. 159 134.242.1.192.in-addr.arpa is an alias for 134.129/25.242.1.192.in-addr.arpa. 160 134.129/25.242.1.192.in-addr.arpa domain name pointer foam.instageni.gpolab.bbn.com. 161 135.242.1.192.in-addr.arpa is an alias for 135.129/25.242.1.192.in-addr.arpa. 162 135.129/25.242.1.192.in-addr.arpa domain name pointer flowvisor.instageni.gpolab.bbn.com. 163 Host 136.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 164 Host 137.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 165 Host 138.242.1.192.in-addr.arpa. 
not found: 3(NXDOMAIN) 166 Host 139.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 167 140.242.1.192.in-addr.arpa is an alias for 140.129/25.242.1.192.in-addr.arpa. 168 140.129/25.242.1.192.in-addr.arpa domain name pointer pc1.instageni.gpolab.bbn.com. 169 141.242.1.192.in-addr.arpa is an alias for 141.129/25.242.1.192.in-addr.arpa. 170 141.129/25.242.1.192.in-addr.arpa domain name pointer pc2.instageni.gpolab.bbn.com. 171 142.242.1.192.in-addr.arpa is an alias for 142.129/25.242.1.192.in-addr.arpa. 172 142.129/25.242.1.192.in-addr.arpa domain name pointer pc3.instageni.gpolab.bbn.com. 173 143.242.1.192.in-addr.arpa is an alias for 143.129/25.242.1.192.in-addr.arpa. 174 143.129/25.242.1.192.in-addr.arpa domain name pointer pc4.instageni.gpolab.bbn.com. 175 144.242.1.192.in-addr.arpa is an alias for 144.129/25.242.1.192.in-addr.arpa. 176 144.129/25.242.1.192.in-addr.arpa domain name pointer pc5.instageni.gpolab.bbn.com. 177 Host 145.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 178 Host 146.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 179 Host 147.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 180 Host 148.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 181 Host 149.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 182 Host 150.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 183 Host 151.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 184 Host 152.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 185 Host 153.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 186 Host 154.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 187 Host 155.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 188 Host 156.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 189 Host 157.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 190 Host 158.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 191 Host 159.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 192 Host 160.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 193 Host 161.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 194 Host 162.242.1.192.in-addr.arpa. 
not found: 3(NXDOMAIN) 195 Host 163.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 196 Host 164.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 197 Host 165.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 198 Host 166.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 199 Host 167.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 200 Host 168.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 201 Host 169.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 202 Host 170.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 203 Host 171.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 204 Host 172.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 205 Host 173.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 206 Host 174.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 207 Host 175.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 208 Host 176.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 209 Host 177.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 210 Host 178.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 211 Host 179.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 212 Host 180.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 213 Host 181.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 214 Host 182.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 215 Host 183.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 216 Host 184.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 217 Host 185.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 218 Host 186.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 219 Host 187.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 220 Host 188.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 221 Host 189.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 222 Host 190.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 223 Host 191.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 224 Host 192.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 225 Host 193.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 226 Host 194.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 227 Host 195.242.1.192.in-addr.arpa. 
not found: 3(NXDOMAIN) 228 Host 196.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 229 Host 197.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 230 Host 198.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 231 Host 199.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 232 Host 200.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 233 Host 201.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 234 Host 202.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 235 Host 203.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 236 Host 204.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 237 Host 205.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 238 Host 206.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 239 Host 207.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 240 Host 208.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 241 Host 209.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 242 Host 210.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 243 Host 211.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 244 Host 212.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 245 Host 213.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 246 Host 214.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 247 Host 215.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 248 Host 216.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 249 Host 217.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 250 Host 218.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 251 Host 219.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 252 Host 220.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 253 Host 221.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 254 Host 222.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 255 Host 223.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 256 Host 224.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 257 Host 225.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 258 Host 226.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 259 Host 227.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 260 Host 228.242.1.192.in-addr.arpa. 
not found: 3(NXDOMAIN) 261 Host 229.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 262 Host 230.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 263 Host 231.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 264 Host 232.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 265 Host 233.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 266 Host 234.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 267 Host 235.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 268 Host 236.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 269 Host 237.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 270 Host 238.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 271 Host 239.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 272 Host 240.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 273 Host 241.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 274 Host 242.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 275 Host 243.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 276 Host 244.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 277 Host 245.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 278 Host 246.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 279 Host 247.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 280 Host 248.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 281 Host 249.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 282 Host 250.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 283 Host 251.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 284 Host 252.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 285 Host 253.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 286 Host 254.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 287 Host 255.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 288 }}} 289 290 We think that's normal: The in-use ones are in DNS, the not-in-use ones aren't. 
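The alias chains in the lookups above are exactly what the `$GENERATE` directive quoted in step 2A synthesizes. Here is a small sketch of that expansion (zone names taken from the step 2A snippet; this only prints the records BIND would generate, it does not query DNS):

```shell
#!/bin/sh
# Sketch: expand "$GENERATE 129-255 $ IN CNAME $.129/25.242.1.192.in-addr.arpa."
# the way BIND does: one CNAME per last octet in the delegated /25.
parent="242.1.192.in-addr.arpa"
child="129/25.$parent"

expand_generate() {
    octet=129
    while [ "$octet" -le 255 ]; do
        echo "$octet.$parent. IN CNAME $octet.$child."
        octet=$((octet + 1))
    done
}

expand_generate
```

The entry this prints for octet 150 matches the alias shown for 192.1.242.150 in the lookups above; whether a given alias then resolves depends on whether the child zone has a PTR for that octet.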
291 292 I tried creating a VM with a public IP address, using this rspec: 293 294 {{{ 295 <?xml version="1.0" encoding="UTF-8"?> 296 <rspec xmlns="http://www.geni.net/resources/rspec/3" 297 xmlns:xs="http://www.w3.org/2001/XMLSchema-instance" 298 xmlns:emulab="http://www.protogeni.net/resources/rspec/ext/emulab/1" 299 xs:schemaLocation="http://www.geni.net/resources/rspec/3 300 http://www.geni.net/resources/rspec/3/request.xsd" 301 type="request"> 302 <node client_id="carlin" exclusive="false"> 303 <sliver_type name="emulab-openvz" /> 304 <emulab:routable_control_ip /> 305 </node> 306 </rspec> 307 }}} 308 309 According to my manifest rspec, I got 310 311 {{{ 312 <emulab:vnode name="pcvm2-3"/> <host name="carlin.jbs.pgeni-gpolab-bbn-com.instageni.gpolab.bbn.com"/> <services> <login authentication="ssh-keys" hostname="pcvm2-3.instageni.gpolab.bbn.com" port="22" username="jbs"/> </services> </node> 313 }}} 314 315 That hostname and IP address now resolve: 316 317 {{{ 318 [15:03:32] jbs@anubis:/home/jbs/rspecs/request 319 +$ host pcvm2-3.instageni.gpolab.bbn.com 320 pcvm2-3.instageni.gpolab.bbn.com has address 192.1.242.150 321 322 [15:03:34] jbs@anubis:/home/jbs/rspecs/request 323 +$ host 192.1.242.150 324 150.242.1.192.in-addr.arpa is an alias for 150.129/25.242.1.192.in-addr.arpa. 325 150.129/25.242.1.192.in-addr.arpa domain name pointer pcvm2-3.instageni.gpolab.bbn.com. 
326 }}} 327 328 After I delete my sliver: 329 330 {{{ 331 [15:03:58] jbs@anubis:/home/jbs/rspecs/request 332 +$ omni -a $am deletesliver $slicename 333 [* snip *] 334 Result Summary: Deleted sliver urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+jbs on unspecified_AM_URN at https://instageni.gpolab.bbn.com:12369/protogeni/xmlrpc/am 335 INFO:omni: ============================================================ 336 337 [15:04:43] jbs@anubis:/home/jbs/rspecs/request 338 +$ host pcvm2-3.instageni.gpolab.bbn.com 339 Host pcvm2-3.instageni.gpolab.bbn.com not found: 3(NXDOMAIN) 340 341 [15:05:57] jbs@anubis:/home/jbs/rspecs/request 342 +$ host 192.1.242.150 343 150.242.1.192.in-addr.arpa is an alias for 150.129/25.242.1.192.in-addr.arpa. 344 150.129/25.242.1.192.in-addr.arpa domain name pointer pcvm2-3.instageni.gpolab.bbn.com. 345 }}} 346 347 That second one still works because it's cached on my local nameserver; if I ask the source, it's gone: 348 349 {{{ 350 [15:32:13] jbs@ops.instageni.gpolab.bbn.com:/users/jbs 351 +$ host 192.1.242.150 352 Host 150.242.1.192.in-addr.arpa. not found: 3(NXDOMAIN) 353 }}} 354 355 So I think this is fine: records exist while they're in use, and are removed when they're not. 356 357 = Step 3: GPO requests and receives administrator accounts = 358 359 == Step 3A: GPO requests access to boss and ops nodes == 122 360 123 361 '''Using:''' … … 146 384 }}} 147 385 148 ==== Results of testing step 3A: 2012-05-15 ==== 149 150 ''Note: this is being tested on the Utah GENI rack, where only Chaos has an account.
So Tim and Josh will not be testing, and the hosts to test are `boss.utah.geniracks.net` and `ops.utah.geniracks.net`.'' 386 === Results of testing step 3A: 2012-12-20 === 387 388 I followed the procedure at https://users.emulab.net/trac/protogeni/wiki/RackAdminAccounts#AdminAccountsinEmulab to join the emulab-ops project, and once the Utah folks approved that and made me an admin, I was able to log in to boss and ops, and use sudo: 389 390 {{{ 391 [15:50:40] jbs@anubis:/home/jbs 392 +$ ssh ops.instageni.gpolab.bbn.com sudo whoami 393 root 394 395 [15:50:50] jbs@anubis:/home/jbs 396 +$ ssh boss.instageni.gpolab.bbn.com sudo whoami 397 root 398 }}} 399 400 I asked Chaos and Tim to follow the procedure at that URL as well, and they did, and I approved their accounts by following the procedure at https://users.emulab.net/trac/protogeni/wiki/RackAdminAccounts#AddingmoreadminaccountstoEmulab, and they confirmed that they could log in to boss and ops. 401 402 === Results of testing step 3A: 2012-05-15 === 403 404 ''Note: This test was run on the Utah rack, where only Chaos has an account. So Tim and Josh will not be testing, and the hosts to test are `boss.utah.geniracks.net` and `ops.utah.geniracks.net`.'' 151 405 152 406 * Chaos successfully used public-key login and sudo from a BBN subnet (128.89.68.0/23) to boss: … … 183 437 }}} 184 438 185 == = Step 3B: GPO requests access to FOAM VM ===439 == Step 3B: GPO requests access to FOAM VM == 186 440 187 441 * Request accounts for GPO ops staffers on foam.instageni.gpolab.bbn.com … … 202 456 }}} 203 457 204 ==== Results of testing step 3B: 2012-07-04 ==== 458 === Results of testing step 3B: 2012-12-20 === 459 460 I was named as the site admin in the site survey, and was given an account on the FOAM VM.
I was able to log in and use sudo: 461 462 {{{ 463 [15:57:46] jbs@anubis:/home/jbs 464 +$ ssh foam.instageni.gpolab.bbn.com sudo whoami 465 root 466 }}} 467 468 I then created accounts for Chaos and Tim, following the procedure at https://users.emulab.net/trac/protogeni/wiki/RackAdminAccounts#AdminAccountsonInstaGeniRacks. I got their public keys from their Emulab accounts, and put them into chaos.keys and tupty.keys in my homedir, and then: 469 470 {{{ 471 sudo /usr/local/bin/mkadmin.pl chaos chaos.keys 472 sudo /usr/local/bin/mkadmin.pl tupty tupty.keys 473 }}} 474 475 They then confirmed that they could log in, and run 'sudo whoami'. 476 477 === Results of testing step 3B: 2012-07-04 === 478 479 ''Note: This test was run on the Utah rack.'' 205 480
I got their public keys from their Emulab accounts, and put them into chaos.keys and tupty.keys in my homedir, and then: 526 527 {{{ 528 sudo /usr/local/bin/mkadmin.pl chaos chaos.keys 529 sudo /usr/local/bin/mkadmin.pl tupty tupty.keys 530 }}} 531 532 They then confirmed that they could log in, and run 'sudo whoami'. 533 534 === Results of testing step 3C: 2012-11-12 === 535 536 ''Note: This test was run on the Utah rack.'' 241 537 242 538 * Chaos can SSH to flowvisor.utah.geniracks.net: … … 254 550 }}} 255 551 256 == = Step 3D: GPO requests access to infrastructure server ===552 == Step 3D: GPO requests access to infrastructure server == 257 553 258 554 * Request accounts for GPO ops staffers on the VM server node which runs boss, ops, foam, and flowvisor … … 273 569 }}} 274 570 275 ==== Results of testing step 3D: 2012-05-15 ==== 276 277 ''Note: this is being tested on the Utah GENI rack, where only Chaos has an account. So Tim and Josh will not be testing, and the host to test is `utah.control.geniracks.net`.'' 571 === Results of testing step 3D: 2012-12-20 === 572 573 I was named as the site admin in the site survey, and was given an account on the control host. I was able to log in and use sudo: 574 575 {{{ 576 [16:14:44] jbs@anubis:/home/jbs 577 +$ ssh control.instageni.gpolab.bbn.com sudo whoami 578 root 579 }}} 580 581 I then created accounts for Chaos and Tim, following the procedure at https://users.emulab.net/trac/protogeni/wiki/RackAdminAccounts#AdminAccountsonInstaGeniRacks. I got their public keys from their Emulab accounts, and put them into chaos.keys and tupty.keys in my homedir, and then: 582 583 {{{ 584 sudo /usr/local/bin/mkadmin.pl chaos chaos.keys 585 sudo /usr/local/bin/mkadmin.pl tupty tupty.keys 586 }}} 587 588 They then confirmed that they could log in, and run 'sudo whoami'. 589 590 === Results of testing step 3D: 2012-05-15 === 591 592 ''Note: This test was run on the Utah rack, where only Chaos has an account.
So Tim and Josh will not be testing, and the host to test is `utah.control.geniracks.net`.'' 278 593 279 594 * Chaos successfully used public-key login and sudo from a BBN subnet (128.89.68.0/23) to the control node: … … 299 614 }}} 300 615 301 == = Step 3E: GPO requests access to network devices ===616 == Step 3E: GPO requests access to network devices == 302 617 303 618 '''Using:''' … … 311 626 {{{ 312 627 show running-config 313 show mac-address-table 314 }}} 315 316 ==== Results of testing step 3E: 2012-05-16 ==== 628 show mac-address 629 }}} 630 631 === Results of testing step 3E: 2012-12-20 === 632 633 I logged in to boss, and found the switch IP addresses in /etc/hosts: 634 635 {{{ 636 [16:24:49] jbs@boss.instageni.gpolab.bbn.com:/users/jbs 637 +$ grep procurve /etc/hosts 638 10.2.1.253 procurve1-alt 639 10.1.1.253 procurve1 640 10.3.1.253 procurve2 641 }}} 642 643 I predict that I can reach these from boss, which has an interface on 10.1.1.0/24 and 10.3.1.0/24. 644 645 I got the switch passwords out of /usr/testbed/etc/switch.pswd, and logged in and ran those commands.
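The switch-discovery step above can be scripted. A sketch, run here against a copy of the grep output shown above rather than a live rack's /etc/hosts (the `hosts_excerpt` variable is illustrative):

```shell
#!/bin/sh
# Sketch: pull the ProCurve management addresses out of /etc/hosts on boss.
# hosts_excerpt is a copy of the entries shown in the grep output above.
hosts_excerpt=$(cat <<'EOF'
10.2.1.253 procurve1-alt
10.1.1.253 procurve1
10.3.1.253 procurve2
EOF
)

# Print "name ip" pairs for every procurve entry.
printf '%s\n' "$hosts_excerpt" | awk '/procurve/ { print $2, $1 }'
```

On a real boss node, replacing the heredoc with `awk '/procurve/ { print $2, $1 }' /etc/hosts` would do the same lookup directly.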
646 647 On procurve1, the control switch: 648 649 {{{ 650 HP-E2620-24# show running-config 651 652 Running configuration: 653 654 ; J9623A Configuration Editor; Created on release #RA.15.05.0006 655 ; Ver #01:01:00 656 657 hostname "HP-E2620-24" 658 ip default-gateway 10.1.1.254 659 vlan 1 660 name "DEFAULT_VLAN" 661 untagged 1-22,25-28 662 ip address 10.254.254.253 255.255.255.0 663 no untagged 23-24 664 ip igmp 665 exit 666 vlan 11 667 name "control-alternate" 668 untagged 24 669 ip address 10.2.1.253 255.255.255.0 670 ip igmp 671 exit 672 vlan 10 673 name "control-hardware" 674 untagged 23 675 ip address 10.1.1.253 255.255.255.0 676 exit 677 no web-management 678 snmp-server community "e8074ebc557d" unrestricted 679 aaa authentication ssh login public-key 680 aaa authentication ssh enable public-key 681 management-vlan 10 682 no dhcp config-file-update 683 password manager 684 password operator 685 }}} 686 687 and 688 689 {{{ 690 HP-E2620-24# show mac-address 691 692 Status and Counters - Port Address Table 693 694 MAC Address Port VLAN 695 ------------- ----- ---- 696 00163e-f0a000 25 1 697 00163e-f0a100 25 1 698 02072d-51f46a 25 1 699 022b51-e42941 1 1 700 02325e-a545df 3 1 701 0240fd-8370c0 3 1 702 026c2b-9278af 1 1 703 029e26-f15299 25 1 704 02e3b1-81dd08 3 1 705 10604b-9717cc 6 1 706 10604b-976a78 22 1 707 10604b-97dae2 2 1 708 10604b-97f7e4 4 1 709 10604b-980386 10 1 710 10604b-9b47fc 1 1 711 10604b-9b69b8 25 1 712 10604b-9b8214 3 1 713 b4b52f-69db40 8 1 714 ccef48-7a7aa9 26 1 715 000099-989701 23 10 716 10604b-9b69ba 23 10 717 }}} 718 719 On procurve2, the dataplane switch: 720 721 {{{ 722 HP-E5406zl# show running-config 723 724 Running configuration: 725 726 ; J8697A Configuration Editor; Created on release #K.15.06.5008 727 ; Ver #02:10.0d:1f 728 729 hostname "HP-E5406zl" 730 module 1 type J9550A 731 module 5 type J9550A 732 interface E1 733 speed-duplex auto-1000 734 exit 735 interface E2 736 speed-duplex auto-1000 737 exit 738 739 [* snip -- long *] 
740 }}} 741 742 and 743 744 {{{ 745 HP-E5406zl# show mac-address 746 747 Status and Counters - Port Address Table 748 749 MAC Address Port VLAN 750 ------------- ------ ---- 751 000099-989703 E20 10 752 00163e-f0a103 E20 10 753 10604b-9b69ce E20 10 754 0012e2-b8a5d0 E24 1750 755 }}} 756 757 It wasn't entirely obvious how to find the switch names, usernames, and passwords; I created InstaGENI ticket [instaticket:71] to track the task of documenting that. 758 759 === Results of testing step 3E: 2012-05-16 === 317 760 318 761 ''Note: testing using Utah rack'' … … 329 772 Escape character is '^]'. 330 773 ... 331 Password: 774 Password: 332 775 ... 333 ProCurve Switch 2610-24# show running-config 776 ProCurve Switch 2610-24# show running-config 334 777 335 778 Running configuration: … … 337 780 ; J9085A Configuration Editor; Created on release #R.11.70 338 781 ... 339 ProCurve Switch 2610-24# show mac-address 782 ProCurve Switch 2610-24# show mac-address 340 783 341 784 Status and Counters - Port Address Table … … 345 788 Do you want to log out [y/n]? y 346 789 }}} 347 * Login from boss to procurve2 using password in `/usr/testbed/etc/switch.pswd`:790 * Login from boss to procurve2 using password in `/usr/testbed/etc/switch.pswd`: 348 791 {{{ 349 792 boss,[~],10:23(0)$ telnet procurve2 … … 352 795 Escape character is '^]'. 353 796 ... 354 Password: 797 Password: 355 798 ... 356 799 ProCurve Switch 6600ml-48G-4XG# show running-config … … 360 803 ; J9452A Configuration Editor; Created on release #K.14.41 361 804 ... 
ProCurve Switch 6600ml-48G-4XG# show mac-address

 Status and Counters - Port Address Table
…
}}}

== Step 3F: GPO requests access to shared OpenVZ nodes ==

'''Using:'''
…
 * Access to the node is as root and/or it is possible to run a command via sudo

=== Results of testing step 3F: 2012-12-20 ===

I looked at https://boss.instageni.gpolab.bbn.com/nodecontrol_list.php3?showtype=dl360 to find a shared node, and found that pc1 is one (it's in EID 'shared-nodes').

On boss, I was able to run 'sudo whoami' on pc1:

{{{
[16:51:06] jbs@boss.instageni.gpolab.bbn.com:/users/jbs
+$ ssh pc1 sudo whoami
root
}}}

And I was able to log in directly to pc1 as root:

{{{
[16:51:50] jbs@boss.instageni.gpolab.bbn.com:/users/jbs
+$ sudo ssh pc1 whoami
root
}}}

=== Results of testing step 3F: 2012-05-16 ===

 * pc5 is currently configured as a shared OpenVZ node: it is in the experiment `emulab-ops/shared-nodes`
…
{{{
…
}}}

== Step 3G: GPO requests access to iLO remote management interfaces for experimental nodes ==

'''Using:'''
 * GPO requests access to the experimental node iLO management interfaces from instageni-ops
 * Determine how to use these interfaces to access remote control and remote console interfaces for experimental nodes
 * For each experimental node in the BBN rack:
   * Access the iLO interface and view status information
   * View the interface for remotely power-cycling the node
   * Launch the remote console for the node
…
 * Launching the remote console at each iLO succeeds

=== Results of testing step 3G: 2012-05-16 ===

 * Here's Utah's information about how to use the elabman consoles:
…
{{{
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '155.98.34.104' (DSA) to the list of known hosts.
elabman@155.98.34.104's password:
User:elabman logged-in to ILOUSE211XXJS.utah.geniracks.net(155.98.34.104)
iLO 3 Advanced 1.28 at Jan 13 2012
}}}
…
{{{
Secure Connection Failed

An error occurred during a connection to 155.98.34.103.
…
}}}
…
 * pc1 (.103):
{{{
Issued To      CN=ILOUSE211XXJR.utah.geniracks.net, OU=ISS, O=Hewlett-Packard Company, L=Houston, ST=Texas, C=US
Issued By      C=US, ST=TX, L=Houston, O=Hewlett-Packard Company, OU=ISS, CN=iLO3 Default Issuer (Do not trust)
Valid From     Wed, 11 Jan 2012
Valid Until    Mon, 12 Jan 2037
Serial Number  57
}}}
 * pc2 (.102):
{{{
Issued To      CN=ILOUSE211XXJP.utah.geniracks.net, OU=ISS, O=Hewlett-Packard Company, L=Houston, ST=Texas, C=US
Issued By      C=US, ST=TX, L=Houston, O=Hewlett-Packard Company, OU=ISS, CN=iLO3 Default Issuer (Do not trust)
Valid From     Wed, 11 Jan 2012
Valid Until    Mon, 12 Jan 2037
Serial Number  55
}}}
 * pc3 (.104):
{{{
Issued To      CN=ILOUSE211XXJS.utah.geniracks.net, OU=ISS, O=Hewlett-Packard Company, L=Houston, ST=Texas, C=US
Issued By      C=US, ST=TX, L=Houston, O=Hewlett-Packard Company, OU=ISS, CN=iLO3 Default Issuer (Do not trust)
Valid From     Wed, 11 Jan 2012
Valid Until    Mon, 12 Jan 2037
Serial Number  57
}}}
 * pc4 (.105):
{{{
Issued To      CN=ILOUSE211XXJT.utah.geniracks.net, OU=ISS, O=Hewlett-Packard Company, L=Houston, ST=Texas, C=US
Issued By      C=US, ST=TX, L=Houston, O=Hewlett-Packard Company, OU=ISS, CN=iLO3 Default Issuer (Do not trust)
Valid From     Wed, 11 Jan 2012
Valid Until    Mon, 12 Jan 2037
Serial Number  53
}}}
 * pc5 (.101):
{{{
Issued To      CN=ILOUSE211XXJN.utah.geniracks.net, OU=ISS, O=Hewlett-Packard Company, L=Houston, ST=Texas, C=US
Issued By      C=US, ST=TX, L=Houston, O=Hewlett-Packard Company, OU=ISS, CN=iLO3 Default Issuer (Do not trust)
Valid From     Wed, 11 Jan 2012
Valid Until    Mon, 12 Jan 2037
Serial Number  54
}}}
 * Testing remote console:
…

Summary: the Fedora OS images don't have VGA support, so they don't work with the remote consoles. The MFSes and the boot sequence itself do have such support.

== Step 3H: GPO gets access to allocated bare metal nodes by default ==

'''Prerequisites:'''
…
 * Login to the node using root's SSH key succeeds

=== Results of testing step 3H: 2012-05-16 ===

 * The second prerequisite has not been met (all bare metal nodes are unallocated at this time). To meet it, I will try this via omni:
…
{{{
...
$ omni createsliver -a http://www.utah.geniracks.net/protoge
ni/xmlrpc/am ecgtest ~/omni/rspecs/request/misc/protogeni-any-one-node.rspec
INFO:omni:Loading config file /home/chaos/omni/omni_pgeni
INFO:omni:Using control framework pg
…
URL: http://www.utah.geniracks.net/protogeni/xmlrpc/am
-->
INFO:omni:<rspec xmlns="http://protogeni.net/resources/rspec/0.2">
 <node component_manager_urn="urn:publicid:IDN+utah.geniracks.net+authority+cm" component_manager_uuid="c133552e-688f-11e1-8314-00009b6224df" component_urn="urn:publicid:IDN+utah.geniracks.net+node+pc3" component_uuid="49574b15-753a-11e1-a16c-00009b6224df" exclusive="1" hostname="pc3.utah.geniracks.net" sliver_urn="urn:publicid:IDN+utah.geniracks.net+sliver+364" sliver_uuid="e326971a-9f74-11e1-af1c-00009b6224df" sshdport="22" virtual_id="geni1" virtualization_subtype="raw" virtualization_type="emulab-vnode">
 <services> <login authentication="ssh-keys" hostname="pc3.utah.geniracks.net" port="22" username="chaos"/> </services> </node>
</rspec>
INFO:omni: ------------------------------------------------------------
…
Result Summary: Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest expires on 2012-05-17 17:32:24 UTC
Reserved resources on http://www.utah.geniracks.net/protogeni/xmlrpc/am.
INFO:omni: ===========================================================
}}}
That looks good, and implies that pc3.utah.geniracks.net was allocated for my experiment.
…
{{{
Password authentication is disabled to avoid man-in-the-middle attacks.
Keyboard-interactive authentication is disabled to avoid man-in-the-middle attacks.
[root@geni1 ~]#
}}}


= Step 4: GPO inventories the rack based on our own processes =

== Step 4A: Inventory and label physical rack contents ==

'''Using:'''
…
 * Use available rack documentation to determine the correct name of each object
 * If any objects can't be found in public documentation, compare to internal notes, and iterate with InstaGENI
 * Physically label each device in the rack with its name on front and back
 * Inventory all hardware details for rack contents on OpsHardwareInventory
 * Add an ascii rack diagram to OpsHardwareInventory
…
 * We succeed in labelling the devices and adding their hardware details and locations to our inventory

== Step 4B: Inventory rack power requirements ==

'''Using:'''
…
 * We succeed in locating and documenting information about rack power circuits in use

== Step 4C: Inventory rack network connections ==

'''Using:'''
…
 * We are able to determine the OpenFlow configuration of the rack dataplane switch

== Step 4D: Verify government property accounting for the rack ==

'''Using:'''
…
 * We receive a single property tag for the rack, as expected

= Step 5: Configure operational alerting for the rack =

== Step 5A: GPO installs active control network monitoring ==

'''Using:'''
…
 * Each monitored IP is successfully available at least once

=== Results of testing step 5A: 2012-05-18 ===

''Note: this is partially blocked, but I want to get a basic ability to detect outages, so am installing what I can.''
…
 * The switch management IPs are private, so I think this is effectively blocked until the BBN rack arrives, since I don't want to install ganglia on Utah's rack (and I think the right thing to do here is to ping the devices from boss).

=== Results of testing step 5A: 2012-07-04 ===

 * Pings for boss and ops are still operating
…
 * The switch management IPs are private, so I think this is effectively blocked until the BBN rack arrives, since I don't want to install ganglia on Utah's rack (and I think the right thing to do here is to ping the devices from boss).
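The ping monitoring described in step 5A can be sketched as a small Nagios-style check. This is an illustrative sketch only: the function name and the exact check are assumptions, not the actual GPO Nagios configuration, and the hostnames shown in the usage line are just the rack hosts named in this step.

```shell
# Illustrative Nagios-style ping check (a sketch, not the real GPO monitoring
# config).  check_ping pings each host given as an argument once, prints one
# OK/CRITICAL line per host, and returns 2 (Nagios CRITICAL) if any host was
# unreachable, 0 (OK) otherwise.
check_ping() {
    status=0
    for host in "$@"; do
        # One ping, two-second timeout; we only care about reachability here.
        if ping -c 1 -W 2 "$host" >/dev/null 2>&1; then
            echo "PING OK - $host"
        else
            echo "PING CRITICAL - $host"
            status=2
        fi
    done
    return $status
}
```

Usage would look like `check_ping boss.instageni.gpolab.bbn.com ops.instageni.gpolab.bbn.com`; a real deployment would wire this into Nagios as a service check with retry intervals and notification rules.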
== Step 5B: GPO installs active shared dataplane monitoring ==

'''Using:'''
…
 * The monitored IP is successfully available at least once

=== Results of testing step 5B: 2012-07-04 ===

 * Tim added a sliver for monitoring the dataplane subnet yesterday, under slice URN `urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+tuptymon`
 * Pings from BBN NLR to the Utah testpoint are showing up at [http://monitor.gpolab.bbn.com/connectivity/campus.html], and have been successful at least once.
 * When the UEN flowvisor is installed, this sliver will need to be updated to use the new path.

== Step 5C: GPO gets access to monitoring information about the BBN rack ==

'''Using:'''
…
 * I can see detailed information about any services checked

== Step 5D: GPO receives e-mail about BBN rack alerts ==

'''Using:'''
…
 * The duration of the outage

=== Results of testing step 5D: 2012-07-04 ===

 * I've been subscribed to `genirack-ops@flux.utah.edu` since 2012-05-24, and have received a number of e-mail messages.
…
 * This list does not detect and notify about rack problems: InstaGENI does not have a list for that purpose.
= Step 6: Set up contact info and change control procedures =

== Step 6A: InstaGENI operations staff should subscribe to response-team ==

'''Using:'''
…
{{{
…
}}}

=== Results of testing step 6A: 2012-07-04 ===

Per daulis, Rob is subscribed, but no other InstaGENI operators, and not the list itself. I opened [instaticket:43] for this.

=== Results of testing step 6A: 2012-11-12 ===

Verified that `instageni-ops@flux.utah.edu` is subscribed to response-team.

== Step 6B: InstaGENI operations staff should provide contact info to GMOC ==

'''Using:'''
…
 * E-mail `gmoc@grnoc.iu.edu` and request verification that the InstaGENI organization contact info is up-to-date.

=== Results of testing step 6B: 2012-07-04 ===

 * I e-mailed GMOC, and will follow up when I get a response.

=== Results of testing step 6B: 2012-11-12 ===

 * I iterated with Eldar at GMOC, and determined that GMOC has primary and escalation e-mail contacts for InstaGENI Utah, and that it isn't a requirement to have phone contacts if nothing appropriate exists.
== Step 6C: Negotiate an interim change control notification procedure ==

'''Using:'''
…
 * InstaGENI agrees to send notifications about planned outages and changes.

=== Results of testing step 6C: 2012-05-29 ===

InstaGENI has agreed to notify instageni-design when there are outages.