= Provisioning Service Evaluation =
The [http://raven.cs.arizona.edu/projects/project Raven Provisioning Service] project provides a suite of tools:
* Raven - Meta-tool that provides a combined interface to the Stork, Tempest, and Owl tools.
* Stork - Secure package management
* Tempest - Collective package management
* Owl - Slice monitoring
* IFTD - Efficient file transfer
* Service Monitor - Monitor services and restart them as necessary

Raven version 2.2.14-2652 and Owl v1.0 were evaluated for this effort.

Time Frame: This evaluation took place July 20-28, 2010.

= Provisioning Service Findings =

While reviewing Owl, the available information was sufficient to reach the resources and use them.

It was difficult to determine the content of the SFA GID credentials; the
[http://groups.geni.net/geni/attachment/wiki/SliceFedArch/SFA2.0.pdf Slice-Based Federation Architecture Specification] document
was used to determine what this credential contains.

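As that specification explains, a GID is an X.509 certificate binding an identifier and name to a public key, so its contents can be inspected with standard openssl tooling. The sketch below generates a throwaway self-signed certificate to stand in for a real GID file (the file path and subject name are illustrative, not taken from the Raven or SFA documentation):

```shell
# A GID is an X.509 certificate (per the SFA spec), so "openssl x509" decodes it.
# A throwaway self-signed cert is generated here so the example is self-contained;
# point GID at your real file instead when inspecting actual credentials.
GID=/tmp/example.gid
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout /tmp/example.key -out "$GID" \
    -subj "/CN=plc.bbn.lnevers" 2>/dev/null

# Dump the subject, issuer, and validity period of the certificate.
openssl x509 -in "$GID" -noout -subject -issuer -dates
```

The same `openssl x509 -noout -text` invocation prints the full field breakdown, including the embedded public key and any extensions.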
Initially Stork version 2.0b was evaluated, which turned out to be incorrect. After discussion
with Scott Baker, it was determined that the repository web interface is deprecated and that the
command-line raven tool should be used instead.

Also, while installing SFA on an Ubuntu system, it was found that the trunk version should not
be used; the tagged version is required because the SFA trunk uses a
different credential and certificate format than PlanetLab's public SFA server.
The tagged version sfa-0.9-14 was used.

= Provisioning Service How-to =

The GENI Integration section of the [http://raven.cs.arizona.edu/projects/project Raven] page states that
two GENI Integration tasks are done:
* Authenticating using SFA credentials - The Raven repository has been updated to allow logging in via SFA (Cluster B) GIDs and/or Credentials instead of using a !PlanetLab account. The supported format is the GENI Cluster B GID or Credential file. This support has been developed in conjunction with Princeton's Geniwrapper.
* Owl Support on SeattleGENI - The Owl client has been ported to the Seattle Million Node GENI project.

== Raven tool Authentication using SFA Credentials How-to ==

The Raven tool is listed as tested on Fedora 8 and requires Python 2.5 or newer with the
Python libraries pyOpenSSL, PyXML, and M2Crypto; the rpm-build package is also required. Instructions
from the [http://raven.cs.arizona.edu/projects/project/wiki/RavenPage Raven Enduser Tool] page
were used to execute the steps captured below.

1. Install SFA, which can be done in either of two ways.

1a. '''SFA install from SVN checkout'''
{{{
$ svn co http://svn.planet-lab.org/svn/sfa/tags/sfa-0.9-14 sfa-0.9-14
$ cd sfa-0.9-14/
$ sudo make install
}}}
SFA required two additional packages to be installed:
* libxml2:
{{{
$ git clone git://git.gnome.org/libxml2
$ cd libxml2
$ ./autogen.sh
$ make
$ sudo make install
}}}
* libxslt-1.1.22, downloaded from the [http://www.at.linuxfromscratch.org/blfs/view/6.3/general/libxslt.html libxslt] site:
{{{
$ tar xvzf libxslt-1.1.22.tar.gz
$ cd libxslt-1.1.22
$ ./configure --prefix=/usr
$ make && sudo make install
}}}

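A quick way to confirm both libraries actually installed is to probe for the xml2-config and xslt-config helper scripts they ship. This smoke test is an editor's addition, not part of the Raven instructions:

```shell
# libxml2 and libxslt each install a *-config script; finding it on PATH (and
# getting a version back) is a reasonable smoke test that "make install" worked.
{
    for tool in xml2-config xslt-config; do
        if command -v "$tool" >/dev/null 2>&1; then
            echo "$tool: version $("$tool" --version)"
        else
            echo "$tool: not found - the library is not (fully) installed"
        fi
    done
} | tee /tmp/libcheck.log
```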
1b. '''SFA install with yum'''

Edit /etc/yum.repos.d/myplc.repo so that it points to the latest RPMS:
{{{
[myplc]
name=MyPLC
baseurl=http://build.planet-lab.org/planetlab/f8/planetlab-f8-i386-5.0-rc14/RPMS/
enabled=1
gpgcheck=0
}}}

Install the sfa and sfa-client packages:
{{{
$ sudo yum install sfa sfa-client
}}}

2. Created a ~/.sfi/sfi_config file containing:
{{{
SFI_AUTH='plc.bbn'
SFI_USER='plc.bbn.lnevers'
SFI_REGISTRY='http://www.planet-lab.org:12345/'
SFI_SM='http://www.planet-lab.org:12347/'
SFI_GENI_AM='http://www.planet-lab.org:12348'
}}}

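Since sfi_config consists of plain NAME='value' lines, a POSIX shell can source it directly, which makes a quick sanity check of the settings easy. The copy written to /tmp below just recreates the example above so the sketch is self-contained:

```shell
# Recreate the example config in /tmp so this check is self-contained;
# against a real install you would source ~/.sfi/sfi_config instead.
cat > /tmp/sfi_config <<'EOF'
SFI_AUTH='plc.bbn'
SFI_USER='plc.bbn.lnevers'
SFI_REGISTRY='http://www.planet-lab.org:12345/'
SFI_SM='http://www.planet-lab.org:12347/'
SFI_GENI_AM='http://www.planet-lab.org:12348'
EOF

# The file is valid shell, so sourcing it exposes each setting as a variable.
. /tmp/sfi_config
echo "authority: $SFI_AUTH  user: $SFI_USER"
echo "registry:  $SFI_REGISTRY"
```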
3. Copied the RSA private key to the ~/.sfi directory:
{{{
$ cp ~/.ssh/id_rsa ~/.sfi/lnevers.pkey
}}}

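Two checks worth doing on the copied key (an editor's addition, not from the Raven page): tighten its permissions and let openssl confirm it is a well-formed RSA key. A scratch key is generated below so the example runs anywhere; substitute your real key path in practice:

```shell
# Generate a scratch RSA key to stand in for ~/.sfi/<user>.pkey.
openssl genrsa -out /tmp/example.pkey 2048 2>/dev/null

# Private keys should be readable only by their owner...
chmod 600 /tmp/example.pkey

# ...and "openssl rsa -check" verifies the key's internal consistency,
# printing "RSA key ok" on success.
openssl rsa -in /tmp/example.pkey -check -noout
```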
4. Retrieved a PlanetLab Central listing to verify that the settings are correct:
{{{
$ sfi.py list plc.bbn
$ sfi.py show plc.bbn
}}}

The above commands generate two additional files in the ~/.sfi directory: ''<username>.cert'' and ''<username>.cred''.

5. Downloaded the ''raven'' and ''arizona-lib'' files from the [http://raven.cs.arizona.edu/projects/project/wiki/RavenDownload Raven Software Download] area and installed the packages:

{{{
$ sudo rpm --install arizona-lib-2.2.14-2652.noarch.rpm raven-2.2.14-2652.noarch.rpm
}}}

The above creates a ''/usr/local/raven'' directory and places a binary in ''/usr/bin/''.

6. Created a Raven experiment:
{{{

$ mkdir ~/Provisioning/raven_experiment
$ cd ~/Provisioning/raven_experiment/
$ raven create

Experiment name: [None] ? ln_raven_experiment

Location of your private key: [None] ? ~/.sfi/lnevers.pkey

A GENI Credential file may be used to automatically upload files to the Raven
repository. This file is optional, but without it you will be responsible
for manually uploading the files.

Location of GENI cred file: [None] ? ~/.sfi/lnevers.cred

Raven may be configured to manage the config files on your slices for you.
You may enter multiple slice names separated by commas. Enterning no
slice names will cause packages and tpfiles to be uploaded, but not
slice configuration files.

Slices that should be managed by this experiment: [] ? bbn_gusheval

The packages.pacman file controls which packages will be installed
on your nodes. This tool can be configured to automatically manage
this file, by installing all of your packages on all of your nodes.

Automatically manage your packages.pacman (y/n): [y] ?
$
}}}
The above populates a raven.conf file:
{{{
[packagerules]
noinstall =

[container]
version = 1

[experiment]
name = ln_raven_experiment
slices = bbn_gusheval

[manage_packages]
upgrade_owld = True
manage_packages = True
upgrade_stork = True

[identity]
credname = /home/lnevers/.sfi/lnevers.cred
privatekeyname = /home/lnevers/.sfi/lnevers.pkey

[dir]
uploaddir = upload
userdir = users
packagedir = packages
builderdir = build
tempestdir = tempest
configdir = config
}}}

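Because raven.conf is plain INI text, individual values can be pulled out with awk when scripting around the tool. A sketch follows; the `ini_get` helper name and the sample file are editor inventions, not part of Raven:

```shell
# Recreate a fragment of the raven.conf shown above so the example is
# self-contained; point the function at your real config in practice.
cat > /tmp/raven.conf <<'EOF'
[experiment]
name = ln_raven_experiment
slices = bbn_gusheval
EOF

# ini_get FILE SECTION KEY -- print KEY's value from within [SECTION].
ini_get() {
    awk -F ' *= *' -v sec="[$2]" -v key="$3" '
        $0 == sec          { insec = 1; next } # entered the wanted section
        /^\[/              { insec = 0 }       # a new section header ends it
        insec && $1 == key { print $2 }
    ' "$1"
}

ini_get /tmp/raven.conf experiment slices   # prints: bbn_gusheval
```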
7. Created a tar package in the ''raven_experiment'' directory, using instructions from the [http://raven.cs.arizona.edu/projects/project/wiki/RavenTarPackage Raven Package Instructions] page:

{{{
$ mkdir helloworld
$ echo "date >> /tmp/helloworld.txt" > helloworld/autorun.sh
$ tar -czf helloworld-1.0.tar.gz helloworld
$ cp helloworld-1.0.tar.gz packages/.
}}}
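The convention used above is a top-level directory containing an autorun.sh that is run on the node. Before copying the tarball into packages/, its layout can be double-checked with tar -t; this verification step is an editor's addition:

```shell
# Rebuild the helloworld package in /tmp so the example is self-contained.
cd /tmp
mkdir -p helloworld
echo "date >> /tmp/helloworld.txt" > helloworld/autorun.sh
tar -czf helloworld-1.0.tar.gz helloworld

# A well-formed package lists <name>/autorun.sh at the top level.
tar -tzf helloworld-1.0.tar.gz
```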
The package must then be published from the experiment directory:
{{{
$ cd raven_experiment
$ raven publish
building: helloworld2
version incremented to 0.1
built: ./packages/helloworld2-0.1-0.noarch.rpm
trusting: helloworld-1.0.tar.gz
trusting: helloworld2-0.1-0.noarch.rpm
trusting user: stork
trusting user: fedora8
adding to packages.pacman: helloworld
adding to packages.pacman: helloworld2
building: ln_raven_experiment_ravenconfig RPM
version incremented to 0.1
re-building tpfiles due to change in tempest rpm
trusting: helloworld-1.0.tar.gz
trusting: helloworld2-0.1-0.noarch.rpm
trusting: ln_raven_experiment_ravenconfig-0.1-0.noarch.rpm
trusting user: stork
trusting user: fedora8
adding to packages.pacman: helloworld
adding to packages.pacman: helloworld2
building: ln_raven_experiment_ravenconfig RPM
already up-to-date
linking: helloworld-1.0.tar.gz
linking: helloworld2-0.1-0.noarch.rpm
linking: ln_raven_experiment_ravenconfig-0.1-0.noarch.rpm
signing: ln_raven_experiment.tpfile
signing: bbn_gusheval.stork.conf
repository: https://stork-repository.cs.arizona.edu/REPOAPI/
uploading: bbn_gusheval.27fcf8b05f7cbbedcd5ca6bd2ba63a683d779d5d.stork.conf
True
uploading: ln_raven_experiment_ravenconfig-0.1-0.noarch.rpm
True
uploading: helloworld2-0.1-0.noarch.rpm
True
uploading: helloworld-1.0.tar.gz
True
uploading: ln_raven_experiment.27fcf8b05f7cbbedcd5ca6bd2ba63a683d779d5d.tpfile
True
$
}}}

8. Accessed the node via gush to verify that the raven_experiment package is running:
{{{
$ ./gush -P 15555
here
gush> Gush has learned about the slice bbn_gusheval.
gush> connect node2.lbnl.nodes.planet-lab.org
Connecting to host bbn_gusheval@node2.lbnl.nodes.planet-lab.org:61414.
bbn_gusheval@node2.lbnl.nodes.planet-lab.org:61414 has joined the mesh.

gush> shell "ps -eaf|grep raven|egrep -v grep"
bbn_gusheval@nis-planet2.doshisha.ac.jp:61414,7493: root 7410 7371 28 14:48 ? 00:00:00 python /usr/bin/stork --upgrade ln_raven_experiment_ravenconfig
}}}

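The ps pipeline in step 8 uses the classic trick of a trailing grep -v grep, so the grep process does not match its own command line. The same check, wrapped for reuse (the stand-in background process and the helper name are editor inventions):

```shell
# Start a disposable background process to stand in for the stork daemon.
sleep 60 &
BG=$!

# is_running PATTERN -- succeed if some process command line matches PATTERN;
# "grep -v grep" excludes the grep in this very pipeline from matching itself.
is_running() {
    ps -eaf | grep "$1" | grep -v grep >/dev/null
}

if is_running "sleep 60"; then echo "running"; else echo "not running"; fi | tee /tmp/proccheck.log

kill "$BG" 2>/dev/null
```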
9. It is also possible to install external packages by referencing their names. The package
sources included by default are the Stork and Fedora 8 distribution packages. In this
next step two packages are installed from the Fedora 8 distribution:
{{{
$ cd raven_experiment
$ echo > packages/emacs.name
$ echo > packages/java.name
$ raven publish
building: helloworld2
already current: ./packages/helloworld2-0.1-0.noarch.rpm
trusting: helloworld-1.0.tar.gz
trusting: helloworld2-0.1-0.noarch.rpm
trusting: ln_raven_experiment_ravenconfig-0.1-0.noarch.rpm
trusting user: stork
trusting user: fedora8
adding to packages.pacman: emacs
adding to packages.pacman: helloworld
adding to packages.pacman: helloworld2
adding to packages.pacman: java
building: ln_raven_experiment_ravenconfig RPM
version incremented to 0.2
re-building tpfiles due to change in tempest rpm
trusting: helloworld-1.0.tar.gz
trusting: helloworld2-0.1-0.noarch.rpm
trusting: ln_raven_experiment_ravenconfig-0.2-0.noarch.rpm
trusting user: stork
trusting user: fedora8
adding to packages.pacman: emacs
adding to packages.pacman: helloworld
adding to packages.pacman: helloworld2
adding to packages.pacman: java
building: ln_raven_experiment_ravenconfig RPM
already up-to-date
linking: helloworld-1.0.tar.gz
linking: helloworld2-0.1-0.noarch.rpm
linking: ln_raven_experiment_ravenconfig-0.2-0.noarch.rpm
signing: ln_raven_experiment.tpfile
signing: bbn_gusheval.stork.conf
repository: https://stork-repository.cs.arizona.edu/REPOAPI/
uploading: ln_raven_experiment_ravenconfig-0.2-0.noarch.rpm
True
uploading: bbn_gusheval.27fcf8b05f7cbbedcd5ca6bd2ba63a683d779d5d.stork.conf
True
already-uploaded: helloworld2-0.1-0.noarch.rpm
already-uploaded: helloworld-1.0.tar.gz
uploading: ln_raven_experiment.27fcf8b05f7cbbedcd5ca6bd2ba63a683d779d5d.tpfile
True
}}}

It is also possible to see the stork tags in the PlanetLab slice details:

[[Image(2010-07-28_Provisioning-raven-tags.jpg)]]

== OWL Support on SeattleGENI How-to ==

Prerequisites:
1. Get an account at the [https://seattlegeni.cs.washington.edu/geni/ SeattleGENI] portal.
2. Request key generation as part of the account creation.
3. Download userid.privatekey and userid.publickey and place the keys in the SeattleGENI install directory.
4. Log in to [https://seattlegeni.cs.washington.edu/geni/html/login?next=/geni/html/profile Seattle GENI].
5. Request resources at [https://seattlegeni.cs.washington.edu/geni/html/myvessels My Vessels].

Once the above steps are completed, you may install and use the Owl slice monitoring package.

Installed the [http://raven.cs.arizona.edu/projects/project/raw-attachment/wiki/OwlSeattle/owl-seattle-1.0.tar.gz OwlSeattle package] as instructed on the [http://raven.cs.arizona.edu/projects/project/wiki/OwlSeattle Owl support for Seattle] page:

1. Unpacked the Owl package in the same directory as the Million Node Seattle GENI install.
2. Used the repypp.py tool to create a preprocessed code file named owltest.pp.repy:
{{{
$ python ./repypp.py owltest.repy owltest.pp.repy
}}}

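repypp.py is Seattle's preprocessor; its main job is resolving include directives by splicing the named file into the output. The awk one-liner below is a rough editor-written stand-in for illustration only (the toy filenames are invented, and the real tool additionally guards against duplicate and recursive includes):

```shell
# Two toy repy files: the main file pulls in a helper via an include directive.
cd /tmp
echo 'def helper(): pass' > util.repy
printf 'include util.repy\nhelper()\n' > owlmini.repy

# Rough analogue of repypp: replace every "include X" line with X's contents.
awk '/^include /{ system("cat " $2); next } { print }' owlmini.repy > owlmini.pp.repy

cat owlmini.pp.repy
```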
Set up keys for seash, which supports the following commands:
{{{
A target can be either a host:port:vesselname, %ID, or a group name.

on target [command] -- Runs a command on a target (or changes the default)
as keyname [command]-- Run a command using an identity (or changes the default).
add [target] [to group] -- Adds a target to a new or existing group
remove [target] [from group] -- Removes a target from a group
show -- Displays shell state (use 'help show' for more info)
set -- Changes the state of the targets (use 'help set')
browse -- Find vessels I can control
genkeys fn [len] [as identity] -- creates new pub / priv keys (default len=1024)
loadkeys fn [as identity] -- loads filename.publickey and filename.privatekey
list -- Update and display information about the vessels
upload localfn (remotefn) -- Upload a file
download remotefn (localfn) -- Download a file
delete remotefn -- Delete a file
reset -- Reset the vessel (clear the log and files and stop)
run file [args ...] -- Shortcut for upload a file and start
start file [args ...] -- Start an experiment
stop -- Stop an experiment
split resourcefn -- Split another vessel off of each vessel
join -- Join vessels on the same node
help [help | set | show ] -- help information
exit -- exits the shell
loadstate fn -- Load saved states from a local file. One must call 'loadkeys
               username' and 'as username' first before loading the states,
               so seash knows whose RSA keys to use in deciphering the state
               file.
savestate fn -- Save the current state information to a local file.

}}}

In the following example, 3 vessels (nodes) had been requested from [https://seattlegeni.cs.washington.edu/geni/html/get_resources SeattleGENI->My Vessels] to run the owl script:

[[Image(2010-07-23_seattleGENI-1.jpg)]]

The Owl script can now be run:

{{{
$ python ./seash.py
!> genkeys lnevers
Created identity 'lnevers'
!> loadkeys lnevers
!> as lnevers
lnevers@ !> browse
['130.161.40.154:1224', '133.9.81.166:1224', '203.178.143.10:1224', '152.3.138.5:1224', '208.117.131.116:1224', '133.9.81.164:1224', '206.117.37.9:1224', '149.169.227.129:1224', '203.30.39.243:1224']
Added targets: %1(149.169.227.129:1224:v6), %3(203.30.39.243:1224:v8), %2(206.117.37.9:1224:v8)
Added group 'browsegood' with 3 targets
lnevers@ !> on browsegood
lnevers@browsegood !> run owltest.pp.repy
}}}

Various details can be shown within the seash.py interface:
{{{
lnevers@browsegood !> update
lnevers@browsegood !> show info
149.169.227.129:1224:v6 {'nodekey': {'e': 65537L, 'n': 108700547230965815030281892518836265406880649144319241850548452379387629334687581413313579314495983534078661105889728950154775555961574530604793952431511451091235346680564399627840628247477370016596965218186643920418626097706670807761103530971073488453100773168860730126434481991452870333859427518267898057957L}, 'version': '0.1r', 'nodename': '149.169.227.129'}
206.117.37.9:1224:v8 {'nodekey': {'e': 65537L, 'n': 111036927216391924654743705931909443359542286095079239170551986946721053435455525436183696732328791173792811118797954615349469544359991891203741636400610278508657796796074694320840320112973045053175565741280334677219128496811085304825114209253189982049737395182708371542628957434089203105688361256978110320889L}, 'version': '0.1r', 'nodename': '206.117.37.9'}
203.30.39.243:1224:v8 {'nodekey': {'e': 65537L, 'n': 99870974570855030753474234333944335808140240269544700795681203201656260374719692200619947292539967608982021680255874385683566208312961321822579665850598146823642597999046217853884862228809734080873008121222523381717514460448115073333642563631881087440881392384369718908915671660586118345893265826964714853589L}, 'version': '0.1r', 'nodename': '203.30.39.243'}

lnevers@browsegood !> list
ID Own Name Status Owner Information
%1 149.169.227.129:1224:v6 Started
%2 206.117.37.9:1224:v8 Started
%3 203.30.39.243:1224:v8 Started

lnevers@browsegood !> show hostname
149.169.227.129 is known as planetlab1.eas.asu.edu
206.117.37.9 has unknown host information
203.30.39.243 has unknown host information
lnevers@browsegood !> show location
%1(149.169.227.129): Tempe, AZ, United States
%2(206.117.37.9): Pasadena, CA, United States
%3(203.30.39.243): Singapore, Singapore

lnevers@browsegood !> show files
Files on '206.117.37.9:1224:v8': 'owltest.pp.repy'
Files on '149.169.227.129:1224:v6': 'owltest.pp.repy'
Files on '203.30.39.243:1224:v8': 'owltest.pp.repy'

lnevers@browsegood !> show log
Log from '206.117.37.9:1224:v8':
owl experiment start
calling update
calling update
calling update
calling update

Log from '149.169.227.129:1224:v6':
owl experiment start
calling update
calling update
calling update
calling update

Log from '203.30.39.243:1224:v8':
owl experiment start
calling update
calling update
calling update
calling update
}}}

Once the experiment is running, you can view the collected Owl data at the [http://owl.cs.arizona.edu/owl_beta/ Owl Slice Monitoring Service] website, which presents a list of databases:

[[Image(2010-07-23_Provisioning-raven-owl-1.jpg)]]

The database to choose is "seattle":

[[Image(2010-07-23_Provisioning-raven-owl-2.jpg)]]

and if one of the vessels is chosen:

[[Image(2010-07-23_Provisioning-raven-owl-3.jpg)]]