
Version 10 (modified by pruth@renci.org, 5 years ago)


Hadoop in a Slice

Part I: Obtain Resources: create a slice and reserve resources


Instructions

1. Create a slice

  1. Log in to https://portal.geni.net
  2. Create a new slice in one of your projects
  3. Launch Jacks

2. Bind the Slice

  1. Click "Site 0" in the Jacks window
  2. Select an ExoGENI site. (If you are participating in an organized tutorial, please bind the VMs to the rack assigned to you.)

3. Create the Hadoop Master

  1. Add an ExoGENI VM to your slice.
  2. Select the VM to set its properties:
    1. Name: hadoop-master
    2. Node Type: ExoGENI Medium
    3. Custom Disk Image Name: http://geni-images.renci.org/images/GENIWinterCamp/images/gwc-hadoop.v0.4a.xml
    4. Disk Image Version: 16ff128df4cf10f2472a8d20796146bcd5a5ddc3
    5. Add an install script:
      URL: http://geni-images.renci.org/images/GENIWinterCamp/master.sh
      Path: /tmp
    6. Add an execute service to run the script at boot time:
      chmod +x /tmp/master.sh; /tmp/master.sh
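The install and execute services pair up: the install service downloads master.sh into /tmp, and the execute service runs it once at boot. A minimal sketch of what this amounts to on the master (the wget call is illustrative only; the actual download is performed by the aggregate's install service, not by you):

```shell
# Sketch of the master's boot-time services. The aggregate manager
# performs these steps automatically; URL and path are from the tutorial.
INSTALL_URL="http://geni-images.renci.org/images/GENIWinterCamp/master.sh"
INSTALL_PATH="/tmp"

# Install service: fetch the script into /tmp (wget shown for illustration).
# wget -q -P "$INSTALL_PATH" "$INSTALL_URL"

# Execute service, exactly as entered in Jacks:
echo "chmod +x $INSTALL_PATH/master.sh; $INSTALL_PATH/master.sh"
```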

4. Create the Hadoop Workers

  1. Add 2 more VMs to the same rack as the first VM.
  2. Edit each worker's attributes:
    1. Names: hadoop-worker-0 and hadoop-worker-1
    2. Node Type: ExoGENI Medium
    3. Custom Disk Image Name: http://geni-images.renci.org/images/GENIWinterCamp/images/gwc-hadoop.v0.4a.xml
    4. Disk Image Version: 16ff128df4cf10f2472a8d20796146bcd5a5ddc3
    5. Add an install service to download the following script to /tmp:
      http://geni-images.renci.org/images/GENIWinterCamp/worker.sh
    6. Add an execute service to run the script at boot time. For each VM, substitute that VM's name wherever the following uses "hadoop-worker-0". (Note: the command must be on one line.)
      chmod +x /tmp/worker.sh; /tmp/worker.sh $hadoop-master.Name() $hadoop-master.IP("link0") $hadoop-worker-0.Name() $hadoop-worker-0.IP("link0")
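Jacks replaces the $node.Name() and $node.IP("link0") templates with each node's actual name and link0 address before the command runs. A hypothetical expansion for hadoop-worker-0, assuming the IP plan used in the "Create the Network" step below:

```shell
# Hypothetical expansion of the Jacks execute-service template for
# hadoop-worker-0 (values assume the tutorial's link0 IP plan).
MASTER_NAME="hadoop-master"
MASTER_IP="172.16.1.1"        # hadoop-master on link0
WORKER_NAME="hadoop-worker-0"
WORKER_IP="172.16.1.10"       # hadoop-worker-0 on link0

# After substitution, the command the VM runs at boot looks like this:
echo "chmod +x /tmp/worker.sh; /tmp/worker.sh $MASTER_NAME $MASTER_IP $WORKER_NAME $WORKER_IP"
```

For hadoop-worker-1, the last two arguments would instead carry that worker's own name and address.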

5. Create the Network

  1. Link all three VMs with a broadcast network.
  2. Click the "i" button on the link to edit its properties:
    1. Name the link link0
    2. Under the interfaces tab, add an IP address and subnet mask to each VM's interface:
      hadoop-master: 172.16.1.1 / 255.255.255.0
      hadoop-worker-0: 172.16.1.10 / 255.255.255.0
      hadoop-worker-1: 172.16.1.11 / 255.255.255.0
    3. Switch to the properties tab and set the link's capacity to 100000000, i.e. 100 Mb/s. (Note: Flack will report this as 100 Gb/s, but it is actually 100 Mb/s.)
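The capacity field is specified in bits per second, which is why 100000000 corresponds to 100 Mb/s regardless of how Flack labels it. A quick sanity check of the arithmetic:

```shell
# Link capacity is entered in bits per second.
CAPACITY=100000000
echo "$((CAPACITY / 1000000)) Mb/s"   # 100 Mb/s, despite what Flack reports
```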

6. Instantiate the Slice

  1. Submit the request.
  2. Wait until the slice is up.
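Once the slice is up, a quick sanity check is to log into hadoop-master and ping each worker over link0. A sketch that just prints the commands to run, using the addresses assigned in the network step:

```shell
# Print a ping sanity check for each worker's link0 address
# (run the printed commands from a shell on hadoop-master).
for ip in 172.16.1.10 172.16.1.11; do
    echo "ping -c 3 $ip"
done
```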


Next: Execute the Hadoop Experiment
