Changes between Version 5 and Version 6 of GENIExperimenter/Tutorials/GettingStarted_PartII_Hadoop/Procedure/Execute


Timestamp: 03/07/14 15:07:28
Author: sedwards@bbn.com
Comment: (none)

Legend:

Lines showing only a v5 line number were removed; lines showing only a v6 line number were added; lines showing both numbers are unchanged context.
  • GENIExperimenter/Tutorials/GettingStarted_PartII_Hadoop/Procedure/Execute

v5   v6
230  230
231  231
232       == 4. Test the filesystem with a small file ==
233
234
235       === A. Create a small test file ===
     232
     233  === 5.3 Test the filesystem with a small file ===
     234
     235
     236  ==== 5.3.1 Create a small test file ====
236  237  {{{
237  238  # echo Hello GENI World > hello.txt
238  239  }}}
239  240
240       === B. Push the file into the Hadoop filesystem ===
     241  ==== 5.3.2 Push the file into the Hadoop filesystem ====
241  242  {{{
242  243  # hadoop fs -put hello.txt hello.txt
243  244  }}}
244  245
245       === C. Check for the file's existence ===
     246  ==== 5.3.3 Check for the file's existence ====
246  247  {{{
247  248  # hadoop fs -ls
…
250  251  }}}
251  252
252       === D. Check the contents of the file ===
     253  ==== 5.3.4 Check the contents of the file ====
253  254  {{{
254  255  # hadoop fs -cat hello.txt
…
256  257  }}}
257  258
258       == 4. Run the Hadoop Sort Testcase ==
     259  === 5.4 Run the Hadoop Sort Testcase ===
259  260
260  261  Test the true power of the Hadoop filesystem by creating and sorting a large random dataset. It may be useful and interesting to log in to the master and/or worker VMs and use tools such as top, iotop, and iftop to observe the resource utilization on each of the VMs during the sort test (see the sketch below). Note: on these VMs iotop and iftop must be run as root.
261  262
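The monitoring tools mentioned above can be run directly from the VMs' root shells. A minimal sketch, assuming iotop and iftop are installed and that the interface to watch is eth0 (the interface name is an assumption and may differ on the tutorial VMs):

{{{
# top
# iotop -o
# iftop -i eth0
}}}

The -o flag limits iotop to processes currently doing I/O, and -i tells iftop which interface to monitor.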
262       === A. Create a 1 GB random data set. ===
     263  ==== 5.4.1 Create a 1 GB random data set. ====
263  264
264  265  After the data is created, use the ls functionality to confirm the data exists. Note that the data is composed of several files in a directory (see the sketch after this code block).
…
286  287  }}}
287  288
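The tutorial's actual data-generation commands fall in the unchanged region elided above. As an illustrative sketch only, one common way to produce roughly 1 GB of input with the stock Hadoop examples jar is teragen, which writes the requested number of 100-byte rows; the jar path and the random-data directory name here are assumptions, not the tutorial's exact values:

{{{
# hadoop jar /usr/local/hadoop/hadoop-examples.jar teragen 10000000 random-data
# hadoop fs -ls random-data
}}}

The -ls step shows that the output is a directory containing several part files, one per map task, which matches the note above about the data being composed of several files.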
288       === B. Sort the dataset. ===
     289  ==== 5.4.2 Sort the dataset. ====
289  290
290  291  Note: you can use Hadoop's cat and/or get functionality to look at the random and sorted files to confirm their size and that the sort actually worked (see the sketch after this code block).
…
351  352  }}}
352  353
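The sort commands themselves are likewise elided. A sketch under the same assumptions (examples jar path and directory names, plus the classic part-00000 output file naming), sorting the generated data and then spot-checking the result with cat and get:

{{{
# hadoop jar /usr/local/hadoop/hadoop-examples.jar terasort random-data sorted-data
# hadoop fs -cat sorted-data/part-00000 | head
# hadoop fs -get sorted-data/part-00000 local-copy
}}}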
353
354       == 5. Advanced Example ==
     354  === 5.5 Advanced Example ===
355  355
356  356  Re-do the tutorial with a different number of workers, amount of bandwidth, and/or worker instance types. Warning: be courteous to other users and do not use too many of the resources.
357  357
358       === A. Time the performance of runs with different resources. ===
359       === B. Observe largest size file you can create with different resources. ===
     358  ==== 5.5.1 Time the performance of runs with different resources. ====
     359  ==== 5.5.2 Observe the largest size file you can create with different resources. ====
360  360
361  361
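For 5.5.1, a simple way to compare runs is to wrap the sort job in the shell's time builtin and note the elapsed time for each resource configuration; this reuses the assumed jar path and directory names from the sketches above:

{{{
# time hadoop jar /usr/local/hadoop/hadoop-examples.jar terasort random-data sorted-data
}}}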