Understanding the AM API using Named Data Networking
4 Wait for resources to be ready
You can tell whether your nodes are ready by using a script built on omni called readyToLogin. Please use the command:

readyToLogin -a AM_NICKNAME LabOne

where (as before) AM_NICKNAME and LabOne are your aggregate manager nickname and your slice name (both found on your worksheet).
- If it reports that the sliver is not yet ready (for example, it might say that the status is "changing"), then wait a minute or two and try again. Once everything is complete, readyToLogin will give output that should look something like this:
...
rsrchr's geni_status is: ready (am_status:ready)
User example logs in to rsrchr using:
	ssh -p 32768 -i /Users/example/.ssh/geni_key_portal example@pc1.utah.geniracks.net
User example logs in to collab using:
	ssh -p 32769 -i /Users/example/.ssh/geni_key_portal example@pc1.utah.geniracks.net
...
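The wait-and-retry step above can also be scripted rather than done by hand. A minimal sketch, assuming readyToLogin is on your PATH and that the "geni_status is: ..." pattern from the sample output is stable; the helper names here are invented for the example:

```python
import re
import subprocess
import time

def sliver_ready(output: str) -> bool:
    """True when every node in the output reports geni_status 'ready'."""
    statuses = re.findall(r"geni_status is: (\w+)", output)
    return bool(statuses) and all(s == "ready" for s in statuses)

def wait_for_sliver(am_nickname: str, slice_name: str, delay: int = 60) -> None:
    """Re-run readyToLogin every `delay` seconds until the sliver is ready."""
    while True:
        result = subprocess.run(
            ["readyToLogin", "-a", am_nickname, slice_name],
            capture_output=True, text=True)
        if sliver_ready(result.stdout):
            break
        time.sleep(delay)  # status may still be "changing"; try again shortly
```

Call wait_for_sliver("AM_NICKNAME", "LabOne") with the values from your worksheet, exactly as you would pass them to readyToLogin on the command line.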
5 Trying out the NDN application
In this experiment, you will be able to see the NDN in-network caching in action. Our experiment consists of the following nodes:
- A data source node, called Custodian, that holds data in the namespace /nytimes.
- An Internet router node, called Internet, that forwards Interest and Data packets to and from the custodian.
- A university hub node, called University, that forwards Interest and Data packets to and from the university nodes.
- A principal investigator node, called PI, and an experimenter node, called Experimenter, that will issue Interest requests.
Download the scripts and Python code to your computer and extract them, or fetch them from the command line:

$ wget http://192.1.242.151/files/ndn-tutorial.gz
$ tar -xvf ndn-tutorial.gz
In the ndn-tutorial-config.sh configuration file, edit the fields to match your GENI username, SSH key, GENI aggregate name, and the pc and port numbers from your login commands. Keep the quoting format unchanged, otherwise the script may not run.
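The exact field names may differ between versions of the file, so treat the following as a hypothetical illustration only: substitute the variable names and values that actually appear in your copy of ndn-tutorial-config.sh, and keep the double quotes exactly as the file has them.

```sh
# Hypothetical ndn-tutorial-config.sh contents -- variable names are invented
# for illustration; edit the fields your copy actually defines.
username="yourGENIusername"             # your GENI username
sshkey="$HOME/.ssh/geni_key_portal"     # path to your SSH private key
aggregate="AM_NICKNAME"                 # your GENI aggregate name
pc="pc1.utah.geniracks.net"             # pc hostname from readyToLogin
port="32768"                            # matching port number
```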
Run copy-scripts.sh; this will automatically transfer the necessary files to the nodes in our topology.
Log in to the node Custodian and start the NDN Forwarding Daemon (NFD):

$ nfd-start

Wait until the shell prompt returns (a few seconds). NFD is now up and running.
The install and execute services requested in our RSpec have already started, and nodes in our experiment should be running the CCN (Content Centric Networking) protocol. Our experiment consists of:
- A data source (node dsrc1) that holds precipitation data from the US National Oceanic and Atmospheric Administration (NOAA).
- A researcher node rsrchr that gets data from the data source.
- A collaborator node collab that gets data from the researcher.
Key features of the CCN protocol include:
- Data is accessed by name. In our case we use a program called client to get precipitation data by date range (e.g. precipitation between 1902/01/01 and 1902/01/02).
- All nodes cache data for a certain period of time. When a node receives a request for data, it checks its local cache. If the data is in its cache, it returns that data; otherwise, it forwards the request on to its neighbor.
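The cache-then-forward behavior described above can be sketched as a toy model. This is illustrative Python, not the actual CCN code; the class and variable names are invented for the example:

```python
import time

class DataSource:
    """Authoritative holder of named data (plays the role of dsrc1)."""
    def __init__(self, table):
        self.table = table
        self.requests = 0            # counts requests that reached the source

    def get(self, name):
        self.requests += 1
        return self.table[name]

class CachingNode:
    """A forwarding node with a content store: answer from the local cache
    when the name is present and fresh, otherwise forward the request
    toward the source and cache whatever comes back."""
    def __init__(self, upstream, ttl=60.0):
        self.upstream = upstream     # next node toward the data source
        self.ttl = ttl               # how long cached data stays valid
        self.store = {}              # name -> (data, expiry time)

    def get(self, name):
        hit = self.store.get(name)
        if hit is not None and hit[1] > time.monotonic():
            return hit[0]            # cache hit: answered locally
        data = self.upstream.get(name)   # cache miss: forward upstream
        self.store[name] = (data, time.monotonic() + self.ttl)
        return data

# A chain mirroring the experiment: a researcher node in front of the source.
source = DataSource({"/ndn/colostate.edu/netsec/pr_1902/01/01/00": b"jan1"})
rsrchr = CachingNode(source)
```

A second rsrchr.get() for the same name is answered from the content store, so the source sees only one request; that is the speedup the verification steps measure.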
We verify this caching behavior by:
- Logging into the researcher node and using the client program to get precipitation data for a certain date range. The client displays how long it took to get the data.
- Retrieving the same data again and noting how we get it much faster since it comes out of a cache.
- Requesting data for different date ranges and seeing how long it took to retrieve the data.
- Requesting the same data again and noting that it is retrieved much faster.
If you have time, you can repeat the above steps on the collaborator node.
Note: There is an optional part to this exercise that uses the GENI Desktop to visualize traffic on the links in our network. There you can visualize which data requests went all the way to the data source (node dsrc1) and which data requests were fulfilled from a node's cache.
5.1 Run the CCN application
- Log into the node rsrchr using the ssh command returned by readyToLogin.
- Once you are logged in, ask for precipitation data from 1 Jan 1902 to 2 Jan 1902:
$ /opt/ccnx-atmos/client.py
Start Date in YYYY/MM/DD? 1902/01/01
End Date in YYYY/MM/DD? 1902/01/02
- You should see output that looks like:
Asking for /ndn/colostate.edu/netsec/pr_1902/01/01/00, Saving to pr_1902_01_01.tmp.nc
Time for pr_1902_01_01.tmp.nc = 1.09802699089
Asking for /ndn/colostate.edu/netsec/pr_1902/01/02/00, Saving to pr_1902_01_02.tmp.nc
Time for pr_1902_01_02.tmp.nc = 4.65998315811
Joining files..
Concat + write time 0.0735998153687
Wrote to pr_1902_1_1_1902_1_2.nc
Note that it took about 1.1 and 4.7 seconds, respectively, to retrieve the data for Jan 1 and Jan 2.
- Run the client again and request the same data. This time your output should look like:
Asking for /ndn/colostate.edu/netsec/pr_1902/01/01/00, Saving to pr_1902_01_01.tmp.nc
Time for pr_1902_01_01.tmp.nc = 0.0423700809479
Asking for /ndn/colostate.edu/netsec/pr_1902/01/02/00, Saving to pr_1902_01_02.tmp.nc
Time for pr_1902_01_02.tmp.nc = 0.0388598442078
Joining files..
Concat + write time 0.0237510204315
Wrote to pr_1902_1_1_1902_1_2.nc
Notice how much faster the data was retrieved this time.
- If time permits, log into the collaborator node collab and run queries from there. (Pick dates in January of 1902.) Notice the different data retrieval times depending on whether the data came from the data source, the cache at rsrchr, or the local cache.
5.2 (Optional) Visualize experiment data flows
To use the GENI Desktop to visualize the data flows in your network, continue with the instructions here.