Understanding the AM API using Named Data Networking


4 Wait for resources to be ready

You can tell whether your nodes are ready by using readyToLogin, a script built on omni.

  1. Please use the command:
    readyToLogin -a AM_NICKNAME SLICE_NAME
    where (as before) AM_NICKNAME and SLICE_NAME are your aggregate
    manager nickname and your slice name.
  2. If it reports that the sliver is not yet ready (for example, it might say that the status is "changing"), then wait a minute or two and try again. Once everything is complete, readyToLogin will give output that should look something like this:
    rschr's geni_status is: ready (am_status:ready)  
    User example logs in to rschr using: 
            ssh  -p 32768 -i /Users/example/.ssh/geni_key_portal 
    User example logs in to collar using: 
            ssh -p 32769 -i /Users/example/.ssh/geni_key_portal 

5 Trying out the NDN application

In this experiment, you will be able to see in-network caching in action. Our experiment consists of the following nodes:

  • A data source node, called Custodian, that holds data in the namespace /nytimes.
  • A node, called Internet Router, that forwards Interest and Data packets to and from the Custodian.
  • A node, called Campus Router, that forwards Interest and Data packets to and from the university nodes.
  • A principal investigator node, called PI, and an experimenter node, called Experimenter, that will send Interest requests to the Custodian via UDP tunnels.
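The chain of nodes above can be pictured as a simple forwarding model. The sketch below is a toy illustration only, not the actual NDN software: the NEXT_HOP table and function name are invented here to mimic a FIB entry for /nytimes.

```python
# Toy model of the tutorial topology: each node knows its next hop
# toward the Custodian, mimicking a FIB entry for /nytimes.
NEXT_HOP = {
    "Experimenter": "Campus Router",
    "PI": "Campus Router",
    "Campus Router": "Internet Router",
    "Internet Router": "Custodian",
}

def interest_path(consumer, producer="Custodian"):
    """Return the list of nodes an Interest traverses from consumer to producer."""
    path = [consumer]
    while path[-1] != producer:
        path.append(NEXT_HOP[path[-1]])
    return path

print(" -> ".join(interest_path("Experimenter")))
# -> Experimenter -> Campus Router -> Internet Router -> Custodian
```

The Data packet retraces this path in reverse, which is what makes the caching behavior in the next section possible.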

Once the topology is up, log on to the Custodian node and restart the NDN Forwarding Daemon.

$ cd /local
$ sudo ./

5.1 Run the NDN application on the entire topology

We are now ready to run our experiment.
On the Custodian node, start the producer application (note: you can try other namespaces as well).
The producer application listens for Interest requests in the namespace given by the -n option and replies with Data packets.

$ sudo python /local/ -n /nytimes 

SSH to the Experimenter node, and start the consumer application

$ sudo python /local/ -u /nytimes/science

The Interest packet travels the entire topology, leaving breadcrumbs behind. The Data packet follows the breadcrumbs back to the consumer, leaving cached copies of the content along the way. This is called in-network caching, and it is one of the most important features of Information Centric Networking (ICN). You can observe this behavior by running the same consumer application on the PI node. SSH to the PI node and start the consumer application

$ sudo python /local/ -u /nytimes/science

This time your PI node gets the content back, but nothing happens on the Custodian, because the requested content is cached in the Campus Router node.
Note that the Data was retrieved much faster.
You can repeat the experiment with different namespaces

$ sudo python /local/ -u /nytimes/math

This time you see that the Interest request is served by the Custodian.
Feel free to explore different namespaces.
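The caching behavior you just observed can be sketched in plain Python. This is a simplified simulation for illustration only, not part of the NDN codebase: the Node class is invented here, each node has a single upstream hop, and the content store never evicts.

```python
class Node:
    """Toy NDN forwarder: a content store plus one upstream hop."""
    def __init__(self, name, upstream=None, store=None):
        self.name = name
        self.upstream = upstream        # next hop toward the Custodian
        self.store = dict(store or {})  # in-network cache (content store)

    def fetch(self, interest):
        if interest in self.store:
            print(f"{self.name}: cache hit for {interest}")
            return self.store[interest]
        print(f"{self.name}: forwarding {interest} upstream")
        data = self.upstream.fetch(interest)
        self.store[interest] = data     # cache the Data on the way back
        return data

# Build the tutorial chain, with /nytimes content held at the Custodian.
custodian = Node("Custodian", store={"/nytimes/science": "DATA(science)",
                                     "/nytimes/math": "DATA(math)"})
internet = Node("Internet Router", upstream=custodian)
campus = Node("Campus Router", upstream=internet)

campus.fetch("/nytimes/science")  # first request travels all the way to the Custodian
campus.fetch("/nytimes/science")  # repeat request is a cache hit at the Campus Router
campus.fetch("/nytimes/math")     # a new name misses the cache and reaches the Custodian
```

The three fetches mirror the three consumer runs above: the first populates the caches along the return path, the second is answered by the Campus Router without touching the Custodian, and the new namespace misses everywhere and is served by the Custodian again.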

5.2 (Optional) Visualize experiment data flows

To use the GENI Desktop to visualize the data flows in your network, continue with the instructions here.


Next: Finish

Last modified on 07/05/16 11:28:08