Note that the ''ccndc'' command takes a content path (ccnx:/ndn) and binds it to a network path (in this case, the host ''router'', which is a valid hostname on the experiment topology, resolving to an IP address on the ''router'' machine). It can also be used to delete routes with the ''del'' subcommand.
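As a concrete illustration, the route above could be installed and later removed roughly as follows (the ''add'' and ''del'' subcommands are from the CCNx ''ccndc'' utility; the protocol argument shown here is an assumption and may differ on your topology, so check ''ccndc -h'' locally):
{{{
# Bind the content prefix ccnx:/ndn to a forwarding face toward the
# host "router" (protocol shown is illustrative).
ccndc add ccnx:/ndn tcp router

# Later, remove the same route with the del subcommand:
ccndc del ccnx:/ndn tcp router
}}}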
- '''Question 1.1 A: ''' [[BR]]
From ''researcher2'', fetch the same data again (from 1902/01/01 to 1902/01/02), and record the fetch times reported by ''client.py''. It prints out the time taken to pull each temporary file, along with the concatenation and write time. Then fetch 1902/02/03 to 1902/02/04, and record those fetch times. Fetch 1902/02/03 to 1902/02/04 a second time, and record the new times. Which transfer was longest, and which was shortest? Knowing that each ccnd caches data for a short period of time, can you explain this behavior?
Outputs:
'''Answer:''' The first fetch of the 2/3 to 2/4 data took the longest, and the second fetch of the 2/3 to 2/4 data took the shortest time. [[BR]]
Since ccnd caches data for a short period of time, the second time ''researcher2'' fetches the 2/3 to 2/4 data, it is served from its local cache, which is the fastest case. The 1/1 to 1/2 fetch was faster than the first 2/3 to 2/4 fetch because ''researcher1'' had already requested that data, so the ''router'' still held it in its cache; ''researcher2'' therefore received the 1/1 to 1/2 data from the router's cache rather than from the data source.
- '''Question 1.1 B:''' [[BR]]
Browse the content caches and interests seen on various hosts in the network by loading their ccnd status page on TCP port 9695 in your browser (see Section 5, Hints, below). Which hosts have seen interests and have content cached, and why? [[BR]]
'''Answer:''' datasource1, datasource2, router, researcher1, and researcher2 have all seen interests and have content cached, because the interests and the matching data packets traversed each of these hosts on the path between the consumers and the data sources, and every ccnd along that path caches the content it forwards.
- '''Task 1.2: Interest Propagation''' [[BR]]
''This task will demonstrate the behavior of Interest filtering during propagation.'' [[BR]]
Log into the machine ''datasource1'' and run the command ''tail -f /tmp/atmos-server.log''. Next, run ''/opt/ccnx-atmos/client.py'' on both of the hosts researcher1 and researcher2. Enter 1902/01/15 as both the start and end date on both hosts, and press <ENTER> on both hosts as close to the same time as possible.
- '''Question 1.2 A:''' [[BR]]
What interests show up at datasource1 for this request? (Use the timestamps in the server log to correlate requests with specific interests.) Will you observe unique or duplicate interests when both researcher1 and researcher2 fetch the same data at precisely the same time? Why? [[BR]]
'''Answer:''' Only unique interests arrive at datasource1, even when researcher1 and researcher2 fetch the same data at the same time. ccnd identifies duplicate interests for the same content and filters them during propagation: the first interest for a name is recorded and forwarded, the duplicate is suppressed, and a single copy of the returned data satisfies both consumers.
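The suppression described above can be sketched as a simplified pending-interest table keyed by content name. This is an illustrative model, not ccnd's actual implementation; the class and method names are invented for the sketch:

```python
# Minimal sketch of pending-interest aggregation in a CCN-style forwarder.
# Illustrative only; ccnd's real tables and timers are more involved.

class Forwarder:
    def __init__(self):
        # Pending Interest Table: content name -> set of requesting faces
        self.pit = {}
        self.upstream_sent = []  # interests actually forwarded upstream

    def on_interest(self, name, face):
        """Forward the first interest for a name; aggregate duplicates."""
        if name in self.pit:
            # Duplicate interest: remember the extra face, do not forward.
            self.pit[name].add(face)
        else:
            self.pit[name] = {face}
            self.upstream_sent.append(name)

    def on_data(self, name):
        """A single data packet satisfies every face waiting on the name."""
        faces = self.pit.pop(name, set())
        return sorted(faces)

fwd = Forwarder()
# researcher1 and researcher2 ask for the same content at the same time.
fwd.on_interest("/ndn/colostate.edu/netsec/pr_1902/01/15/00", "researcher1")
fwd.on_interest("/ndn/colostate.edu/netsec/pr_1902/01/15/00", "researcher2")
print(fwd.upstream_sent)  # only one interest goes toward the data source
print(fwd.on_data("/ndn/colostate.edu/netsec/pr_1902/01/15/00"))
```

Only one interest reaches the data source, which matches the single request seen in the ''atmos-server.log'' timestamps.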
- '''Question 3.1 A''': [[BR]]
From the host ''consumer'', run ''/opt/ccnx-atmos/client.py'' to fetch data from 1902/01/21 through 1902/01/24, and time the total transaction. Within 60 seconds, fetch the same data again, and time it. After 60 seconds (but before 300 seconds have passed), fetch it a third time, and time that. What is the benefit of local caching (the second fetch)? Is there perceptible benefit from server-side caching (the third fetch) when data takes some time to generate? [[BR]]
'''Answers:''' First time fetching data:
{{{
Start Date in YYYY/MM/DD? 1902/01/21
End Date in YYYY/MM/DD? 1902/01/24
Asking for /ndn/colostate.edu/netsec/pr_1902/01/21/00, Saving to pr_1902_01_21.tmp.nc
Time for pr_1902_01_21.tmp.nc 5.30908107758=
Asking for /ndn/colostate.edu/netsec/pr_1902/01/22/00, Saving to pr_1902_01_22.tmp.nc
Time for pr_1902_01_22.tmp.nc 9.64774513245=
Asking for /ndn/colostate.edu/netsec/pr_1902/01/23/00, Saving to pr_1902_01_23.tmp.nc
Time for pr_1902_01_23.tmp.nc 9.3288269043=
Asking for /ndn/colostate.edu/netsec/pr_1902/01/24/00, Saving to pr_1902_01_24.tmp.nc
Time for pr_1902_01_24.tmp.nc 9.40179896355=
Joining files..
Concat + write time 0.141476154327
Wrote to pr_1902_1_21_1902_1_24.nc
}}}
Within 60 seconds, ask again:
{{{
Start Date in YYYY/MM/DD? 1902/01/21
End Date in YYYY/MM/DD? 1902/01/24
Asking for /ndn/colostate.edu/netsec/pr_1902/01/21/00, Saving to pr_1902_01_21.tmp.nc
Time for pr_1902_01_21.tmp.nc 0.168676137924=
Asking for /ndn/colostate.edu/netsec/pr_1902/01/22/00, Saving to pr_1902_01_22.tmp.nc
Time for pr_1902_01_22.tmp.nc 0.156650066376=
Asking for /ndn/colostate.edu/netsec/pr_1902/01/23/00, Saving to pr_1902_01_23.tmp.nc
Time for pr_1902_01_23.tmp.nc 0.159749031067=
Asking for /ndn/colostate.edu/netsec/pr_1902/01/24/00, Saving to pr_1902_01_24.tmp.nc
Time for pr_1902_01_24.tmp.nc 0.154557228088=
Joining files..
Concat + write time 0.135298967361
Wrote to pr_1902_1_21_1902_1_24.nc
}}}
Between 60 seconds and 300 seconds, ask a third time:
{{{
Start Date in YYYY/MM/DD? 1902/01/21
End Date in YYYY/MM/DD? 1902/01/24
Asking for /ndn/colostate.edu/netsec/pr_1902/01/21/00, Saving to pr_1902_01_21.tmp.nc
Time for pr_1902_01_21.tmp.nc 4.19592308998=
Asking for /ndn/colostate.edu/netsec/pr_1902/01/22/00, Saving to pr_1902_01_22.tmp.nc
Time for pr_1902_01_22.tmp.nc 4.18822002411=
Asking for /ndn/colostate.edu/netsec/pr_1902/01/23/00, Saving to pr_1902_01_23.tmp.nc
Time for pr_1902_01_23.tmp.nc 4.19812393188=
Asking for /ndn/colostate.edu/netsec/pr_1902/01/24/00, Saving to pr_1902_01_24.tmp.nc
Time for pr_1902_01_24.tmp.nc 4.18877005577=
Joining files..
Concat + write time 0.136485099792
Wrote to pr_1902_1_21_1902_1_24.nc
}}}
Local caching gives the best performance, since the data is served from the local ccnd's content store with no network transfer at all. However, a local cache can become stale, which is why cached content is only considered valid for a short time. Server-side caching keeps the generated data in memory, so the server does not need to regenerate it or read it from disk every time a client asks for the same data; as the third fetch shows, this still cuts the transfer time roughly in half even though the data must cross the network again.
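The two cache layers above can be sketched as simple time-to-live caches: an entry served within its TTL is returned immediately, while an expired entry forces the slow path. A minimal sketch, assuming illustrative TTLs of 60 s for the consumer-side cache and 300 s for the server-side cache (the class and TTL values are ours, not ccnd's actual mechanism):

```python
import time

# Minimal TTL cache sketch (illustrative; not the actual ccnd content store).
class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # name -> (value, time stored)

    def get(self, name, now=None):
        """Return the cached value if it is still fresh, else None."""
        now = time.time() if now is None else now
        entry = self.store.get(name)
        if entry is None:
            return None
        value, stored_at = entry
        if now - stored_at > self.ttl:
            del self.store[name]  # expired: caller falls back to the slow path
            return None
        return value

    def put(self, name, value, now=None):
        now = time.time() if now is None else now
        self.store[name] = (value, now)

# Hypothetical TTLs matching the fetch pattern in the question:
local_cache = TTLCache(ttl_seconds=60)    # consumer-side ccnd cache
server_cache = TTLCache(ttl_seconds=300)  # server-side data cache

name = "/ndn/colostate.edu/netsec/pr_1902/01/21/00"
local_cache.put(name, "data", now=0)
server_cache.put(name, "data", now=0)

print(local_cache.get(name, now=30))    # second fetch: local hit
print(local_cache.get(name, now=120))   # third fetch: expired locally (None)
print(server_cache.get(name, now=120))  # ...but still cached at the server
```

This mirrors the timings above: the second fetch hits the local cache (fractions of a second), while the third misses locally but still avoids server-side regeneration.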