Changes between Version 1 and Version 2 of GECDemoSession/GEC12/DemoInfo


Timestamp:
11/16/11 12:36:11 (12 years ago)
Author:
Josh Smift
  • GECDemoSession/GEC12/DemoInfo

    1 This page will have a summary of the demos, materials, etc -- stay tuned.
     1There were 41 demos and posters at the evening demo session at GEC 12.
     2
     3= ARP security in ProtoGENI =
     4
     5Demo participants: Dawei Li, Xiaoyan Hong (University of Alabama)
     6
     7The demo will show ARP attacks and their harm to ProtoGENI (on reserved nodes), as well as potential defenses.
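As background for the demo, one common defense against ARP spoofing is to watch IP-to-MAC bindings and flag conflicting ARP replies. The sketch below is illustrative only (the addresses and function name are made up, and a real monitor would sniff replies on the wire); it is not the detection method used in the demo:

```python
def detect_arp_spoofing(observations):
    """observations: iterable of (ip, mac) pairs seen in ARP replies.
    Returns a list of (ip, old_mac, new_mac) conflicts."""
    bindings = {}  # first-seen IP -> MAC binding
    alerts = []
    for ip, mac in observations:
        if ip in bindings and bindings[ip] != mac:
            # a reply claims a different MAC for a known IP: possible spoof
            alerts.append((ip, bindings[ip], mac))
        bindings.setdefault(ip, mac)  # keep the original binding
    return alerts

replies = [
    ("10.0.0.1", "aa:aa:aa:aa:aa:01"),
    ("10.0.0.2", "aa:aa:aa:aa:aa:02"),
    ("10.0.0.1", "bb:bb:bb:bb:bb:99"),  # conflicting reply
]
print(detect_arp_spoofing(replies))
```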
     8
     9= CRON =
     10
     11Demo participants: Seung-Jong Park (Louisiana State University)
     12
     13CRON will demonstrate how large-scale computational biology applications can be launched with !MapReduce over multiple Eucalyptus cloud clusters connected through the Internet2 ION service. For the demonstration, we will connect two Eucalyptus clusters created at the LSU CRON and MAX testbeds.
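As a reminder of the programming model involved, a word-count toy in plain Python sketches the map and reduce phases; the real demo runs !MapReduce jobs across Eucalyptus clusters, not this in-process version:

```python
from collections import defaultdict

def map_phase(documents):
    # map: emit a (word, 1) pair for every word in every document
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    # reduce: sum the counts for each distinct key
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["gene sequence gene", "sequence alignment"]
print(reduce_phase(map_phase(docs)))  # counts for each word across docs
```

In a real deployment the map and reduce tasks run on separate cluster nodes with a shuffle step between them; the in-process pipeline above keeps only the data flow.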
     14
     15= Davis Social Links =
     16
     17Demo participants: Felix Wu (UC Davis)
     18
     19A relevant feature of online social networks like Facebook is the scope for users to share external information from the web with their friends by sharing a URL. The phenomenon of sharing has bridged the web graph with the social network graph, and the shared knowledge in ego networks has become a source of relevant information for an individual user, leading to the emergence of social search as a powerful tool for information retrieval. Consideration of the social context has become an essential factor in the process of ranking results in response to queries in social search engines. In this demo, we present !InfoSearch, a social search engine built over the Facebook platform, which lets users search for information based on what their friends have shared. We identify and implement three distinct ranking factors based on the number of mutual friends, social group membership, and the time stamp of shared documents to rank results for user searches. We perform user studies based on the Facebook feeds of two authors to understand the impact of each ranking factor on the results for two queries.
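A hypothetical sketch of how three such ranking factors might be combined; the weights, field names, and recency-decay function here are illustrative assumptions, not !InfoSearch's actual scoring:

```python
def rank_results(results, now, w_friends=1.0, w_group=0.5, w_time=0.1):
    """Order shared documents by a weighted mix of mutual-friend count,
    shared group membership, and recency of the share (all hypothetical)."""
    def score(r):
        # recency decays per day since the share was posted
        recency = 1.0 / (1.0 + (now - r["shared_at"]) / 86400.0)
        return (w_friends * r["mutual_friends"]
                + w_group * (1.0 if r["same_group"] else 0.0)
                + w_time * recency)
    return sorted(results, key=score, reverse=True)

shares = [
    {"url": "http://a", "mutual_friends": 1, "same_group": False, "shared_at": 0},
    {"url": "http://b", "mutual_friends": 5, "same_group": True, "shared_at": 86400},
]
ranked = rank_results(shares, now=2 * 86400)
print([r["url"] for r in ranked])
```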
     20
     21= Enterprise Centric Offloading System (ECOS) =
     22
     23Demo participants: Aaron Gember (University of Wisconsin - Madison)
     24
     25Our Enterprise Centric Offloading System (ECOS) is designed to address two key requirements of enterprise settings that existing mobile application offloading systems fail to address: data privacy and resource scheduling. ECOS addresses these issues by identifying the privacy level of offloaded application state, limiting offloading to trusted resources, and multiplexing offloading requests from many devices with diverse goals to a range of compute resources. This demo focuses on ECOS's ability to preserve privacy while ensuring offloading offers latency improvements or energy savings.
     26
     27= GENICloud =
     28
     29Demo participants: Jessica Ann Blaine and Rick !McGeer (HP Labs), Andy Bavier (Princeton University)
     30
     31Demonstration of a persistent Cloud infrastructure over multiple sites and continents, tied to the PlanetLab Control Framework.
     32
     33= GMOC =
     34
     35Demo participants: Camilo Viecco (Indiana University)
     36
     37GMOC operational frontend; measurement and monitoring portals.
     38
     39= GpENI =
     40
     41Demo participants: Deep Medhi (University of Missouri - Kansas City)
     42
     43This demo will present the current status of the GpENI project; in particular, we will demonstrate federation capability.
     44
     45= HiveMind =
     46
     47Demo participants: Steven Templeton (University of California, Davis)
     48
     49We demonstrate a swarm-intelligence-inspired, decentralized, lightweight, autonomous security monitoring system run on the DETER testbed using the Benito virtualization framework. For this demo, using slices of up to 640 nodes, we execute controlled attacks for the HiveMind system to detect, for example, when the slice is used to launch distributed denial-of-service attacks against an Internet host.
     50
     51= iGENI =
     52
     53Demo participants: Jim Chen and Joe Mambretti (Northwestern University)
     54
     55Enhanced iGENI infrastructure, Advanced Programmable Network Exchange, in partnership with Cluster D projects, ORCA, GENICloud and iGENI international partners.
     56
     57= IMF =
     58
     59Demo participants: Rudra Dutta (North Carolina State University)
     60
     61The IMF demo will demonstrate the perfSONAR functionality integrated into IMF since GEC11 in Spiral 3, and/or the planned first part of Spiral 4 functionality, a skeleton of the proposed GENI I&M messaging service.
     62
     63= I&M MDOD experimenter use case demo =
     64
     65Demo participants: Deniz Gurkan (University of Houston); Rich Kagan (Infoblox Inc.)
     66
     67The UH-Infoblox team will demonstrate the application of IF-MAP for I&M service access and control. An IF-MAP server will be set up to demonstrate the event flow of MDOD creation and measurement data exchange while a MAP server is updated using the IF-MAP protocol.
     68
     69For more information on IF-MAP protocol, please see: http://www.trustedcomputinggroup.org/resources/tnc_ifmap_binding_for_soap_specification
     70
     71This is part of a project funded by Infoblox Inc. The project's goal is to introduce IF-MAP to the GENI community as an open resource information sharing/exchange database protocol.
     72
     73= Infinity =
     74
     75Demo participants: Yudong Gao (University of Michigan)
     76
     77Infinity: an energy-efficient data delivery infrastructure for mobile devices.
     78
     79= K-GENI =
     80
     81Demo participants: Myung Ki Shin (ETRI)
     82
     83Our demo will consist of two parts. Part 1 - ETRI will show its virtualized programmable network platform and its own control framework. We use various applications to show control of multiple virtual nodes and networks, and also present the platform's newest features, such as dynamic control of CPU and bandwidth resources and a Linux-based data plane virtualization technique. We will also introduce a new UI and open platform, named Panto, which enables researchers to create end-to-end slices based on the Slice-based Federation Architecture (SFA) over federated resources and networks (K-GENI). Part 2 - KISTI will show recent updates on the K-GENI testbed deployment, and will perform an international demonstration of federated network operations over the K-GENI testbed. The federated network operations demo will present three efforts: 1) operational data sharing between GMOC and dvNOC, 2) back-end development of dvNOC, featuring federation-only core schema implementation and push & pull functionality with open APIs, 3) a newly designed and developed UI for UoVN (User oriented Virtual Network) monitoring and management of dvNOC.
     84
     85= LEARN-ORCA cluster =
     86
     87Demo participants: Deniz Gurkan (University of Houston)
     88
     89LEARN ORCA cluster capability demo: obtain a slice within the ORCA Cluster using resources stitched from ORCA sites at UH and RENCI, connected by NLR backbone, using GENI AM API, and run an application that exercises all resources.
     90
     91= Measurement Data Archive =
     92
     93Demo participants: Giridhar Manepalli (Corporation for National Research Initiatives)
     94
     95Corporation for National Research Initiatives (CNRI) will be demonstrating the functionality of the Measurement Data Archive prototype, which is implemented using the Digital Object Architecture.
     96
     97The Measurement Data Archive prototype system consists of two components: 1) User Workspace and 2) Object Archive. The User Workspace component is an entry point for users (e.g., experimenters, instrumentation researchers, etc.) to store and transfer measurement data, which could be in a variety of forms (e.g., formatted datasets, raw files, etc.). Data and metadata files managed in the user workspace can be archived for long-term storage in an Object Archive. Once data is archived, a persistent and unique identifier is created.
     98
     99= !MySlice =
     100
     101Demo participants: Panayotis Antoniadis (UPMC Sorbonne Universités); Andy Bavier (Princeton University); Aki Nakao (University of Tokyo)
     102
     103We will demonstrate a web-based resource management tool called !MySlice, which makes it easy to list, filter and attach resources made available through PlanetLab's SFA control framework, annotated with useful information from different monitoring sources (e.g., reliability and utilization over time, geographic and network location, and more).
     104
     105A couple of nice features of !MySlice are:
     106
     107 * The way in which !MySlice uses SFA's delegation capabilities to permit an SFA client to run on a remote webserver.
     108 * The measurement-driven way in which resources can be selected by the user.
     109 * The fact that elements of !MySlice are already in use as part of the standard PlanetLab web interface, used by hundreds of users.
     110
     111This work is a product of the international cooperation between Princeton University, the University of Tokyo, and UPMC Sorbonne Universités, funded through GENI's "Understanding Federation" grant. It takes place in the context of the global PlanetLab federation involving PlanetLab Central, PlanetLab Europe, and PlanetLab Japan.
     112
     113This work has also received support from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n°224263 - !OneLab2.
     114
     115See http://trac.myslice.info for more information.
     116
     117= !NetServ on OpenFlow =
     118
     119Demo participants: Emanuele Maccherani (Columbia University)
     120
     121A poster on the on-going work on integrating !NetServ and OpenFlow.
     122
     123= OFCLEM: Steroid OpenFlow Service =
     124
     125Demo participants: Aaron Rosen, KC Wang, and Dan Schmiedt (Clemson University)
     126
     127In a software defined network (SDN), packet forwarding methods can be dynamically changed by software controllers on the fly to provide additional services and enhancements. These services can be seamlessly integrated into a network without the painful task of installing and configuring software on each machine that wants to use the services. The demo presents Steroid OpenFlow Service (SOS), a service that builds on top of OpenFlow and network agents to seamlessly optimize TCP throughput with a multitude of parallel sockets over multiple paths between the source and destination sites.
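The chunk-and-reassemble arithmetic behind parallel-socket transfer can be sketched as follows; the function names and chunk size are hypothetical, and real SOS operates transparently on live TCP flows via OpenFlow rather than on in-memory lists:

```python
def split_round_robin(data: bytes, n: int, chunk: int = 4):
    """Split data into fixed-size chunks and assign them round-robin
    to n parallel paths, tagging each chunk with its byte offset."""
    paths = [[] for _ in range(n)]
    for i in range(0, len(data), chunk):
        paths[(i // chunk) % n].append((i, data[i:i + chunk]))
    return paths

def reassemble(paths):
    """Merge chunks from all paths back into order using their offsets."""
    pieces = sorted(p for path in paths for p in path)
    return b"".join(b for _, b in pieces)

data = b"parallel transfer payload"
assert reassemble(split_round_robin(data, 3)) == data
```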
     128
     129= OFIU: !FlowScale =
     130
     131Demo participants: Chris Small (Indiana University)
     132
     133We plan to demonstrate !FlowScale, a load-balancing-as-a-service tool that distributes network traffic over multiple switches and ports. The !FlowScale tool uses one of several user-chosen algorithms to hash traffic by IP prefix, VLAN, or Ethertype, and balances traffic based on the selected characteristics. Indiana University is deploying the !FlowScale software into production as part of a high-performance IDS cluster. It is intended that applications such as !FlowScale deployed in production networks will foster deployment of technologies such as OpenFlow that can be used by GENI for research.
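The core of hash-based traffic distribution can be sketched as follows; the choice of MD5 and the `pick_port` helper are illustrative assumptions, not !FlowScale's actual implementation. The key property is consistency: the same header value (here an IP prefix) always maps to the same output port, so one IDS sensor sees all traffic for that prefix.

```python
import hashlib

def pick_port(key: str, ports):
    """Hash a header field (e.g. an IP prefix or VLAN ID rendered as a
    string) and map it deterministically onto one of the output ports."""
    digest = hashlib.md5(key.encode()).digest()
    return ports[int.from_bytes(digest[:4], "big") % len(ports)]

sensors = [1, 2, 3, 4]
print(pick_port("10.0.0.0/24", sensors))  # same prefix -> same sensor
```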
     134
     135= OFUWI: Network Coding =
     136
     137Demo participants: Nairan Zhang (University of Wisconsin - Madison)
     138
     139Our demo at GEC 12 will focus on presenting our progress on the Network Coding project. At this time, our encoder is able to combine multiple video streams, and the decoder later separates them. Video streams could potentially be provided by accessing two GENI WiMAX nodes, one located in Madison, WI and the other at the Polytechnic Institute of New York University, Brooklyn, NY. This GENI experiment integrates NetFPGA cards as a key component of our high-performance router. We will present a poster with our current results.
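The basic arithmetic of the coding step can be illustrated with a simple XOR sketch; the demo's encoder works on video streams in NetFPGA hardware, so this shows only the idea, with made-up packet contents:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Combine two equal-length packets by bytewise XOR."""
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"stream-A"
p2 = b"stream-B"
coded = xor_bytes(p1, p2)          # encoder sends one coded packet
recovered = xor_bytes(coded, p1)   # receiver holding p1 recovers p2
assert recovered == p2
```

A receiver that already holds one of the original packets (its side information) decodes the other by XORing again, which is why a single coded transmission can serve two receivers.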
     140
     141= OnTimeMeasure =
     142
     143Demo participants: Prasad Calyam (Ohio Supercomputer Center)
     144
     145We will demonstrate various I&M capabilities of OnTimeMeasure software/service available for GENI experimenters. We will present two GENI experiment case studies: "Resource allocation in virtual desktop clouds" led by The Ohio State University, and "Emulating cloud dynamics for performance sensitive applications" led by Purdue University.
     146
     147= OpenFlow at UMass Lowell =
     148
     149Demo participants: Yan Luo (UMass Lowell)
     150
     151Demonstration of initial deployment of OpenFlow switch on UMass Lowell campus network.
     152
     153= ORCA =
     154
     155Demo participants: Ilia Baldin (RENCI)
     156
     157ORCA project will demonstrate the new features of the ORCA framework - topology embedding, ProtoGENI interoperability, Attribute-based Authorization Control, OpenFlow integration.
     158
     159= ProtoGENI / Flack / INSTTOOLS =
     160
     161Demo participants: Rob Ricci (University of Utah)
     162
     163We will do a demonstration of creating slices on ProtoGENI and PlanetLab resources using the Flack interface, and will demonstrate instrumentizing the slices with the Kentucky INSTTOOLS software. This demo will cover the same material (in condensed form) as our tutorial earlier in the day, so it will be a good opportunity for those who cannot attend the tutorial to see it.
     164
     165= S3I =
     166
     167Demo participants: Lokesh Mandvekar (SUNY Buffalo)
     168
     169Social networking and user ranking features for S3I creation.
     170
     171= Secure Content Centric Mobile Network =
     172
     173Demo participants: Mooi Chuah (Lehigh University)
     174
     175Secure Content Centric Mobile Network.
     176
     177= SecureUpdates =
     178
     179Demo participants: Justin Cappos
     180
     181This demo describes an easy way to secure software update systems using the TUF project. TUF has a number of critical features that are important to developers distributing software, including: pushing out automatic updates, key revocation, and multi-party signing / trust. TUF is already being used by a new system (PrimoGENI), and we have demonstrated how it can be used by a large, mature system (PlanetLab) to securely distribute updates. The demo will show how easy it is to integrate TUF into your code.
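To illustrate the general idea of verifying signed update metadata before trusting a package, here is a minimal sketch. Note that TUF itself uses public-key signatures and role-based metadata, not the symmetric HMAC stand-in shown here; the function and key names are made up for illustration.

```python
import hashlib
import hmac

def verify_metadata(metadata: bytes, signature: bytes, key: bytes) -> bool:
    """Recompute the MAC over the metadata and compare it to the
    supplied signature in constant time. Only if this passes should
    an updater act on the metadata (e.g. fetch the listed package)."""
    expected = hmac.new(key, metadata, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

key = b"shared-demo-key"                     # hypothetical
metadata = b'{"package": "app-1.2.tar.gz"}'  # hypothetical
signature = hmac.new(key, metadata, hashlib.sha256).digest()
assert verify_metadata(metadata, signature, key)
assert not verify_metadata(metadata + b"tampered", signature, key)
```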
     182
     183= Serval =
     184
     185Demo participants: Erik Nordstrom and Mike Freedman (Princeton University)
     186
     187Modern Internet services operate under unprecedented multiplicity (in service replicas, host interfaces, and network paths) and dynamism (due to replica failure and recovery, service migration, and client mobility). Yet, today's end-host network stack still offers the decades-old host-centric communication abstraction that binds a service to a fixed IP address and port tuple.
     188
     189To better accommodate modern Internet services, our new Serval architecture introduces a service-centric end-host network stack that makes services easier to scale, more robust to churn, and adaptable to a diverse set of deployment scenarios. A key abstraction of our stack is service-level anycast with connection affinity, provided by a new service access layer that sits between the network and transport layers. We will demonstrate the use of Serval for load balancing and server selection, connection migration, and physical mobility, as well as describe how users can download, install, and use the Serval end-host stack.
     190
     191= SeRViTR =
     192
     193Demo participants: Tianyi Xing (Arizona State University)
     194
     195SeRViTR is a cloud-based network resource provisioning infrastructure. Users can subscribe to network resources based on their application requirements. A variety of applications can be deployed on the flexible SeRViTR system, e.g., cross-domain data sharing, virtual network provisioning, etc. The demo will focus on the data sharing and security services provided for network service users. We plan to incorporate SeRViTR into GENI's existing service domain as an aggregate to provide efficient and secure network provisioning to end users.
     196
     197= !ShadowNet =
     198
     199Demo participants: James Griffioen, Zongming Fei, and Hussamuddin Nasir (University of Kentucky, Laboratory for Advanced Networking)
     200
     201We will demonstrate how to create a ProtoGENI experiment with Juniper logical routers.
     202
     203We will also demonstrate how to collect and display measurement data from Juniper routers.
     204
     205= TIED ABAC =
     206
     207Demo participants: Ted Faber (ISI)
     208
     209Integrated ABAC authorization across two control frameworks.
     210
     211= Trema OpenFlow Controller =
     212
     213Demo participants: Hideyuki Shimonishi (NEC)
     214
     215Trema is an open source OpenFlow development environment where developers can code OpenFlow controllers.
     216
     217= UEN GENI =
     218
     219Demo participants: Joe Breen (University of Utah)
     220
     221The UEN GENI project will utilize the existing and emerging infrastructure capabilities of both the statewide Utah Education Network (UEN) and its project home, the University of Utah, to deploy a more widely distributed GENI testbed capability within the state of Utah. One key objective of UEN GENI will be to integrate GENI node co-location requirements into the design of the new off-campus data center that the University of Utah is developing near downtown Salt Lake City and where UEN is planning to relocate its primary node. In addition, GENI network requirements will be incorporated into the design of the Research@UEN optical network now under development in northern Utah to link the state's three research universities with the national research networks, which maintain primary nodes in the same telecommunications facility in Salt Lake City. This initiative will deploy GENI racks and switches supporting both the ProtoGENI and PlanetLab frameworks in addition to OpenFlow virtual switching functionality, initially within University of Utah and UEN data centers. In particular, the statewide reach of UEN will enable the consideration of an in-state high school for subsequent GENI rack co-location. This step will allow a set of talented high school students and their teachers to gain a first-hand feel for the GENI testbed and its research capabilities.
     222
     223= VMI-FED =
     224
     225Demo participants: Brian Hay (University of Alaska)
     226
     227We will demonstrate the use of the Alaska ORCA resources, which provide experimenters with access to low-bandwidth and/or high-latency connections in Alaska.
     228
     229= WiMAX at BBN =
     230
     231Demo participants: Manu Gosain (GENI Project Office, Raytheon BBN Technologies)
     232
     233WiMAX at BBN.
     234
     235= WiMAX at CU Boulder =
     236
     237Demo participants: Caleb Phillips and Dirk Grunwald (University of Colorado Boulder)
     238
     239We will present recent results in measurement and mapping of wireless coverage of the CU GENI WiMAX node. This work focuses on a new mapping technique that incorporates spatial modeling and geostatistical interpolation with careful placement and collection of samples (measurements). We will have a poster explaining our methods and a laptop-based demo showing coverage maps overlaid on Google Earth.
     240
     241= WiMAX at NYU Poly =
     242
     243Demo participants: Fraida Fund and Thanasis Korakis (Polytechnic Institute of NYU)
     244
     245We present as a case study a measurement study of mobile wireless Internet application QoE in a dense urban environment, conducted over the GENI WiMAX mesoscale deployment at NYU-Poly. This study serves to highlight the capability and versatility of the WiMAX deployment. We explain how we use the WiMAX resources at NYU-Poly and the OMF testbed framework to efficiently gather mobile wireless measurements over a wide area.
     246
     247= WiMAX at UCLA =
     248
     249Demo participants: Mario Gerla and Chien-Chia Chen (UCLA)
     250
     251The poster will describe the target multi homing environment and the use of network coding.
     252
     253= WiMAX at UW - Madison =
     254
     255Demo participants: Derek Meyer and Suman Banerjee (Wisconsin Wireless and NetworkinG Systems (WiNGS) laboratory, Cisco Systems)
     256
     257!WiRover Demo.
     258
     259= XIA =
     260
     261Demo participants: Suk-Bok Lee and Peter Steenkiste (Carnegie Mellon University)
     262
     263The eXpressive Internet Architecture (XIA) is an NSF-funded project that is part of the Future Internet Architecture initiative. XIA addresses the growing diversity of network use models, the need for trustworthy communication, and the growing set of stakeholders who coordinate their activities to provide Internet services. XIA addresses these needs by exploring the technical challenges in creating a single network that offers inherent support for communication between current communicating principals--including hosts, content, and services--while accommodating unknown future entities. For each type of principal, XIA defines a narrow waist that dictates the application programming interface (API) for communication and the network communication mechanisms. XIA provides intrinsic security in which the integrity and authenticity of communication is guaranteed. XIA enables flexible context-dependent mechanisms for establishing trust between the communicating principals, bridging the gap between human and intrinsically secure identifiers. This project includes user experiments to evaluate and refine the interface between the network and users, and studies that analyze the relationship between technical design decisions, and economic incentives and public policy.
     264
     265A prototype XIA router has already been implemented as a Click module. In this demo, we use ProtoGENI infrastructure to test this prototype router.
     266
     267= XSP and LAMP =
     268
     269Participants: Ezra Kissel (University of Delaware); Matthew Jaffee and Martin Swany (Indiana University)
     270
     271This demonstration will show the eXtensible Session Protocol (XSP) and a networking approach that we call SLaBS, for Session Layer Burst Switching. XSP is a session protocol that can signal the network, utilize network forwarding gateways, and alter the configuration of the network for the duration of the session. In this demo, bulk data transfers will create virtual circuits with OpenFlow and perform optimized transfers between SLaBS gateways. The demo will be monitored using the LAMP I&M system.