==== Symbiotic CAV Evolution: Software-Defined Infrastructure and Case Study in Public Safety ====

Connected and automated vehicles (CAVs) represent a paradigm shift in road transportation and the vehicle-centered experience. As a foundation for CAVs, vehicular wireless networking significantly impacts the feasibility, adoption, and performance of CAV applications. This inherent coupling between CAV networking and applications, the wide spectrum of CAV applications envisioned, and the continuous evolution of CAV networking and applications, along with their societal impact, require us to rethink our approaches to the research, development, and deployment of CAV networks and applications. Leveraging software-defined platforms and infrastructures for CAV networking and application deployment, we propose a promising approach that allows for symbiotic exploration and evolution of CAV applications and networks in shared real-world systems and environments. Using wireless-networked 3D vision for public safety as an example, we illustrate this approach using vehicular sensing, networking, and cloud computing resources in the national GENI (Global Environment for Network Innovations) infrastructure.

In particular, vehicles equipped with software-defined CAV innovation platforms are deployed to simultaneously support 3D vision for campus safety surveillance, vehicular sensing, and vehicular control networking emulation. By fusing real-time streamed videos with a 3D campus environment, 3D vision facilitates campus safety surveillance; internal and external vehicle sensing enables the modeling and development of CAV control algorithms for driving safety, which is also a major concern in public safety emergency response; and by integrating real-world vehicles with cloud-based simulation, vehicular control networking emulation enables exploration of next-generation vehicular wireless network solutions, which in turn enables future-generation CAV applications such as safe, green CAVs and real-time collaborative 3D vision for public safety emergency response.

Our work was demonstrated to senior leadership in Washington, DC (e.g., leaders from the White House Office of Science and Technology Policy, NSF, NIST, and DHS) in March 2015, and a YouTube video of the demo is available at https://www.youtube.com/watch?v=y_QxXA0MJzI.

Participants:
* Yuehua Wang
* Hai Jin
* Chuan Li
* Hongwei Zhang, hongwei@wayne.edu, Wayne State University
* Jing Hua

=== Security ===

==== Covert Storage Channel Analysis Using SDN and Experimentation on GENI ====

Covert channel communication is a method of sending information over a network without being noticed by normal users or network administrators and without alerting Intrusion Detection Systems (IDSs) or firewalls. Specifically, Covert Storage Channels (CSCs) use a common storage medium to send information secretly. Efficient detection of CSCs is critical to defeating data exfiltration and secret command-and-control networks. There are two major challenges in detecting and mitigating CSCs: (1) by its nature, CSC traffic is difficult to differentiate from normal traffic; and (2) complex computer networks generate big data, which makes it difficult to perform CSC detection and mitigation in a timely and accurate manner. Although there has been a wealth of work on detecting CSCs using stateful IDSs and traffic patterns, none of these works have employed the unique capabilities of Software Defined Networking (SDN), which relies on the separation of the data and control planes to give a holistic picture of network flows. In this project, we focus on a collaborative CSC detection and mitigation approach and its implementation. We employ SDN, which offers a wealth of information about traffic flows, to improve both efficiency and accuracy. We deploy monitors and correlators that dynamically coordinate with each other in CSC analysis. We study this approach for CSC analysis and mitigation on a realistic testbed, GENI (Global Environment for Network Innovations), and present preliminary results on the performance of this CSC detection approach.
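
The monitor/correlator idea above can be illustrated with a toy entropy test: a storage channel that encodes bits in a reused header field (e.g., the IP ID) pulls that field's value distribution far from its normal baseline. This is a hedged sketch only; the field choice, threshold, and synthetic traffic below are illustrative assumptions, not the project's actual detector.

```python
# Toy CSC check: compare the entropy of a header field (here, IP ID values)
# in a traffic window against a baseline learned from normal traffic.
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (in bits) of the observed field values."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def flag_covert_channel(window, baseline_entropy, tolerance=2.0):
    """Flag a window whose field entropy deviates strongly from baseline."""
    return abs(shannon_entropy(window) - baseline_entropy) > tolerance

# Normal traffic: IP ID values spread widely across the 16-bit space.
normal = [(i * 40503) % 65536 for i in range(1000)]
baseline = shannon_entropy(normal)

# Covert traffic: only two field values, encoding 0/1 bits of a message.
covert = [0x0000 if i % 3 else 0x0001 for i in range(1000)]

print(flag_covert_channel(normal, baseline))  # False
print(flag_covert_channel(covert, baseline))  # True
```

In the full system, a lightweight monitor would compute such per-flow statistics while a correlator cross-checks flagged flows before raising an alert.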

Participants:
* Yiyuan Hu
* Xiangyang Li
* Xenia Mountrouidou, mountrouidoup@wofford.edu, Wofford College

==== DDoS Flooding Detection and Containment Using SDN and Experimentation on GENI ====

Distributed Denial of Service (DDoS) attacks are well studied in terms of detection and mitigation. However, as network complexity and network traffic volumes have dramatically increased, these attacks have gained the spotlight again. Several advanced persistent threats and other complex attacks are initiated with a DDoS flooding attack. Thus, it is important to detect and mitigate DDoS attacks accurately and efficiently. The problem is to distinguish DDoS traffic from regular traffic and discover the instigator of the attack. Today's increasing network complexity and large volumes of data make this a challenging problem. We employ Software Defined Networking (SDN), which can enable targeted forensic evidence collection for computer network attacks. We present an innovative approach that coordinates distributed network traffic monitors, which conduct anomaly detection, and attack correlators, which perform deep packet inspection supported by Open Virtual Switches (OVS). Our collaborative detection and mitigation approach looks for network flooding attack signature constituents that possess different characteristics at different levels of information abstraction. Therefore, this approach is able not only to quickly raise an alert against potential threats, but also to follow it up with careful verification to reduce false alarms. We experiment with this SDN-supported collaborative approach to detect DDoS attacks on the Global Environment for Network Innovations (GENI), a realistic virtual testbed. The response times and detection accuracy demonstrate our method's effectiveness and scalability.
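
As one hedged illustration of the anomaly-detection side (not the project's actual monitor): a flooding attack concentrates traffic on the victim, which collapses the entropy of the destination-address distribution in a traffic window, and a window below a threshold can trigger an alert that the correlator then verifies. The addresses and threshold below are synthetic.

```python
# Toy DDoS detector: alert when destination-IP entropy collapses,
# indicating that traffic has concentrated on a single victim.
import math
from collections import Counter

def entropy(addrs):
    counts = Counter(addrs)
    n = len(addrs)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def ddos_alert(window, threshold=2.0):
    """True when the window's destination-address entropy is anomalously low."""
    return entropy(window) < threshold

normal = [f"10.0.0.{i % 50}" for i in range(500)]   # many destinations
attack = ["10.0.0.7"] * 450 + [f"10.0.0.{i % 50}" for i in range(50)]

print(ddos_alert(normal))  # False
print(ddos_alert(attack))  # True
```

In the SDN setting, the monitor would compute this statistic from flow counters exposed by the switches, then hand suspicious flows to a correlator for deep packet inspection.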

Participants:
* Xenia Mountrouidou, mountrouidoup@wofford.edu, Wofford College
* Xiangyang Li
* Tommy Chin
* Kaiqi Xiong

=== Tools, Testbeds and Federation ===

==== CloudFinder: A System for Processing Big Data Workloads on Volunteered Federated Clouds ====

The proliferation of private clouds that are often underutilized, and the tremendous computational potential of these clouds when combined, has recently brought forth the idea of volunteer cloud computing (VCC), a computing model in which cloud owners contribute underutilized computing and/or storage resources on their clouds to support the execution of applications of other members in the community. This model is particularly suitable for solving big data scientific problems. Scientists in data-intensive fields increasingly recognize that sharing volunteered resources from several clouds is a cost-effective way to solve many complex, data- and/or compute-intensive science problems. Despite its promise, VCC still remains at the vision stage at best. Challenges include the heterogeneity and autonomy of member clouds, access control and security, and complex inter-cloud virtual machine scheduling. In this paper, we present CloudFinder, a system that supports the efficient execution of big data workloads on volunteered federated clouds (VFCs). Our evaluation of the system indicates that VFCs are a promising, cost-effective approach to enabling big data science.

Participants:
* Abdelmounaam Rezgui, rezgui@cs.nmt.edu, New Mexico Institute of Technology
* Nickolas Davis

==== Deploying Private ExoGENI Racks on CloudLab ====

The new mid-scale infrastructures deployed by NSFCloud are now available for use by the GENI community. These infrastructures are intended for research into the future of cloud computing. This demonstration shows work toward enabling the deployment of private ExoGENI racks on CloudLab infrastructure. Private ExoGENI racks can serve both as a development space for the ExoGENI team to design and deploy new features at scale, and as a resource for GENI users who wish to perform controlled experiments on their own isolated ExoGENI rack.

The demonstration will include creating a CloudLab slice containing a functional ExoGENI rack and then submitting a slice request to the ExoGENI rack using its native API. The ExoGENI slice will then be used to run the ADCIRC storm surge modeling workflow.

Currently, private ExoGENI racks can be deployed on the APT, Clemson, and Wisconsin CloudLab clusters. We have successfully deployed private racks containing as many as 64 nodes (512 cores). In turn, these private ExoGENI racks have been used to create slices containing as many as 512 VMs. In addition, these ExoGENI VMs can utilize advanced hardware capabilities, such as the SR-IOV-enabled InfiniBand network fabric available at the APT site. For example, an ExoGENI slice composed of 64 VMs (512 cores) has been used to run the ADCIRC storm surge model at near-native performance, fast enough to contribute to an actual urgent hurricane simulation.

Continued work will expand the capabilities of private ExoGENI racks by enabling them to connect to dynamic circuit providers. This will allow us to incorporate private ExoGENI racks residing in CloudLab into the ExoGENI federation, as well as to deploy private ExoGENI-based federations composed of multiple private racks residing across many CloudLab sites.

Participants:
* Paul Ruth, pruth@renci.org, RENCI

==== FireAnt: An Autonomous and Distributed Framework for Scaling Heterogeneous Cloud Infrastructure ====

In a large-scale data center or across multiple cloud clusters, control and management is a non-trivial task. It includes resource discovery, reservation, monitoring, maintenance, and teardown. Centralized control of federation among different aggregate managers has recently become a popular method, as in the GENI deployment. However, such a mechanism requires additional external infrastructure, and the architecture cannot scale indefinitely due to the computing and access limitations of the control infrastructure. Furthermore, cloud infrastructure software such as OpenStack does not itself address this scalability issue when controlling thousands of nodes in a data center. Hence, we propose FireAnt to solve this scalability issue, enabling scaling from tens of nodes to a million nodes.

In this demo, each cluster in the grid is an OpenStack cluster running FireAnt. A cluster without any free resources initiates a request, which FireAnt sends out for resource discovery. A remote cluster with sufficient available resources accommodates the request, and its local FireAnt calls OpenStack APIs to allocate resources locally (e.g., a VM). A reply is sent back to the origin cluster, and the VMs in the two clusters are stitched together automatically by FireAnt using VXLAN. To end users, this process happens seamlessly and can be managed entirely from the origin cluster.
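
The request/reply/stitch flow just described can be simulated in a few lines. The `Cluster` class, its capacity numbers, and `stitch_vxlan` below are hypothetical stand-ins for OpenStack API calls and VXLAN configuration, meant only to show the decentralized control pattern.

```python
# Toy simulation of decentralized resource discovery between clusters:
# an origin cluster with no free resources floods a request, and the
# first peer with capacity allocates VMs and stitches a tunnel back.
class Cluster:
    def __init__(self, name, free_vms):
        self.name = name
        self.free_vms = free_vms
        self.tunnels = []

    def handle_request(self, origin, vms):
        """Accept the request only if enough local resources are free."""
        if self.free_vms >= vms:
            self.free_vms -= vms        # allocate via local cloud APIs
            self.stitch_vxlan(origin)   # connect the new VMs to the origin
            return True
        return False

    def stitch_vxlan(self, origin):
        """Record a (bidirectional) overlay tunnel between the clusters."""
        self.tunnels.append((self.name, origin.name))
        origin.tunnels.append((origin.name, self.name))

def discover(origin, peers, vms):
    """Flood the request; the first peer with capacity accommodates it."""
    for peer in peers:
        if peer.handle_request(origin, vms):
            return peer.name
    return None

local = Cluster("cluster-0", free_vms=0)   # no free local resources
grid = [Cluster("cluster-1", 2), Cluster("cluster-2", 16)]
peer = discover(local, grid, vms=8)
print(peer)  # "cluster-2"
```

Because each cluster answers only for its own resources, no central controller has to track global state, which is the property FireAnt relies on to scale.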

Participants:
* Ke Xu, ke_x@dell.com, Dell
* Rajesh Narayanan

==== Hybrid Cloud Technologies for HPC Queuing Systems Experiments for 'Simulation-as-a-Service' ====

Advanced manufacturing today requires diverse computation infrastructure for data processing. Our 'Simulation-as-a-Service' app currently runs compute jobs on OSU HPC resources. However, there is a need to access other computation resources as well. We provide users access to a variety of clouds, such as Amazon and CloudLab, as HPC compute clusters through the use of HPC queuing systems. The cloud infrastructure is deployed based on user requirements that are abstracted from a web site, converted to RSpecs, and stored in catalogs for reuse whenever similar requirements arise.
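
The requirements-to-RSpec step might look like the sketch below: a small abstraction (node names and sliver types) is turned into a minimal GENI version 3 request RSpec. This is a hedged illustration, not the project's actual converter; real requests also carry component manager IDs, disk images, and link topology.

```python
# Hypothetical converter from abstracted user requirements to a minimal
# GENI v3 request RSpec document.
import xml.etree.ElementTree as ET

RSPEC_NS = "http://www.geni.net/resources/rspec/3"

def requirements_to_rspec(nodes):
    """nodes: list of {'id': str, 'sliver_type': str} -> RSpec XML string."""
    rspec = ET.Element("rspec", {"type": "request", "xmlns": RSPEC_NS})
    for spec in nodes:
        node = ET.SubElement(rspec, "node", {"client_id": spec["id"]})
        ET.SubElement(node, "sliver_type", {"name": spec["sliver_type"]})
    return ET.tostring(rspec, encoding="unicode")

xml = requirements_to_rspec(
    [{"id": "hpc-head", "sliver_type": "raw-pc"},
     {"id": "hpc-worker", "sliver_type": "raw-pc"}]
)
print(xml)
```

Caching the generated document in a catalog then lets a later request with the same abstract requirements skip straight to slice creation.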

Participants:
* Prasad Calyam
* Ronny Bazan Antequera, rcb553@mail.missouri.edu, University of Missouri
* Ray Leto

==== The GENI Experiment Engine and the Ignite Visualizer ====

We demonstrate the GENI Experiment Engine, which provides one-click deployment of applications and services across the GENI infrastructure, and use it to deploy the Ignite Distributed Collaborative Visualization Engine, a collaborative visualization system for big data on any device with instantaneous response to user requests.

Participants:
* Rick McGeer, rick@mcgeer.com
* Andy Bavier
* Glenn Ricart
* Matt Hemmings
* Robert Krahn
* Dan Ingalls
* David Lary
