Changes between Initial Version and Version 1 of GENICloud/FinalReport

04/20/17 16:39:11
Vic Thomas



= Final Report =
'''Prepared by Rick McGeer'''
We migrated GENICloud to an !OpenStack-based management framework, using KVM and LXC as, respectively, the virtual machine and container environments, replacing our previous PlanetLab-based installation.  The goals of this migration are threefold:
 1. To demonstrate that the SFA can be implemented on top of !OpenStack, and to develop a reference implementation of SFA on !OpenStack that we will be able to contribute back to the !OpenStack community.  Eventually, this could lead to a very large number of SFA-compliant Clouds throughout the US and the world.
 2. To leverage the tools and infrastructure developed by the !OpenStack community for GENI researchers; in particular, the integration of the Quantum programmable network infrastructure into the SFA GENI stack.
 3. To begin migrating the PlanetLab operational infrastructure to an !OpenStack basis.  PlanetLab is a 10-year-old infrastructure whose functionality has since been partially duplicated by popular Cloud management stacks such as !OpenStack and Eucalyptus, and by virtualization technologies such as LXC and KVM.  The current PlanetLab implementation -- MyPLC as the cluster controller and Vservers on the nodes -- involves tens of thousands of lines of code on the controller and a 35,000-line kernel patch, which must be maintained by a small team of PlanetLab developers.  Moving to standard components offers a more sustainable future for PlanetLab.
A further motivation for the GENICloud effort was to federate cloud systems into the distributed and experimental infrastructures provided by GENI.  Current GENI platforms are largely not designed to scale rapidly on demand.  Under a federation of a standard Cloud platform and GENI, a more comprehensive platform is available to users; for example, development, computation, and data generation can be done within the cloud, while deployment of applications and services can be done on distributed platforms (e.g., PlanetLab).  By taking advantage of cloud computing, GENI users can dynamically scale their services on GENI depending on demand, and benefit from other services and uses of the cloud.  Take a service that analyzes traffic data as an example: the service can deploy traffic collectors on PlanetLab to collect Internet traffic data, and the collected traffic can be stored and processed in the cloud.
PlanetLab has high global penetration.  However, it was not designed for either scalable computation or large data services.  Some services and experiments require huge amounts of data, or need to persist large amounts of instrumentation data; PlanetLab also lacks the computation power for CPU-intensive services or experiments.  GENICloud fills this gap by federating heterogeneous resources, in this case a cloud platform with PlanetLab.
Most of the implementation effort of GENICloud concentrated on implementing the aggregate manager on top of the existing Cloud Controller, under both Eucalyptus and !OpenStack.  A key design choice was whether to modify the existing Controller, or to use an overlying controller which implemented SFA API calls by calling through to the underlying controller.  We chose the latter.  In our architecture, both the fact that the user is coming through an SFA layer and the identity of the underlying controller are hidden.  The SFA aggregate manager acts as a mediator between user requests and an underlying cloud.  The primary advantage is that we need not maintain modifications and patches in existing Cloud managers, updating them as new releases come out; rather, we only need to update our Aggregate Manager as the interface to the Cloud manager changes.  We have not yet implemented, but do not exclude, optimizations specific to particular Cloud controllers such as Tashi.  A secondary advantage is that the identity of the underlying Cloud controller is hidden from the user, meaning that user-facing tools and scripts don't change when the underlying Cloud controller changes.
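The mediator architecture described above can be sketched as an adapter layer.  This is a minimal illustration, not the actual GENICloud code; all class and method names here (`CloudDriver`, `OpenStackDriver`, `create_sliver`, etc.) are hypothetical.

```python
# Illustrative sketch of the mediator pattern described above: the SFA
# aggregate manager exposes SFA-style calls and forwards them to whatever
# cloud controller sits underneath.  All names here are hypothetical.

class CloudDriver:
    """Interface the aggregate manager programs against."""
    def create_instance(self, name):
        raise NotImplementedError

class OpenStackDriver(CloudDriver):
    def create_instance(self, name):
        # A real driver would call the Nova API here; stubbed for illustration.
        return f"openstack:{name}"

class EucalyptusDriver(CloudDriver):
    def create_instance(self, name):
        return f"eucalyptus:{name}"

class AggregateManager:
    """Implements SFA calls by delegating to the underlying driver.
    The caller never learns which controller is in use."""
    def __init__(self, driver):
        self._driver = driver

    def create_sliver(self, slice_urn):
        return self._driver.create_instance(slice_urn)

am = AggregateManager(OpenStackDriver())
print(am.create_sliver("urn:publicid:IDN+example+slice+demo"))
```

Swapping `OpenStackDriver` for `EucalyptusDriver` changes nothing on the caller's side, which is the point of the design: only the driver tracks the underlying controller's interface.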
We built an initial implementation of the SFA on !OpenStack, using LXC as the virtualization layer and Nova (the !OpenStack compute manager) as the node manager.  MyPLC is retained as the front end to the cluster, and slices and slivers can be created using the standard PlanetLab front end.  PlanetLab user IDs, passwords, and keys are used, so that the GENICloud resource is available to PlanetLab users who agree to the !OpenCirrus terms and conditions.
We have built a command-line tool, which allocates VMs through the sfi API.
We then shifted attention to the experimental use of containers, rather than VMs, as the unit of tenancy; this mirrors the PlanetLab model and offers greater efficiency in the use of hardware resources.  This was accomplished by reserving some nodes for containers and some for VMs, and using Nova's options for VM and container allocation.
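The node-partitioning idea above can be sketched as a scheduler that keeps a separate node pool per virtualization type and steers each request to the matching pool.  This is an illustrative assumption about the scheme, not Nova's actual scheduler; the pool contents and load metric are made up.

```python
# Sketch of the partitioning described above: some nodes are reserved for
# LXC containers, others for KVM virtual machines, and a sliver request is
# steered to the matching pool.  Hypothetical; not the real Nova scheduler.

NODE_POOLS = {
    "lxc": ["node1", "node2"],   # container hosts
    "kvm": ["node3", "node4"],   # VM hosts
}

def pick_node(sliver_type, load):
    """Return the least-loaded node in the pool for the requested type."""
    pool = NODE_POOLS.get(sliver_type)
    if pool is None:
        raise ValueError(f"unknown sliver type: {sliver_type}")
    return min(pool, key=lambda n: load.get(n, 0))

load = {"node1": 5, "node2": 2, "node3": 0, "node4": 7}
print(pick_node("lxc", load))  # node2: the lighter-loaded container host
```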
We further experimented with implementing the Attribute-Based Access Control (ABAC) mechanism on PlanetLab.  This was a test to determine whether an existing architecture could be modified to use this newly proposed GENI standard mechanism.  Further, policy checks are hardcoded at various places in the SFA, and ABAC pulls them together into one spot.
We built a cohesive policy that can be supported by all Aggregates.  Custom relationships can be constructed as needed per domain, while the common subset of policies remains standardized.  This standardizes policy interfaces without requiring a single, uniform set of policies and roles across the federation.  Custom policies and relationships can be easily built on top of the existing common set of policies.
ABAC is a clean framework for delegating rights across the federation, including auditing of the delegation chain.  The ABAC mechanisms integrated into the policy support not only internal delegation, but external delegation as well.  For example, delegating rights to a GENI entity can be achieved using a single attribute.  Each aggregate can expose its own policies, see the policies of other aggregates, and specify relationships between them.
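The delegation-chain idea can be illustrated with a toy RT0-style ABAC evaluator: credentials either grant an attribute directly ("A.r includes B") or delegate it ("A.r includes everyone holding B.s"), and membership is derived by chasing the chain to a fixed point.  The attribute names below are invented for illustration; this is not the GENI ABAC library.

```python
# Toy ABAC (RT0-style) sketch of the delegation described above.
# Credentials are either direct grants (attr <- principal) or delegations
# (target_attr <- source_attr).  Illustrative only.

def compute_members(direct, delegations):
    """direct: attribute -> set of principals granted it directly.
    delegations: list of (target_attr, source_attr) pairs meaning
    "anyone holding source_attr also holds target_attr".
    Returns the full attribute -> member-set map, computed to a fixed point."""
    members = {attr: set(ps) for attr, ps in direct.items()}
    changed = True
    while changed:
        changed = False
        for target, source in delegations:
            src = members.get(source, set())
            dst = members.setdefault(target, set())
            if not src <= dst:          # new members flow along the delegation
                dst |= src
                changed = True
    return members

# Hypothetical chain: alice is a SiteA student; SiteA delegates membership
# to its students; GENI delegates access to SiteA members.
direct = {"SiteA.student": {"alice"}}
delegations = [
    ("SiteA.member", "SiteA.student"),
    ("GENI.access", "SiteA.member"),
]
m = compute_members(direct, delegations)
print("alice" in m["GENI.access"])  # True: rights flow along the chain
```

The chain of credentials consulted during the fixed-point computation is exactly the delegation chain an auditor would inspect.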
This enhances the SFA's decentralized structure.  ABAC does not require a credential database to authorize access, which fits perfectly into a decentralized framework.
In our implementation, policy checks happen at each Manager.  Each of PlanetLab's Components has a unique policy that governs its API.  Delegating policy control to each component gives greater decentralized control.  For example, a Registry Manager is not allowed to initiate a call to the Aggregate Manager, but the Aggregate Manager is allowed to call the Registry Manager.  These restrictions are outlined in the existing policy tables and the ABAC policy tables.
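The inter-manager restriction above can be expressed as a small policy table consulted before one manager invokes another.  The manager names follow the example in the text; the table representation itself is an assumption for illustration.

```python
# Sketch of the inter-manager call policy described above: the table lists
# which manager may initiate calls to which.  The entries follow the example
# in the text; the representation is illustrative.

ALLOWED_CALLS = {
    ("AggregateManager", "RegistryManager"),   # AM may call the Registry
    # note: no ("RegistryManager", "AggregateManager") entry -- the Registry
    # Manager may not initiate calls to the Aggregate Manager
}

def may_call(caller, callee):
    """Policy check performed before one manager invokes another."""
    return (caller, callee) in ALLOWED_CALLS

print(may_call("AggregateManager", "RegistryManager"))  # True
print(may_call("RegistryManager", "AggregateManager"))  # False
```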
We have also begun work integrating sfatables into the ABAC policy-driven framework.  Specifically, when you deploy an !AggregateManager you can also set an IP-range attribute that controls who has access.  We demonstrate that ABAC can be retrofitted into an existing system and describe a path for incremental deployment.
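An IP-range access attribute like the one described above could be checked as follows, using Python's standard ipaddress module.  The CIDR value is a made-up example and this is not the actual sfatables implementation.

```python
# Sketch of an IP-range access attribute like the one described above:
# requests are admitted only if the client address falls inside the
# configured range.  The CIDR below is a made-up example value.
import ipaddress

ALLOWED_RANGE = ipaddress.ip_network("192.0.2.0/24")  # hypothetical attribute value

def client_allowed(client_ip):
    """Admit the request only if the client is inside the configured range."""
    return ipaddress.ip_address(client_ip) in ALLOWED_RANGE

print(client_allowed("192.0.2.42"))   # True
print(client_allowed("203.0.113.5"))  # False
```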
We also used GENICloud in CS462 at UVic in 2012, as described elsewhere.
== GENICloud Status ==
The GENICloud aggregate is no longer available for experimenters.