= VMI-FED Project Status Report =

Period: November 18, 2010 – March 31, 2011 (Post GEC10)

= Overview =

This quarterly status report (QSR) documents the progress made by the University of Alaska Fairbanks (UAF) on Project Number 1773, Virtual Machine Introspection (VMI) and Development of a Model Federation Framework (MFF) for GENI, a.k.a. VMI-FED. We successfully demonstrated VMI functionality on an instance of the ORCA control framework at the ninth GENI Engineering Conference (GEC9).

== I. Major accomplishments ==

=== A. Milestones achieved ===

Gained support from the University of Alaska OIT staff, who granted us public IP address space and with it the ability to create a distributed ORCA installation on the UA network.

A Eucalyptus server is installed and running here at UAF. This is a large part of the process of getting our own ORCA instance up and running.

=== B. Deliverables made ===

(02/18/11) Selection of Alaska resources to be federated with GENI.
        Documented the federation options and described the new capabilities these resources would bring to GENI.

(03/16/11) VMI technology consistent with GENI I&M Architecture.
        Highlighted how VMI fits into the current version of the I&M system diagram.
        Instrumentation/measurement data from VMI now conforms to the PerfSONAR schema for measurement data (an illustrative sketch follows this list).

(Due 03/18/11) Demonstration and outreach at GEC10
        Promoted the use of VMI technology on ORCA.[[BR]]
        Demonstrated VMI that included I&M/PerfSONAR-compliant output (actual demo moved to April 4, 2011).
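
As a rough illustration only, the Python sketch below shows the shape of the measurement records we have in mind: raw VMI samples wrapped in a metadata block describing what was measured and a data block holding timestamped values, in the spirit of PerfSONAR measurement records. The function name, field names, and the vmi.memory.process_count event type are assumptions made for this report, not our actual code and not the exact PerfSONAR schema.

{{{#!python
import json
import time

def wrap_vmi_samples(vm_id, event_type, samples):
    """Wrap raw VMI samples in a PerfSONAR-style metadata/data record.

    Illustrative sketch only: field names are assumptions, not the
    normative PerfSONAR schema.
    """
    return {
        "metadata": {
            "subject": {"vm": vm_id},             # which guest VM was inspected
            "eventType": event_type,              # hypothetical VMI event type
            "parameters": {"tool": "vmi-probe"},  # hypothetical collector name
        },
        "data": [
            {"timeValue": ts, "value": value}     # one datum per introspection sample
            for ts, value in samples
        ],
    }

if __name__ == "__main__":
    # Hypothetical samples: process count observed in the guest at each timestamp.
    now = int(time.time())
    samples = [(now, 42), (now + 30, 45), (now + 60, 44)]
    record = wrap_vmi_samples("vm-01", "vmi.memory.process_count", samples)
    print(json.dumps(record, indent=2))
}}}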

== II. Description of work performed during last quarter ==

=== A. Activities and findings ===

We have selected resources in Alaska to form a distributed ORCA installation. These include server(s) at:
        The computer science department at UAF[[BR]]
        The UAF datacenter[[BR]]
        The Bristol Bay campus in Dillingham, Alaska[[BR]]
        The Ketchikan, Alaska campus[[BR]]

Two of these locations have been changed at the request of the IT staff here at UA. They preferred Barrow and Kotzebue to Ketchikan and Dillingham, since those locations provide a better environment for the initial deployments of experimental equipment, i.e., they allow a more reliable phased federation. These locations will still provide high-latency satellite links to experimenters, as stated in the resource selection document for our team's February 15th deliverable.

In preparation for this distributed installation, we have reserved a set of public IP addresses at the two UAF locations. We have also set up Eucalyptus and are in the final stages of setting up ORCA, so that we have a nearby installation we can easily modify. Once we are happy with that installation, we will federate resources in Barrow and Kotzebue, verifying at each stage that the devices are acting as expected and operating within UA's requirements.

A classification schema very similar to the one described in the Model Federation Framework documentation has undergone significant development. This schema is intended to store the high-level resource descriptions of the University of Alaska (UA) clearinghouse.
        This includes a data dictionary used for validating high-level resource descriptions saved into the UA clearinghouse (a hypothetical validation sketch follows this list).[[BR]]
        The data dictionary's management and public interfaces are mostly complete.[[BR]]
        The clearinghouse's management interface is mostly complete.
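
As a rough, hypothetical sketch of how a data dictionary can validate a high-level resource description before it is stored in the UA clearinghouse, the Python fragment below checks submitted attribute names and values against an allowed vocabulary. The attribute names, allowed values, and the validate_description function are illustrative assumptions, not the actual UA data dictionary contents or interfaces.

{{{#!python
# Hypothetical data dictionary: allowed attributes and their allowed values.
DATA_DICTIONARY = {
    "node_type": {"server", "vm", "sensor"},
    "location":  {"UAF CS", "UAF datacenter", "Barrow", "Kotzebue"},
    "link_type": {"ethernet", "satellite"},
}

def validate_description(description):
    """Return a list of problems; an empty list means the description is accepted."""
    problems = []
    for attribute, value in description.items():
        allowed = DATA_DICTIONARY.get(attribute)
        if allowed is None:
            problems.append("unknown attribute: %s" % attribute)
        elif value not in allowed:
            problems.append("invalid value for %s: %s" % (attribute, value))
    return problems

# Example: a resource description submitted to the clearinghouse.
resource = {"node_type": "server", "location": "Barrow", "link_type": "satellite"}
print(validate_description(resource))   # [] -> description would be stored
}}}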

=== B. Project participants ===

VMI Trac[[BR]]
[[BR]]
Dr. Brian Hay[[BR]]
Principal Investigator[[BR]]
brian.hay@alaska.edu[[BR]]
[[BR]]
Brandon Marken[[BR]]
PhD Student[[BR]]
bamarken@alaska.edu[[BR]]
[[BR]]
John Quan[[BR]]
Research Assistant Lead[[BR]]
jquan2@alaska.edu[[BR]]
[[BR]]
Bob Torgerson[[BR]]
Graduate Student[[BR]]
rltorgerson@alaska.edu[[BR]]
[[BR]]
MFF Trac[[BR]]
[[BR]]
Dr. Kara Nance[[BR]]
Principal Investigator[[BR]]
klnance@alaska.edu[[BR]]
[[BR]]
Dr. Jon Genetti[[BR]]
Principal Investigator[[BR]]
jdgenetti@alaska.edu[[BR]]
[[BR]]
Donald Kline[[BR]]
Research Assistant Lead[[BR]]
dpkline@alaska.edu[[BR]]

Brian, Kara, and Bob attended GEC10.

=== C. Publications (individual and organizational) ===

Submitted a grant application to NSF requesting funding to use GENI as a testbed for complex systems.

Dr. Kara Nance, Dr. Brian Hay, and John Quan's paper, Investigating Mutualistic Security Service Models for Large-Scale Virtualized Environments, considers trading services for resources in large-scale networks and was accepted by IEEE's IT Professional magazine. Inspiration for this paper came from the need to promote real-world uses for GENI and to increase opt-in user federation and experimentation in GENI. The paper is in its final revisions, with a publication date TBD.

Don Kline and John Quan completed the paper Attribute Description Service for Large-Scale Networks, which is about standardizing resource classifications in large-scale networks. Inspiration for this paper follows directly from the findings documented in the final Spiral 2 deliverable made on 9/30/10. This paper was accepted and will be published at the Human-Computer Interaction International conference in July 2011.

=== D. Outreach activities ===

Promoted GENI as a possible future testing and deployment environment for honeynets at the 2011 Honeynet Project Annual Workshop.

=== E. Collaborations ===

We are installing the Remotely Accessible Virtual Environment (RAVE) infrastructure across the United States; there are now six sites up that can potentially host GENI nodes.

Discussed with Prasad Calyam the use of the RAVE infrastructure to expand OnTimeMeasure's goals.

=== F. Other Contributions ===

=== G. Goals ===

To meet our upcoming deliverables, we plan to have our distributed ORCA environment running, including each of the four locations mentioned under Activities and Findings. We will also provide documentation detailing how to use and access these resources. We also plan to have VMI functioning in this environment and to start producing experimenter documentation for VMI tools in accordance with our 9/9/11 deadline.

At GEC11, we plan to present a data dictionary approach to resource descriptions in order to provide high-level descriptions to users. The goal is to show how standardized terminology can enable seamless user interfaces.