Changes between Version 8 and Version 9 of ClearinghousePanelSummary


Timestamp: 11/17/11 12:51:47 (12 years ago)
Author: Aaron Helsinger

  • ClearinghousePanelSummary

Most of the secondary services were not contentious as long as it was clear that the clearinghouse would not have exclusivity on those services. In particular, people were indifferent as to whether the CH acted as an additional identity portal and slice authority. No one seemed to care whether the CH provided a resource discovery service as long as it was not "in the way", i.e., its use was optional.

The most contentious service discussed was a slice tracker, or broker in the nomenclature of ORCA/BEN. The main idea is that there would be some global service at the clearinghouse that would track resource allocations to every registered (i.e., bound to a project) slice. As long as the reports of resource allocations back to the CH are asynchronous, after-the-fact, and non-blocking, people did not care too much. However, no one could come up with a good use case to argue for the need.

Three types of global policies were discussed, but none of them seems to need a centralized slice tracker at the clearinghouse. In the first case, we could imagine the NSF making some sort of global policy based upon the attributes of a slice owner (e.g., students can create slices with at most X hosts). As long as the policy limits a single slice rather than how many slices an actor may create, it could be verified and enforced at the SAs where slices are created. Agreements to enforce these common policies would just need to be made, and the CH could delegate their enforcement. Another type of policy might be whether a slice has too many key resources or is not acting like the size of slice it registered as. This could be handled in the same way as above, by the SAs in a distributed manner. A third type might concern all GENI slices in total, but still only at a particular aggregate. For example, an agreement with FIRE might not allow the sum of all GENI slices to use more than a certain percentage of their transatlantic link. Of course, this can just be handled as a local component or aggregate manager policy.

The only things we could not check without involving an omniscient, centralized slice tracker are policies about the usage of all GENI slices in aggregate across multiple AMs. For example, we could not check a general policy such as "FIRE slices can only use X hosts in GENI." However, it is not clear why such a policy would ever be needed, as it is really up to the aggregate operators themselves. It was thought that some key resources, like backbone connectivity and GENI Racks, might need to be considered together (e.g., a policy that FIRE users get at most Z resources *total* from either Internet2 or GENI Racks), and that spurred a discussion of whether aggregate authorities might have multiple aggregates and how they could enforce a global policy across multiple AMs.

Regardless, a decision needs to be made whether or not such a slice tracker is required at the CH. If so, there should be clear use cases and requirements, and it should be implemented in a non-blocking way. As it stands, there is no compelling case for what appears to be a lot of work to implement.

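To make the distinction concrete, here is a minimal sketch, in Python with purely hypothetical function names, attribute keys, and limits (none of them come from any specified GENI, SA, or clearinghouse API), contrasting a per-slice check that an SA could enforce locally at slice creation with a cross-AM aggregate check that would require exactly the kind of omniscient, centralized slice tracker discussed above.

{{{
#!python
# Illustrative sketch only: names, attribute keys, and limits are hypothetical
# and not part of any GENI, SA, or clearinghouse API.

def sa_per_slice_check(owner_attrs, requested_hosts, max_hosts_for_students=10):
    """A per-slice policy (e.g., 'students may request at most X hosts') needs
    only the owner's attributes and the single slice request, so the SA where
    the slice is created can enforce it with no global state."""
    if "student" in owner_attrs.get("roles", []):
        return requested_hosts <= max_hosts_for_students
    return True

def tracker_cross_am_check(allocations_by_am, org="FIRE", max_total_hosts=100):
    """A cross-AM aggregate policy (e.g., 'FIRE slices may use at most X hosts
    across all of GENI') needs the sum of allocations reported by every AM,
    which is exactly what a centralized slice tracker would have to collect."""
    total = sum(
        alloc["hosts"]
        for am_allocs in allocations_by_am.values()  # one list of allocations per AM
        for alloc in am_allocs
        if alloc.get("org") == org
    )
    return total <= max_total_hosts

# Example: the per-slice check needs nothing beyond the single request, while
# the cross-AM check only works with a global view of all allocations.
print(sa_per_slice_check({"roles": ["student"]}, requested_hosts=4))  # True
print(tracker_cross_am_check({
    "am1": [{"org": "FIRE", "hosts": 60}],
    "am2": [{"org": "FIRE", "hosts": 70}],
}))  # False: 130 > 100, visible only with a global view
}}}
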
= Project Size =
One of the last things discussed was the concept of different project sizes, though we did not get into the QoS guarantees drafted for registering projects of different sizes. The point was raised that most people do not know how many resources they will need at the beginning, and hence they usually default to asking for everything they can. A penalty of a longer wait time may make them more conservative, but there would then have to be a way to update the project size and description.

More to the point, it does not seem to make sense to codify these sizes at this stage, but only after we have operational experience with a CH. Therefore, Adam Slagell will remove that section from the clearinghouse document and make changes to allow for delegation of the project leader role. The more important thing seems to be to have a mechanism in place for important, shared, and contentious resources to be reserved with priority. Most often, decisions should be made by local aggregate authorities, but special exceptions to reserve unusually large slices may be needed. For example, a conference or GEC demo may require a large reservation of GENI Racks or network backbone bandwidth that would normally violate local policy. The GOG could in that instance grant a special exemption for a limited time.

= Future Topics =
We did not get to discuss the information collected at project registration, nor how to compose the GOG. We also did not discuss a federation charter. The discussion of what information is collected at project registration is influenced by the decision not to define project sizes now, but it should be continued in more detail between now and the next GEC. Finally, the topic of CA policies was mentioned, but it was agreed that they are not yet needed while there are so few credential-issuing entities; at this point, everyone knows each other. However, more rigorous CA policies may need to be developed in the future.