| 18 | The most contentious service discussed was a slice tracker, or broker in the nomenclature of ORCA/BEN. The main idea is that a global service at the clearinghouse would track resource allocations to every registered (i.e., bound to a project) slice. As long as the responses back to the CH regarding resource allocations are asynchronous, after-the-fact, and non-blocking, people did not object strongly. However, no one could come up with a good use case to argue for the need. Three types of global policies were discussed, but none of them seems to need a centralized slice tracker at the clearinghouse. First, we could imagine the NSF making some global policy based upon the attributes of a slice owner (e.g., students can create slices with at most X hosts). As long as it is a limit on a single slice rather than on how many slices an actor can create, it could be verified and enforced at the SAs where slices are created; agreements to enforce these common policies would just need to be made, and the CH could delegate their enforcement (see the sketch after this table). A second type of policy might be whether a slice holds too many key resources or is not behaving like the size of slice it registered as. This could be handled in the same way, by the SAs in a distributed manner. A third type might concern all GENI slices in total, but still only at a particular aggregate. For example, an agreement with FIRE might not allow the sum of all GENI slices to use more than a certain percentage of their transatlantic link. This can simply be handled as a local component or aggregate manager policy. The only policies we could not check without an omniscient, centralized slice tracker are those about usage by all GENI slices in aggregate across multiple AMs; for example, we could not check a general policy like "FIRE slices can use at most X hosts in GENI". However, it is not clear why such a policy would ever be needed, as it is really up to the aggregate operators themselves. It was suggested that some key resources, like backbone connectivity and GENI racks, might need to be considered together (e.g., a policy that FIRE users get at most Z resources *total* from either Internet2 or GENI Racks), which spurred a discussion of whether aggregate authorities might run multiple aggregates and how they could enforce a global policy across multiple AMs. Regardless, a decision needs to be made whether such a slice tracker is required at the CH. If so, there should be clear use cases and requirements, and it should be implemented in a non-blocking way. As it stands, there is no compelling case for what appears to be a lot of work to implement. |
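To make the first policy type concrete, here is a minimal sketch of how an SA could enforce a per-slice policy locally at slice-creation time. This is plain Python, not actual GENI or SA code; the function, attribute names, and the host limit are all illustrative assumptions. The point it demonstrates is the one argued above: because the limit applies to a single slice, the check needs only the request in hand, so no global state or blocking call to a centralized slice tracker at the CH is required.

```python
# Hypothetical sketch, not GENI/SA API code. All names and the
# limit below are assumptions chosen for illustration.

MAX_HOSTS_FOR_STUDENTS = 10  # assumed NSF-style per-slice policy parameter


class PolicyViolation(Exception):
    """Raised when a slice request violates a policy delegated by the CH."""


def check_slice_request(owner_attributes, requested_hosts):
    """Enforce a per-slice policy at the SA where the slice is created.

    The limit concerns a single slice (not totals across slices or
    aggregates), so the SA can decide locally and in a non-blocking
    way, with no centralized slice tracker involved.
    """
    if "student" in owner_attributes.get("roles", ()):
        if requested_hosts > MAX_HOSTS_FOR_STUDENTS:
            raise PolicyViolation(
                f"students may request at most "
                f"{MAX_HOSTS_FOR_STUDENTS} hosts per slice"
            )
    # ...other delegated common policies would be checked here...


# Example: a student asking for 50 hosts is rejected at creation time.
try:
    check_slice_request({"roles": ["student"]}, requested_hosts=50)
except PolicyViolation as err:
    print(f"slice creation rejected: {err}")
```

By contrast, a cross-AM policy like "FIRE slices can use at most X hosts in GENI" cannot be written this way: the check would need usage totals from every AM, which is exactly the omniscient, centralized tracking the discussion found no compelling case for.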