Custom Query (138 matches)

Results (10 - 12 of 138)

Ticket Resolution Summary Owner Reporter
#178 fixed Missing FIU compute resource aggregate at GMOC monitoring somebody lnevers@bbn.com
Description

While running monitoring test case EG-CT-5 for New Site Confirmation Testing, I found that there is no compute resource aggregate for the FIU ExoGENI rack:

https://gmoc-db.grnoc.iu.edu/protected-openid/index.pl?method=aggregates&search=fiu

There is a FOAM aggregate for FIU.

#177 fixed Set up DNS records for fiu-ssg.exogeni.net somebody tupty@bbn.com
Description

I tried setting up a service check for connectivity to fiu-ssg.exogeni.net, but DNS lookups for that name are failing. Currently, we ping the public-facing interfaces of the SSGs and the head nodes for the BBN (bbn-ssg.exogeni.gpolab.bbn.com) and RENCI (rci-ssg.exogeni.net) ExoGENI racks by their host names, so this SSG's public interface should be put into DNS as well.
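A service check for this kind of failure amounts to a DNS resolvability probe. The sketch below is illustrative only (the function name and structure are assumptions, not the actual GPO monitoring code):

```python
import socket

def resolves(hostname):
    """Return True if the hostname resolves in DNS, False on lookup failure."""
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False

# Per the ticket, the BBN and RENCI SSG names resolve, while
# fiu-ssg.exogeni.net will not until its DNS record is added.
```

A monitoring system would alert whenever the probe returns False for a name that is expected to be in DNS.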

#176 fixed Poor UDP performance in ExoGENI racks somebody lnevers@bbn.com
Description

In a scenario where VMs are requested within one rack without specifying the link "capacity", the link is allocated as best effort. Testing found that VMs requested via the ExoSM consistently had half the throughput of VMs requested via the local SM when no capacity is specified. Here are some throughput measurements collected in the GPO and RENCI racks:

==> Measurements collected in the RENCI rack - VM m1.small to VM m1.small

Results for sliver requested from ExoSM:
    1 TCP client: 1.52 Gbits/sec
    5 TCP clients: 1.84 Gbits/sec
    10 TCP clients: 2.10 Gbits/sec
    1 UDP client: Failed (requested 10 Mbits/sec, iperf server shows around 4 Mbits/sec)

Results for sliver requested from RENCI SM:
    1 TCP client: 3.49 Gbits/sec
    5 TCP clients: 4.89 Gbits/sec
    10 TCP clients: 4.88 Gbits/sec
    1 UDP client: 10.0 Mbits/sec; 101 Mbits/sec; 1.04 Gbits/sec

==> Measurements collected in GPO rack - VM m1.small to VM m1.small

Results for sliver requested from ExoSM:
    1 TCP client: 2.77 Gbits/sec
    5 TCP clients: 5.28 Gbits/sec
    10 TCP clients: 5.39 Gbits/sec
    1 UDP client: 3.87 Mbits/sec (requested 10 Mbits/sec)

Results for sliver requested from GPO SM:
    1 TCP client: 6.92 Gbits/sec
    5 TCP clients: 9.39 Gbits/sec
    10 TCP clients: 9.39 Gbits/sec
    1 UDP client: Failed (requested 10 Mbits/sec, iperf server shows around 4 Mbits/sec)

To avoid the lower performance seen in the VMs allocated by the ExoSM, experimenters can specify a capacity for the link connecting the VMs within the rack. It was verified that setting the link capacity to 10 Gb/s resulted in throughput around 5 Gbps.
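The per-client figures above are the kind reported on iperf summary lines. A small helper like the following (a sketch, not the tooling actually used in this test) can total them when comparing ExoSM and local-SM slivers:

```python
import re

# Matches the bandwidth field of an iperf summary line, e.g.
# "[  3]  0.0-10.0 sec  1.77 GBytes  1.52 Gbits/sec"
_BW = re.compile(r"([\d.]+)\s+([GM])bits/sec")

def total_gbps(summary_lines):
    """Sum reported bandwidth over summary lines, in Gbits/sec.

    Assumes the caller passes one summary line per client; for parallel
    runs, iperf also prints a [SUM] line, which should be excluded to
    avoid double counting.
    """
    total = 0.0
    for line in summary_lines:
        m = _BW.search(line)
        if m:
            value = float(m.group(1))
            total += value if m.group(2) == "G" else value / 1000.0
    return total
```

For example, feeding it the ten per-client lines of a parallel run gives the aggregate figures quoted above.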
