Opened 11 years ago

Closed 10 years ago

#1086 closed (fixed)

Unable to exchange Iperf tcp data traffic between PG and IG nodes

Reported by: lnevers@bbn.com Owned by: duerig@flux.utah.edu
Priority: major Milestone:
Component: STITCHING Version: SPIRAL5
Keywords: Network Stitching Cc: duerig@flux.utah.edu, ricci@cs.utah.edu
Dependencies:

Description

Writing ticket to track issue discussed in email with Jonathon Duerig.

Not able to exchange iperf data between PG and IG endpoints for stitched slivers. These are the scenarios I have tried:

Sliver end-points         Ping   ssh (1)   iperf (2)
Utah IG <-> Utah PG       OK     OK        OK
GPO IG  <-> Utah PG       OK     OK        FAILED
GPO IG  <-> Utah IG       OK     OK        FAILED
Utah PG <-> Kentucky PG   OK     OK        FAILED

(1) SSH was used between the two VMs in the sliver to demonstrate TCP connectivity.
(2) iperf was run in both UDP and TCP modes.

In all failed cases, I can see with tcpdump that the iperf client connects to the server and that the server sees the client connection, but the iperf data sent by the client never arrives at the iperf server.
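
For illustration only, here is a hypothetical sketch (not something I ran as part of these tests) that reproduces the same small-versus-large behavior without iperf. Run it with no argument on one VM to listen, and with the listener's address on the other VM to send; if the 64-byte probe is reported received but the 1448-byte one never is, large segments are being dropped somewhere along the path even though the TCP handshake succeeds.

    # Hypothetical sketch: minimal TCP sender/receiver mirroring the observed
    # symptom. connect() succeeds regardless; only payload delivery differs.
    import socket, sys

    PORT = 5002  # arbitrary port chosen for this sketch

    def receive():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("", PORT))
        srv.listen(1)
        conn, peer = srv.accept()
        print("connection from", peer)
        while True:
            data = conn.recv(65536)
            if not data:
                break
            print("received", len(data), "bytes")

    def send(host):
        c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        c.connect((host, PORT))   # handshake succeeds even when large frames are dropped
        for size in (64, 1448):   # small probe first, then a full-MSS-sized write
            c.sendall(b"x" * size)
            print("sent", size, "bytes")
        c.close()

    if __name__ == "__main__":
        receive() if len(sys.argv) == 1 else send(sys.argv[1])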

I have been able to verify via the Internet2 Router Proxy that the iperf data traffic is handled by the VLANs in Internet2 and is delivered end-to-end in ION. The iperf data makes it out of the iperf client rack and through Internet2, but once it gets to the destination rack (or PG) it does not make it to the host.

I am not sure what I can do next. Is there any way the commplex-link definitions could be getting in the way of this traffic?

Change History (4)

comment:1 Changed 11 years ago by lnevers@bbn.com

Jonathon, it seems that this issue is due to the handling of the MTU size. If I reduce the MTU for the iperf traffic, the traffic is delivered.
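
To put a rough number on where the ceiling sits, one approach (a hypothetical, Linux-only sketch, not part of the original debugging session) is to sweep UDP payload sizes with the Don't Fragment bit set toward the iperf server and watch on the far side, with tcpdump or iperf in UDP server mode, which sizes actually arrive. Payloads above the silently-dropped threshold disappear without any error at the sender, which is what makes this failure mode hard to spot.

    # Hypothetical sketch (Linux-only): sweep UDP payload sizes with the
    # Don't Fragment bit set. Port 5001 is classic iperf's default UDP port;
    # the destination host is taken from the command line.
    import socket, sys, time

    host = sys.argv[1]
    port = 5001

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Refuse local fragmentation so each datagram maps to a single frame on the wire.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MTU_DISCOVER, socket.IP_PMTUDISC_DO)
    s.connect((host, port))

    for payload in range(1300, 1501, 20):   # 1300..1500-byte payloads
        try:
            s.send(bytes(payload))
            print("sent %d-byte payload (IPv4 packet ~%d bytes)" % (payload, payload + 28))
        except OSError as exc:               # EMSGSIZE if above the local/path MTU cache
            print("send of %d bytes refused locally: %s" % (payload, exc))
        time.sleep(0.2)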

comment:2 Changed 11 years ago by lnevers@bbn.com

Updating with email exchange that occurred outside of this ticket:

On 7/16/13 11:53 AM, Luisa Nevers wrote:
Did anything happen with this?

I still cannot use the default MTU for Iperf.

Luisa

On 7/1/13 4:29 PM, Luisa Nevers wrote:
>
> When I tested this, I monitored the Internet2 Router Proxy and the packet counts
> seemed to show that the traffic was being delivered end-to-end in ION.
>
> The sliver has an expiration date of July 7th, I will delete it when you are done.
>
> Luisa
>
>
> On 7/1/13 4:08 PM, Robert Ricci wrote:
>> HP claims to support a large enough default MTU for the tag too. I would try going from the same VM to one of the pcpg-i2 nodes using a VLAN tag; if that works, it's probably ION or something on the BBN side (harder to test) limiting the MTU.
>>
>> On Jul 1, 2013, at 2:03 PM, Leigh Stoller <lbstoller@gmail.com> wrote:
>>
>>>> Which nodes are these? I'll look at as much of the path as I can to try to figure out if the fault is in one of our switches (like an HP).
>>> This is between pcvm1-1 on the IG rack and pcvm2-8 at the GPO rack.
>>> Vlan 988 on the Utah side and 3748 on the GPO side.
>>>
>>> Luisa, can you please renew the slice and slivers. Thanks!
>>>
>>> Lbs
>>
>

comment:3 Changed 10 years ago by lnevers@bbn.com

Cc: duerig@flux.utah.edu ricci@cs.utah.edu added

Still not able to use MTU of 1500 for any stitching with an end-point to Utah (IG, PG). What are the plans for this issue?

Additional exchange: on 7/16/13 at 5:43 PM, Robert Ricci wrote:

Not yet. Is this critical for GEC17? If not, it might have to wait until after.

On Jul 16, 2013, at 9:53 AM, Luisa Nevers <lnevers@bbn.com> wrote:

Did anything happen with this?

I still cannot use the default MTU for Iperf.

comment:4 Changed 10 years ago by lnevers@bbn.com

Resolution: fixed
Status: new → closed

The MTU problem has been resolved: the configuration for the connection to "protogeni" has been updated to support jumbo frames.

Problem resolution has been verified; closing ticket.
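
For anyone re-verifying a fix like this, one quick check (a hypothetical sketch in the same spirit as the probe above, not the verification procedure actually used) is to send a single 1472-byte UDP payload with Don't Fragment set, which produces a full 1500-byte IPv4 packet, and confirm on the far side that it arrives; before the configuration change such a packet would have vanished in transit.

    # Hypothetical sketch (Linux-only): send one full-sized, non-fragmentable
    # datagram. 1472 bytes of payload + 8 (UDP) + 20 (IPv4) = a 1500-byte packet.
    import socket, sys

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MTU_DISCOVER, socket.IP_PMTUDISC_DO)
    s.connect((sys.argv[1], 5001))   # classic iperf UDP port; adjust as needed
    s.send(b"\0" * 1472)
    print("1500-byte packet sent; confirm receipt with tcpdump or iperf on the far end")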
