Changes between Version 5 and Version 6 of QinqResults


Timestamp: 06/28/10 14:12:26
Author: jwilliams@bbn.com
Comment: sanitized report.

  • QinqResults

    v5 v6  
    134134Verify frames originating from the test host are tagged as appropriate using the Wireshark host (a capture sketch follows the vendor notes below).
    135135
    136  * HP:   cvlans and svlans are used to distinguish port type. svlan trunk (QinQ) ports use the 0x88a8 value.
     136 * HP: cvlans and svlans are used to distinguish port type. svlan trunk (QinQ) ports use the 0x88a8 value.
    137137 * NEC: configures a QinQ trunk-port explicitly with the setting "switch dot1q ethertype 8a88" for a given port.
    138138 * CISCO: The access port for the QinQ portion needs to be configured for QinQ; the QinQ trunk (ES) port is configured with 0x88a8.
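A quick way to spot-check the tagging from the Wireshark host is a display filter on the outer ethertype. A minimal sketch, assuming a recent tshark (1.10 or later for `-Y`) and a hypothetical capture interface eth1:
{{{
# match S-tagged (QinQ) frames; 0x8a88 also catches the NEC's configured value
tshark -i eth1 -Y 'eth.type == 0x88a8 or eth.type == 0x8a88'
}}}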
     
    385385[[Image(source:/trunk/wikifiles/QinqResults/NEC_VLAN128NoQinQ.jpg)]]
    386386
    387 VLAN 128 is capable of being sent out port 1; however, its tagged type is "0x8a88" (service VLAN), not "0x8100" (customer VLAN). This implies that the port on the other side of the trunk must be configured as a service VLAN port; sending the VLAN as a "normal" VLAN isn't possible in this configuration.
     387VLAN 128 is capable of being sent out port 1; however, its tagged type is "0x8a88" (service VLAN), not "0x8100" (customer VLAN). This implies that the port on the other side of the trunk must be configured as a service VLAN port; sending the VLAN as a "normal" VLAN isn't possible in this configuration. A port is thus dedicated to either normal dot1q or QinQ tagging - it can't do both.
    388388
    389389
     
    392392 667 VLAN0667         Up      0/1,0/6,0/13-15
    393393}}}
    394 From VLAN 667's port participation it appears that there's no distinction between customer and service VLANs, despite various ports being configured to tag for QinQ vs. normal tagging. If all the ports are indeed in the same VLAN, the VLAN trunk (aka "jumper) from ports 13<->14 should create a broadcast storm, which would be observable on port 1 (STP is disabled).
     394From VLAN 667's port participation it appears that there's no distinction between customer and service VLANs, despite various ports being configured to tag for QinQ vs. normal tagging. If all the ports are indeed in the same VLAN, the VLAN trunk (aka "jumper") from ports 13<->14 should create a broadcast storm, which would be observable on port 1 (STP is disabled).
    395395
    396396{{{
     
    401401
    402402[[Image(source:/trunk/wikifiles/QinqResults/NEC_SameIds_broadcast_storm.jpg)]]
    403 A broadcast storm was induced by a single ping packet. The image also shows the continual nesting of VLAN headers as the frame continues to loop between access and trunk ports. From this, it doesn't appear possible to tunnel the same customer and service VLAN using one switch. This is an artifact caused by trying to emulate, on a single physical switch, service and customer VLANS of the with the same VLAN ID bridged with an Ethernet cable.
     403
     404Indeed, a broadcast storm was induced by a single ping packet. The image also shows the continual nesting of VLAN headers as the frame continues to loop between access and trunk ports. From this, it doesn't appear possible to tunnel the same customer and service VLAN using one switch. This is an artifact caused by trying to emulate, on a single physical switch, service and customer VLANs with the same VLAN ID bridged with an Ethernet cable.
    404405
    405406----
     
    566567VLAN type mismatch. VID 667 is of type 'svlan'.
    567568}}}
    568 This was not as expected; HP's distinction between customer and service VLANs seems to imply (besides the implicit type tagging on trunk ports) that this configuration would be possible. Though surprising, this behavior is consistent with the NEC.
    569 
    570569
    571570----
     
    573572== Cisco ==
    574573=== Overview ===
    575 The cisco 3750 requires the SFP ES module, as well as the appropriate licensing, to be installed for QinQ operation. See http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps5532/prod_qas09186a00801eb822.html  Note that all configuration in this section refers to "port 1", i.e. the ES port (!GigabitEthernet1/1/1). Ticket #533 outlines the procedure for getting the Cisco configured properly.
    576 
    577 
     574The Cisco 3750 requires the SFP ES module, as well as the appropriate licensing, to be installed for QinQ operation. See http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps5532/prod_qas09186a00801eb822.html  Note that all configuration in this section refers to "port 1", i.e. the ES port (!GigabitEthernet1/1/1).
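As a hedged aside, the presence of the ES uplink module and the running IOS feature set can be confirmed with standard IOS show commands (the prompt is hypothetical; output varies per chassis):
{{{
Switch# show inventory
Switch# show version
}}}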
    578575
    579576=== Configuration ===
    580  
    581577 '''Trunk Negotiation'''::
    582578To allow for QinQ, the Cisco Discovery Protocol (CDP) should be disabled per dot1q (normal) VLAN trunk port.
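A minimal sketch of that per-port setting, using standard IOS syntax (the interface name here is hypothetical, not taken from the test configuration):
{{{
 interface FastEthernet1/0/5
  switchport mode trunk
  no cdp enable
}}}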
     
    602598
    603599 ''' QinQ access ports'''::
    604 The QinQ Access ports (which connect to the standard dot1q trunk ports) are configured by using `switchport mode dot1q-tunnel.
     600The QinQ Access ports (which connect to the standard dot1q trunk ports) are configured by using `switchport mode dot1q-tunnel`.
    605601{{{
    606602 interface FastEthernet1/0/6
     
    611607
    612608 ''' QinQ Trunk ports'''::
    613 The QinQ trunk ports are set to participate in the same VLANs as the QinQ access ports. The trunk ethertype is set to the QinQ type: 0x88A8
     609The QinQ trunk ports are set to participate in the same VLANs as the QinQ access ports. The trunk ethertype is set to the QinQ type: 0x88A8.
    614610{{{
    615611 interface GigabitEthernet1/1/1
     
    622618
    623619=== QinQ Tagging ===
    624 Wireshark sees the QinQ double-tagged frame 667:2702 (i.e. 667 is the outer VLAN, 3702 is the "wrapped" VLAN). Wireshark reports the correct QinQ frame header type
     620Wireshark sees the QinQ double-tagged frame 667:2702 (i.e. 667 is the outer VLAN, 3702 is the "wrapped" VLAN). Wireshark reports the correct QinQ frame header type.
    625621{{{
    626622Ethernet frame
     
    641637
    642638=== Same Inner and Outer VLAN Tags ===
    643 Similar to the NEC configuration, ports with identical inner and outer VLAN IDs, when connected together via a jumer, cause a broadcast storm.
     639Similar to the NEC configuration, ports with identical inner and outer VLAN IDs, when connected together via a jumper, cause a broadcast storm.
    644640
    645641----
     
    667663
    668664'''NOTE'''
    669 Poblano's current configuration for QinQ , due to previous experimental configuration, has a QinQ MTU of 1508 and  a VLAN Trunk MTU of 1504 - This configuration is different than 802.3ac, but may be correct for this vendor.  Habanero doesn't currently have any explicit MTU settings.   Setting Poblano's QinQ MTU to 1504 and the VLAN trunk MTU to 1500 results in no "Echo" nor "Destination Unreachable (Fragmentation required, and DF flag set)" responses. This seems to imply that the ICMP Echo Request successfully made it to the destination host, but the ICMP Echo Response was dropped when leaving  the HP's QinQ port.  More investigation is required.
     665Poblano's configuration in this report shows a QinQ MTU of 1508 and a VLAN trunk MTU of 1504; this was due to an earlier experimental configuration. Setting only the QinQ trunk to 1504 and leaving the dot1q trunks at 1500 is the expected configuration.
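As a sanity check on those numbers, each stacked 802.1Q tag adds 4 bytes to the frame. Assuming the switch already accommodates a single tag at the base MTU:
{{{
dot1q trunk: 1500 byte payload + one 4 byte C-tag      -> base MTU 1500 suffices
QinQ trunk:  1500 byte payload + C-tag + S-tag (4 + 4) -> one extra tag, so MTU 1500 + 4 = 1504
}}}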
    670666
    671667See the NEC IP8800 Manual: Configuration Settings, Vol. 3, section 1.4.3 for more information.
     
    701697
    702698'''TCP'''[[BR]]
    703 Naboo's VM hosts were capping out at ~430 Mbps for TCP traffic (+/- 7Mbps based on quick scanning of my iperf log files per 10sec over 10 minutes) This is a limitation of Naboo (VM server) and is not a limitation of any DUTs. This was with only 1 pair communicating - full 1Gb capacity was available. Testing both pairs over QinQ still resulted in transmission of ~430Mbps per pair (logged every minute over 8 hours). I noticed no downward performance trend - but again I am currently eyeballing. With two end-to-end pairs, we're still under the max capacity of the link.
     699Naboo's VM hosts were capping out at ~430 Mbps for TCP traffic (+/- 7Mbps based on quick scanning of my iperf log files per 10sec over 10 minutes). This is a limitation of Naboo (VM server) and is not a limitation of any DUTs. This was with only 1 pair communicating - full 1Gb capacity was available. Testing both pairs over QinQ still resulted in transmission of ~430Mbps per pair (logged every minute over 8 hours). I noticed no downward performance trend - but again I am currently eyeballing. With two end-to-end pairs, we're still under the max capacity of the link.
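For reference, a sketch of the kind of iperf run described above; the address, duration, and reporting interval here are assumptions, not values from the logs:
{{{
# server side (on one VM host)
iperf -s
# client side: 10 minute TCP run, reporting every 10 seconds
iperf -c 10.42.0.2 -t 600 -i 10
}}}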
    704700
    705701{{{
     
    966962=== Useful Commands ===
    967963The following are some notes taken while learning the NEC switch syntax:
    968 See ["hwNecIP8800"] for more discussion.
    969 
    970964'''Getting started'''
    971965 * login: operator
     
    1004998
    1005999=== Current Configuration ===
    1006 '''NOTE''' the actual password for the HP has been replaced with XXXXX for display. If you intend to use this output as a configuration, you must replace XXXXX with the appropriate password.
     1000'''NOTE''' The actual password for the HP has been replaced with XXXXX for display. If you intend to use this output as a configuration, you must replace XXXXX with the appropriate password.
    10071001{{{
    10081002habanero# show running-config