Changes between Version 5 and Version 6 of QinqResults
Timestamp: 06/28/10 14:12:26
QinqResults
'''Line 136''' (v5 typo "svlansare" fixed to "svlans are"). Resulting v6 text, with context from lines 134–138:

Verify frames originating from the test host are tagged as appropriate using the Wireshark host.

 * HP: cvlans and svlans are used to distinguish port type. svlan trunk (QinQ) ports use the 0x88a8 value.
 * NEC: configures a QinQ trunk-port explicitly with the setting "switch dot1q ethertype 8a88" for a given port.
 * CISCO: The access port for the QinQ portion needs to be configured for QinQ; the QinQ trunk (ES) port is configured with 0x88a8.

'''Line 387''' (v6 appends the final sentence). v6 text, with context from line 385:

[[Image(source:/trunk/wikifiles/QinqResults/NEC_VLAN128NoQinQ.jpg)]]

VLAN 128 is capable of being sent out port 1; however, its tagged type is "0x8a88" (service VLAN), not "0x8100" (customer VLAN). This implies that the switch on the other side of the trunk must treat it as a service VLAN; sending the VLAN as a "normal" VLAN isn't possible in this configuration. A port is therefore dedicated to either normal dot1q or QinQ; it can't do both.

'''Line 394''' (v5's unbalanced quote in `(aka "jumper )` fixed to `(aka "jumper")`). v6 text, with context from lines 392–393:

{{{
667   VLAN0667   Up   0/1,0/6,0/13-15
}}}

From VLAN 667's port participation it appears that there's no distinction between customer and service VLANs, despite various ports being configured to tag for QinQ vs. normal tagging.
If all the ports are indeed in the same VLAN, the VLAN trunk (aka "jumper") from ports 13<->14 should create a broadcast storm, which would be observable on port 1 (STP is disabled).

'''Lines 403–404''' (v6 opens with "Indeed," and fixes "VLANS of the with the same VLAN ID" to "VLANs with the same VLAN ID"). v6 text, with context from line 402:

[[Image(source:/trunk/wikifiles/QinqResults/NEC_SameIds_broadcast_storm.jpg)]]

Indeed, a broadcast storm was induced from a single ping packet. The image also shows the continual nesting of VLAN headers as the frame continues to loop between access and trunk ports. From this it doesn't look possible to tunnel the same customer and service VLAN using one switch. This is an artifact caused by trying to emulate, on a single physical switch, service and customer VLANs with the same VLAN ID bridged with an Ethernet cable.

----

'''Line 568''' (paragraph removed in v6). Context from lines 566–567:

{{{
VLAN type mismatch. VID 667 is of type 'svlan'.
}}}

Removed v5 text: This was not as expected; HP's distinction between customer and service VLANs seems to imply (besides the implicit type tagging on trunk ports) that this configuration would be possible. Though surprising, this behavior is consistent with the NEC.

----

== Cisco ==
=== Overview ===
'''Line 575''' (rewritten in v6; see below). v5 text: The cisco 3750 requires the SFP ES module for QinQ operation. installed as well as the appropriate licensing. See http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps5532/prod_qas09186a00801eb822.html Note that all configuration in this section refers to "port 1" This refers to this ES port (!GigabitEthernet?1/1/1 ).
v6 text: Ticket #533 outlines the procedure for getting the Cisco configured properly.

The Cisco 3750 requires the SFP ES module for QinQ operation installed, as well as the appropriate licensing. See http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps5532/prod_qas09186a00801eb822.html Note that all configuration in this section refers to "port 1"; this refers to the ES port (!GigabitEthernet?1/1/1).

=== Configuration ===
'''Trunk Negotiation'''::
 To allow for QinQ, the Cisco Discovery Protocol (CDP) should be disabled per dot1q (normal) VLAN trunk port.

''' QinQ access ports'''::
 '''Line 604''' (v5's unclosed backtick fixed in v6). v6 text: The QinQ access ports (which connect to the standard dot1q trunk ports) are configured by using `switchport mode dot1q-tunnel`.
{{{
interface FastEthernet1/0/6
…
}}}

''' QinQ Trunk ports'''::
 '''Line 613''' (v6 adds the final period). v6 text: The QinQ trunk ports are set to participate in the same VLANs as the QinQ access ports. The trunk ethertype is set to the QinQ type: 0x88A8.
{{{
interface GigabitEthernet1/1/1
…
}}}

=== QinQ Tagging ===
'''Line 624''' (v6 adds the final period). v6 text: Wireshark sees the QinQ double-tagged frame 667:2702 (i.e. 667 is the outer VLAN, 2702 is the "wrapped" VLAN). Wireshark reports the correct QinQ frame header type.
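The 667:2702 stacking that Wireshark reports can be reproduced by hand. The following is a minimal sketch, not taken from the test setup: plain Python `struct`, illustrative MAC addresses and helper names. It shows the 802.1ad byte layout in which the service tag (TPID 0x88A8) sits outside the customer tag (TPID 0x8100), and also accepts the NEC's nonstandard 0x8A88 when walking the stack.

```python
import struct

DOT1Q = 0x8100    # customer (inner) tag TPID
DOT1AD = 0x88A8   # service (outer) tag TPID; the NEC above uses nonstandard 0x8A88

def add_tag(frame: bytes, vid: int, tpid: int) -> bytes:
    """Push one 4-byte VLAN tag (TPID + TCI) right after the two MAC addresses."""
    tci = vid & 0x0FFF                       # PCP/DEI bits left at zero
    return frame[:12] + struct.pack("!HH", tpid, tci) + frame[12:]

def vlan_stack(frame: bytes) -> list[int]:
    """Return the stacked VLAN IDs, outermost first, stopping at a non-VLAN ethertype."""
    vids, off = [], 12                       # skip dst/src MACs
    while struct.unpack_from("!H", frame, off)[0] in (DOT1Q, DOT1AD, 0x8A88):
        vids.append(struct.unpack_from("!H", frame, off + 2)[0] & 0x0FFF)
        off += 4
    return vids

# Plain IPv4 frame: dst MAC, src MAC, ethertype 0x0800 (payload omitted).
frame = b"\xff" * 6 + b"\x00\x11\x22\x33\x44\x55" + struct.pack("!H", 0x0800)
frame = add_tag(frame, 2702, DOT1Q)          # customer tag first (innermost)
frame = add_tag(frame, 667, DOT1AD)          # then the service tag
assert frame[12:14] == b"\x88\xa8"           # outer TPID at offset 12
assert vlan_stack(frame) == [667, 2702]

# The broadcast-storm captures show this mechanism run amok: every loop
# through an access/trunk pair pushes one more service tag onto the frame.
looped = add_tag(add_tag(frame, 667, DOT1AD), 667, DOT1AD)
assert vlan_stack(looped) == [667, 667, 667, 2702]
```

The `looped` case mirrors the continual nesting of VLAN headers seen in the NEC storm capture earlier; the captured frame dump that follows shows the same stacking as seen on the wire.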
{{{
Ethernet frame
…
}}}

=== Same Inner and Outer VLAN Tags ===
'''Line 643''' (v5's "jum er" fixed to "jumper"). v6 text: Similar to the NEC configuration, ports with identical inner and outer VLAN IDs, when connected together via a jumper, cause a broadcast storm.

----

'''NOTE'''
'''Line 669''' (v5's long note about mismatched MTUs and the ICMP experiment is replaced in v6 with a shorter summary).

 v5 text: Poblano's current configuration for QinQ, due to previous experimental configuration, has a QinQ MTU of 1508 and a VLAN Trunk MTU of 1504 - This configuration is different than 802.3ac, but may be correct for this vendor. Habanero doesn't currently have any explicit MTU settings. Setting Poblano's QinQ MTU to 1504 and the VLAN trunk MTU to 1500 results in no "Echo" nor "Destination Unreachable (Fragmentation required, and DF flag set)" responses. This seems to imply that the ICMP Echo Request successfully made it to the destination host, but the ICMP Echo Response was dropped when leaving the HP's QinQ port. More investigation is required.

 v6 text: Poblano's configuration in this report shows a QinQ MTU of 1508 and a VLAN Trunk MTU of 1504; this was due to an earlier configuration. Setting only the QinQ trunk to 1504 and leaving the dot1q trunks as 1500 is the expected configuration.

See the NEC IP8800 Manual: Configuration Settings, Vol. 3, section 1.4.3 for more information.

'''TCP'''[[BR]]
'''Line 703''' (v6 adds the missing period after "(per 10sec over 10 minutes)"). Text: Naboo's VM hosts were capping out at ~430 Mbps for TCP traffic (+/- 7 Mbps, based on quick scanning of my iperf log files per 10sec over 10 minutes). This is a limitation of Naboo (the VM server) and is not a limitation of any DUTs. This was with only 1 pair communicating - full 1Gb capacity was available. Testing both pairs over QinQ still resulted in transmission of ~430 Mbps per pair (logged every minute over 8 hours). I noticed no downward performance trend - but again I am currently eyeballing.
With two end-to-end pairs, we're still under the max capacity of the link.

{{{
…
}}}

=== Useful Commands ===
The following are some notes taken while learning the NEC switch syntax:

'''Line 968''' (removed in v6): See ["hwNecIP8800"] for more discussion.

'''Getting started'''
 * login: operator

…

=== Current Configuration ===
'''Line 1006''' (v6 capitalizes "The"). '''NOTE''' The actual password for the HP has been replaced with XXXXX for display. If you intend on using this output as a configuration you must replace XXXXX with the appropriate password.
{{{
habanero# show running-config