
OpenFlow NAT Example


STEPS FOR EXECUTING EXAMPLE

In this section, we are going to use OpenFlow to build a router for a private address space that needs one-to-many NAT (IP masquerade), e.g. because public IPs are scarce or for security reasons. As shown in the figure below, hosts inside1 and inside2 are inside the LAN, while host outside is outside it. The LAN has only one public IP: 128.128.129.1. (The external addresses we use, 128.128.128.0/24 and 128.128.129.0/24, are just an example. If your real public IP happens to fall in these subnets, change them to something else.)

1 Test reachability before starting controller

1.1 Login to your hosts

To start our experiment we need to ssh into all of our hosts. Depending on which tool and OS you are using, the login process differs slightly. If you don't know how to SSH to your reserved hosts, take a look at this page. Once you have logged in, follow the rest of the instructions.

1.2 Test reachability

  1. First we start a ping from inside1 to inside2, which should work since both hosts are inside the same LAN.
    inside1:~$ ping 192.168.0.3 -c 10
    
  1. Then we start a ping from outside to inside1, which should time out because outside has no matching entry in its routing table. You can use route -n to verify that.
    outside:~$ ping 192.168.0.2 -c 10
    
  1. Similarly, we cannot ping from insideX to outside.
  1. You can also use Netcat (nc) to test TCP and UDP reachability; the behavior should be the same. See the example below.
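
For example, a quick UDP check between the two inside hosts (the port number is arbitrary):

    inside2:~$ nc -u -l 6666
    inside1:~$ nc -u 192.168.0.3 6666

Whatever you type on one end should appear on the other. The same pair of commands between outside and an insideX host should fail, just like the pings above.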

2 Start controller to enable NAT

2.1 Inside-initiated connections


You can try to write your own controller to implement NAT. However, we've provided a working controller: the file nat.py under /tmp/ryu/.

  1. Start the controller on the switch host:
    switch:~$ cd /tmp/ryu/
    switch:~$ PYTHONPATH=. ./bin/ryu-manager nat.py
    

You should see output similar to the following log once the switch has connected to the controller:

loading app nat.py
loading app ryu.controller.dpset
loading app ryu.controller.ofp_handler
instantiating app ryu.controller.dpset of DPSet
instantiating app ryu.controller.ofp_handler of OFPHandler
instantiating app nat.py of NAT
switch connected <ryu.controller.controller.Datapath object at 0x2185210>

The last line tells us the switch has connected. The controller asks the switch to forward ARP and IPv6 packets normally, and performs NAT only for TCP, UDP and ICMP.
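
To make this concrete, here is a minimal sketch of how such a controller can translate TCP once a mapping exists. This is only an illustration, assuming OpenFlow 1.3; install_tcp_nat and its parameters are our names, not necessarily how nat.py is written:

    # Illustrative sketch only -- nat.py's actual rule layout may differ.
    # Once a mapping such as 192.168.0.3:31596 <-> 128.128.129.1:59997
    # exists, one flow per direction rewrites addresses in the switch,
    # so later packets of the connection never reach the controller.
    def install_tcp_nat(dp, inside_ip, inside_port,
                        public_ip, public_port, lan_port, wan_port):
        ofp = dp.ofproto
        parser = dp.ofproto_parser
        # Outbound: rewrite the source to the public address/port.
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6,
                                ipv4_src=inside_ip, tcp_src=inside_port)
        actions = [parser.OFPActionSetField(ipv4_src=public_ip),
                   parser.OFPActionSetField(tcp_src=public_port),
                   parser.OFPActionOutput(wan_port)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                             actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=20,
                                      match=match, instructions=inst))
        # Inbound: rewrite the destination back to the inside host.
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6,
                                ipv4_dst=public_ip, tcp_dst=public_port)
        actions = [parser.OFPActionSetField(ipv4_dst=inside_ip),
                   parser.OFPActionSetField(tcp_dst=inside_port),
                   parser.OFPActionOutput(lan_port)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                             actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=20,
                                      match=match, instructions=inst))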

  1. On outside, we start an nc server:
    outside:~$ nc -l 6666
    

and we start an nc client on inside1 to connect to it:

inside1:~$ nc 128.128.128.2 6666
  1. Now send messages back and forth, then try the same thing between outside and inside2.
  1. On the switch terminal where you started your controller, you should see a log line similar to:
    Created mapping 192.168.0.3 31596 to 128.128.129.1 59997
    

Note that there should be only one log line per connection, because the rest of the communication reuses the mapping.
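
Conceptually, the translation table behind that log line can be as simple as a pair of dictionaries. The sketch below uses our own illustrative names (nat_map, allocate), not necessarily what nat.py does internally:

    import random

    nat_map = {}    # (inside_ip, inside_port) -> public_port
    nat_rmap = {}   # public_port -> (inside_ip, inside_port)

    def allocate(inside_ip, inside_port):
        """Return the public port for this inside endpoint."""
        key = (inside_ip, inside_port)
        if key in nat_map:
            return nat_map[key]            # reuse: no new log line
        port = random.randint(49152, 65535)
        while port in nat_rmap:            # skip ports already taken
            port = random.randint(49152, 65535)
        nat_map[key] = port
        nat_rmap[port] = key
        print('Created mapping %s %d to 128.128.129.1 %d'
              % (inside_ip, inside_port, port))
        return port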

2.2 Outside-initiated connections


You may be wondering whether things behave the same if an insideX host acts as the nc server. You can try it; the answer is no. That is the nature of dynamic NAT: the switch only creates a mapping when a connection originates from the inside, so an outside-initiated connection finds no mapping to use.

However, it will work if we can access the translation table on the switch.

  1. Look back at the log line we got previously:
    Created mapping 192.168.0.3 31596 to 128.128.129.1 59997
    

Now we know there is a mapping between these two address/port pairs.

  1. Now we start an nc server on inside2 (inside1 if your mapping shows 192.168.0.2) on the corresponding port:
    inside2:~$ nc -l 31596
    
  1. Then on outside, we start an nc client, pointing it at the public IP and the mapped port:
    outside:~$ nc 128.128.129.1 59997
    
  1. outside and inside2 should be able to send messages to each other.
  1. A common way to handle outside-initiated connections is to provide some means of creating mappings manually in advance (static NAT). We will leave implementing it as an exercise; a sketch of the idea follows this list.
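
As a hint for the exercise: with the table layout sketched above, a static mapping is just an entry placed into the same dictionaries before any traffic arrives. The addresses and ports here are only an example:

    # Pre-created (static) mapping: outside clients connecting to
    # 128.128.129.1:59997 always reach 192.168.0.3:31596, even before
    # inside2 has sent any packets.
    nat_map[('192.168.0.3', 31596)] = 59997
    nat_rmap[59997] = ('192.168.0.3', 31596)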

3 Handle ARP and ICMP

A very common mistake people make when writing an OpenFlow controller is forgetting to handle ARP and ICMP messages, and then finding that the controller does not work as expected.

3.1 ARP

As we mentioned before, we should insert rules into the OF switch that allow ARP packets to go through, preferably right after the switch connects.
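
For example, a handler along these lines installs an ARP pass-through rule as soon as the switch connects. This is a sketch assuming OpenFlow 1.3 and a switch that supports the NORMAL port (e.g. Open vSwitch); nat.py may do this differently:

    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.lib.packet import ether_types

    # Inside your RyuApp subclass:
    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp = dp.ofproto
        parser = dp.ofproto_parser
        # Let the switch handle ARP with its normal L2 pipeline.
        match = parser.OFPMatch(eth_type=ether_types.ETH_TYPE_ARP)
        actions = [parser.OFPActionOutput(ofp.OFPP_NORMAL)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                             actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))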

3.2 ICMP

Handling ARP is trivial, since NAT does not rewrite ARP. That is not the case for ICMP. If you only translate TCP/UDP, you will find that you cannot ping between outside and insideX even though nc works properly. Handling ICMP is also less straightforward than TCP/UDP, because ICMP carries no port numbers to key a mapping on. Our provided solution makes use of the ICMP echo identifier; you may come up with a different approach, for example one based on the ICMP sequence number. A sketch of the identifier-based idea appears below.
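
The sketch below uses our own names (icmp_map and friends), not nat.py's. Note that standard OpenFlow match fields cover the ICMP type and code but not the echo identifier, so echo traffic typically stays on the controller's slow path, with the table keyed by identifier instead of port:

    icmp_map = {}    # (inside_ip, inside_id) -> public_id
    icmp_rmap = {}   # public_id -> (inside_ip, inside_id)
    next_icmp_id = 1

    def translate_echo_request(inside_ip, inside_id):
        """Outbound echo request: allocate or reuse a public identifier."""
        global next_icmp_id
        key = (inside_ip, inside_id)
        if key not in icmp_map:
            icmp_map[key] = next_icmp_id
            icmp_rmap[next_icmp_id] = key
            next_icmp_id += 1
        return icmp_map[key]

    def translate_echo_reply(public_id):
        """Inbound echo reply: find the inside host, or None to drop."""
        return icmp_rmap.get(public_id)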

  1. On inside1, start a ping to outside.
    inside1:~$ ping 128.128.128.2
    
  1. Do the same thing on inside2.
    inside2:~$ ping 128.128.128.2
    

You should see that both pings work.

  1. On outside, use tcpdump to check the packets it receives.
    outside:~$ sudo tcpdump -i eth1
    

You should see that it is receiving two interleaved streams of ICMP packets, differentiated by their icmp_id values.

Note that, as with TCP/UDP, you cannot start the ping from the outside. The common solution is again to manually map the ping destination to a specific inside IP in advance. We will leave implementing that as an exercise as well.

Next: Teardown Experiment
