= OpenFlow NAT Router =

In this section we are going to use OpenFlow to build a router for a network with a private address space that needs a one-to-many NAT (IP masquerade), for example because it is short on public IP space or for security reasons. As shown in the figure below, hosts `inside1` and `inside2` are part of the private network, while host `outside` is outside of it. The LAN has only one public IP address: '''128.128.129.1'''. (The external IPs we use, 128.128.128.0/24 and 128.128.129.0/24, are just an example. If your public IP happens to be in one of these subnets, change them to something else.)

[[Image(http://groups.geni.net/geni/raw-attachment/wiki/JoeSandbox/OpenFlowNATExample/Execute/openflow-nat.png, 50%, nolink)]]

== 1 Test reachability before starting the controller ==

=== 1.1 Login to your hosts ===

To start our experiment we need to SSH into all of our hosts. Depending on which tool and OS you are using, the process for logging in differs slightly. If you don't know how to SSH to your reserved hosts, take a look at [wiki:HowTo/LoginToNodes this page]. Once you have logged in, follow the rest of the instructions.

=== 1.1a Install some software ===

On the `NAT` node run:
{{{
sudo apt-get install python-pip python-dev libxml2-dev libxslt-dev zlib1g-dev
wget https://bootstrap.pypa.io/ez_setup.py -O - | sudo python
sudo pip install debtcollector
sudo pip install oslo.config
}}}

=== 1.2 Test reachability ===

a. First we start a ping from `inside1` to `inside2`, which should work since they are both inside the same LAN.
{{{
inside1:~$ ping 192.168.0.3 -c 10
}}}

b. Then we start a ping from `outside` to `inside1`, which should time out since `outside` has no route to the private network in its routing table. You can use `route -n` to verify that.
{{{
outside:~$ ping 192.168.0.2 -c 10
}}}

c. Similarly, we cannot ping from `insideX` to `outside`.

d. You can also use Netcat (`nc`) to test reachability over TCP and UDP. The behavior should be the same.

== 2 Start the controller to enable NAT ==

=== 2.1 Access a server from behind the NAT ===

You can try to write your own controller to implement NAT. However, we have provided a working controller for you: the file `nat.py` under `/tmp/ryu/`.

a. Start the controller on the `NAT` host:
{{{
nat:~$ cd /tmp/ryu/
nat:~$ ./bin/ryu-manager nat.py
}}}
You should see output similar to the following after the switch connects to the controller:
{{{
loading app nat.py
loading app ryu.controller.dpset
loading app ryu.controller.ofp_handler
instantiating app ryu.controller.dpset of DPSet
instantiating app ryu.controller.ofp_handler of OFPHandler
instantiating app nat.py of NAT
switch connected
}}}

b. On `outside`, start an nc server:
{{{
outside:~$ nc -l 6666
}}}
and start an nc client on `inside1` to connect to it:
{{{
inside1:~$ nc 128.128.128.2 6666
}}}

c. Now send messages between the two, and try the same thing between `outside` and `inside2`.

d. In the terminal where you started the controller, you should see a log line similar to:
{{{
Created mapping 192.168.0.3 31596 to 128.128.129.1 59997
}}}
Note that there should be only one such log line per connection, because the rest of the communication reuses the mapping.
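If you would rather write your own controller, the heart of the translation logic is a table that maps each internal (IP, port) pair to a free port on the public address; the mapping is created by the first outbound packet and reused for the rest of the connection, which is why only one log line appears. Below is a minimal, illustrative sketch of that bookkeeping in plain Python. It is not the provided `nat.py`, and the names (`NatTable`, `PUBLIC_IP`, the port range) are assumptions made for illustration; a real Ryu application would call something like this from its packet-in handler and then install the corresponding flow entries.

{{{#!python
# Minimal sketch of one-to-many NAT bookkeeping (illustrative only, not
# the provided nat.py).  PUBLIC_IP and the port range are assumptions.
import random

PUBLIC_IP = '128.128.129.1'

class NatTable(object):
    """Maps (private_ip, private_port) <-> a port on PUBLIC_IP."""

    def __init__(self, low=50000, high=60000):
        self.low, self.high = low, high
        self.out_map = {}   # (priv_ip, priv_port) -> pub_port
        self.in_map = {}    # pub_port -> (priv_ip, priv_port)

    def outbound(self, priv_ip, priv_port):
        """Return the public port for an outgoing connection, creating
        a new mapping only for the first packet of the connection."""
        key = (priv_ip, priv_port)
        if key not in self.out_map:
            pub_port = random.randint(self.low, self.high)
            while pub_port in self.in_map:        # avoid collisions
                pub_port = random.randint(self.low, self.high)
            self.out_map[key] = pub_port
            self.in_map[pub_port] = key
            print('Created mapping %s %d to %s %d'
                  % (priv_ip, priv_port, PUBLIC_IP, pub_port))
        return self.out_map[key]

    def inbound(self, pub_port):
        """Find which inside host a returning packet belongs to."""
        return self.in_map.get(pub_port)

# The first outbound packet creates the mapping; later packets of the
# same connection reuse it, so only one log line is printed.
table = NatTable()
assert table.outbound('192.168.0.3', 31596) == table.outbound('192.168.0.3', 31596)
}}}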
{{{
#!comment

=== 2.2 Outside source ===

You may be wondering whether it would behave the same if we used one of the `insideX` hosts as the nc server. You can try it; the answer is no, due to the nature of dynamic NAT. However, it will work if we can access the translation table on the switch.

a. Look back at the log we got previously:
{{{
Created mapping 192.168.0.3 31596 to 128.128.129.1 59997
}}}
Now we know there is a mapping between these two address/port pairs.

b. Start an nc server on `inside2` (`inside1` if your mapping shows 192.168.0.2) on the corresponding port:
{{{
inside2:~$ nc -l 31596
}}}

c. Then on `outside`, start an nc client:
{{{
outside:~$ nc 128.128.128.1 59997
}}}

d. `outside` and `inside2` should be able to send messages to each other.

e. A common solution for handling an outside source is to provide some way to manually create a mapping in advance. We will leave implementing this as an exercise for you.
}}}

== 3 Handle ARP and ICMP ==

One of the most common mistakes people make when writing an OpenFlow controller is forgetting to handle ARP and ICMP messages, and then finding that their controller does not work as expected.

=== 3.1 ARP ===

As mentioned before, we should insert rules into the OpenFlow switch that allow ARP packets to pass through, preferably right after the switch connects.

=== 3.2 ICMP ===

Handling ARP is trivial, since NAT does not need to rewrite ARP packets. That is not the case for ICMP: if you only translate TCP/UDP, you will find that you cannot ping between `outside` and `insideX` even though nc works properly. Handling ICMP is also not as straightforward as TCP/UDP, because ICMP packets carry no port numbers to key the mapping on. Our provided solution uses the ICMP echo identifier instead; you may come up with a different approach based on the ICMP sequence number or other fields. (A minimal sketch of the identifier-based idea is included at the end of this page.)

a. On `inside1`, start a ping to `outside`.
{{{
inside1:~$ ping 128.128.128.2
}}}

b. Do the same thing on `inside2`.
{{{
inside2:~$ ping 128.128.128.2
}}}
You should see that both pings work.

c. On `outside`, use `tcpdump` to check the packets it receives.
{{{
outside:~$ sudo tcpdump -i eth1 -n icmp
}}}
You should see two groups of ICMP packets, differentiated by their id field.

= [.. Return to the main page] =
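For completeness, here is the small illustrative sketch of identifier-based ICMP translation mentioned in section 3.2. It is not the code in `nat.py`; the class name `IcmpNatTable` and its details are assumptions made for illustration, and it shows only the table bookkeeping, not the Ryu packet handling.

{{{#!python
# Illustrative sketch of NAT translation for ICMP echo, keyed on the
# echo identifier instead of a port (not the provided nat.py).
PUBLIC_IP = '128.128.129.1'   # the tutorial's public address

class IcmpNatTable(object):
    """Maps (private_ip, echo_id) <-> a rewritten echo id on PUBLIC_IP."""

    def __init__(self):
        self.next_id = 1
        self.out_map = {}   # (priv_ip, echo_id) -> public echo id
        self.in_map = {}    # public echo id -> (priv_ip, echo_id)

    def outbound(self, priv_ip, echo_id):
        """For an echo request leaving the private network: the source IP
        is rewritten to PUBLIC_IP and the id to a unique public value."""
        key = (priv_ip, echo_id)
        if key not in self.out_map:
            pub_id = self.next_id
            self.next_id += 1
            self.out_map[key] = pub_id
            self.in_map[pub_id] = key
        return self.out_map[key]

    def inbound(self, echo_id):
        """For an echo reply arriving at PUBLIC_IP: recover the inside
        host and its original id, or None if there is no mapping."""
        return self.in_map.get(echo_id)

# inside1 and inside2 can ping the same outside host at the same time
# because their echo requests leave with different public identifiers,
# which is why tcpdump on outside shows two ids.
table = IcmpNatTable()
print(table.outbound('192.168.0.2', 1))   # -> 1
print(table.outbound('192.168.0.3', 1))   # -> 2
print(table.inbound(2))                   # -> ('192.168.0.3', 1)
}}}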