OpenFlow NAT Router
In this section, we are going to use OpenFlow to build a router for a network with a private address space that needs a one-to-many NAT (IP masquerade), e.g. because it is short on public IP space or for security reasons. As shown in the figure below, hosts inside1 and inside2 are part of the private network, while host outside is outside it. The LAN has only one public IP, 128.128.129.1. (The external IPs we use, 128.128.128.0/24 and 128.128.129.0/24, are just an example. If your public IP happens to be in one of these subnets, change them to others.)
1 Test reachability before starting controller
1.1 Log in to your hosts
To start our experiment we need to ssh into all of our hosts. Depending on which tool and OS you are using, there is a slightly different process for logging in. If you don't know how to SSH to your reserved hosts, take a look at this page. Once you have logged in, follow the rest of the instructions.
1.1a Install some software
On the NAT node run:
wget https://bootstrap.pypa.io/ez_setup.py -O - | sudo python
sudo pip install oslo.config
1.2 Test reachability
- First we start a ping from inside1 to inside2, which should work since they are both inside the same LAN.
  inside1:~$ ping 192.168.0.3 -c 10
- Then we start a ping from outside to inside1, which should time out, as there is no routing information for the private network in its routing table. You can use route -n to verify that.
  outside:~$ ping 192.168.0.2 -c 10
- Similarly, we cannot ping from insideX to outside (128.128.128.2).
- You can also use Netcat (nc) to test reachability over TCP and UDP, as in the OpenFlow Firewall tutorial. The behavior should be the same (no connection).
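If you would rather script these TCP checks than run nc by hand, the same test can be sketched with Python's standard socket module. This is just a convenience sketch, not part of the tutorial's provided code; the address and port below are the ones used later in this tutorial.

```python
import socket

def tcp_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to (host, port) succeeds, else False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, ...
        return False

# Usage (from an inside host, with a server listening outside):
#   tcp_reachable("128.128.128.2", 6666)
```

Before the NAT controller is started this should return False from the inside hosts, and True once translation is working and a server is listening.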
2 Start controller to enable NAT
2.1 Access a server from behind the NAT
You can try to write your own controller to implement NAT. However, we have provided a functional controller for you: a file called nat.py under /tmp/ryu/.
- Start the controller on the NAT host:
  nat:~$ cd /tmp/ryu/
  nat:~$ ./bin/ryu-manager nat.py
You should see output similar to the following log after the switch connects to the controller:
loading app nat.py
loading app ryu.controller.dpset
loading app ryu.controller.ofp_handler
instantiating app ryu.controller.dpset of DPSet
instantiating app ryu.controller.ofp_handler of OFPHandler
instantiating app nat.py of NAT
switch connected <ryu.controller.controller.Datapath object at 0x2185210>
- On outside, we start an nc server:
  outside:~$ nc -l 6666
  and we start an nc client on inside1 to connect to it:
  inside1:~$ nc 128.128.128.2 6666
- You should now be able to type a message in either nc session and see it appear in the other session. Now, try the same thing between outside and inside2.
- On nat, where you started the controller, you should see a log similar to:
  Created mapping 192.168.0.3 31596 to 128.128.129.1 59997
  Note that there should be only one such log line per connection, because the rest of the communication reuses the mapping.
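The mapping the controller logs can be pictured as a small table keyed by the inside (IP, port) pair, with one public port allocated per flow. The sketch below is only an illustration of this one-to-many port-mapping idea, not the actual logic inside nat.py; the class name and port range are invented for the example.

```python
class NatTable:
    """Toy one-to-many NAT table: maps (inside_ip, inside_port) to a
    public port on a single public IP, reusing existing mappings."""

    def __init__(self, public_ip, port_range=range(50000, 60000)):
        self.public_ip = public_ip
        self.free_ports = list(port_range)
        self.out_map = {}   # (inside_ip, inside_port) -> public_port
        self.in_map = {}    # public_port -> (inside_ip, inside_port)

    def translate(self, inside_ip, inside_port):
        """Rewrite an outbound flow; allocate a public port on first use."""
        key = (inside_ip, inside_port)
        if key not in self.out_map:          # first packet of this flow
            public_port = self.free_ports.pop(0)
            self.out_map[key] = public_port
            self.in_map[public_port] = key
            print("Created mapping %s %d to %s %d"
                  % (inside_ip, inside_port, self.public_ip, public_port))
        return self.public_ip, self.out_map[key]

    def reverse(self, public_port):
        """Look up the inside host for a returning packet."""
        return self.in_map.get(public_port)
```

Because translate() only allocates on the first packet of a flow, a mapping is created (and logged) once and then reused, which matches the single log line per connection you see from the controller.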
3 Handle ARP and ICMP
One common mistake experimenters make when writing OpenFlow controllers is forgetting to handle ARP and ICMP messages, which causes their controllers not to work as expected.
Handling ARP is trivial in this example, as NAT does not involve ARP. However, that is not the case for ICMP. If you only translate TCP/UDP, you will find that you cannot ping between outside and insideX, even though nc works properly. Handling ICMP is not as straightforward as TCP/UDP, because ICMP packets carry no port numbers to bind a mapping to. Our provided solution makes use of the ICMP echo identifier instead. You could come up with a different approach using ICMP sequence numbers, etc. To see ICMP working, do the following.
- On inside1, start pinging outside.
  inside1:~$ ping 128.128.128.2
- Do the same thing on inside2.
  inside2:~$ ping 128.128.128.2
  You should see that both ping commands work.
- On outside, use tcpdump to check the packets it receives.
  outside:~$ sudo tcpdump -i eth1 -n icmp
  Notice that it receives two groups of ICMP packets, differentiated by id.
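The echo-identifier trick described above can be sketched the same way as the TCP/UDP port mapping: each inside (IP, ICMP id) pair gets a unique public identifier, and echo replies are matched back to the inside host by that identifier. Again, this is only an illustration of the idea, with invented names, not the code in nat.py.

```python
class IcmpIdNat:
    """Toy ICMP NAT: rewrites echo identifiers so that two inside hosts
    pinging the same destination remain distinguishable outside."""

    def __init__(self):
        self.next_id = 1
        self.out_map = {}  # (inside_ip, inside_id) -> public_id
        self.in_map = {}   # public_id -> (inside_ip, inside_id)

    def outbound(self, inside_ip, icmp_id):
        """Return the public echo id to use for an outgoing echo request."""
        key = (inside_ip, icmp_id)
        if key not in self.out_map:
            public_id = self.next_id
            self.next_id += 1
            self.out_map[key] = public_id
            self.in_map[public_id] = key
        return self.out_map[key]

    def inbound(self, public_id):
        """Map the id in an echo reply back to the inside host."""
        return self.in_map.get(public_id)
```

This also explains what tcpdump shows on outside: the two pings arrive with different ICMP ids, because each inside host's echo identifier has been rewritten to a distinct public value.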