Policy Routing, Split Access, and route affinity issues

Policy Routing, Split Access, and route affinity issues

Post by Monster 184 » 2015/09/03 14:21:30

Howdy folks - this is my first post on this forum so please be gentle :)

I am trying to set up load-balanced split access as described here: http://lartc.org/howto/lartc.rpdb.multi ... tml#AEN267 on CentOS 7. (Background: this is an R&D project. I am not actually setting up a live connection that others depend on; I am setting up a sandbox to model such a situation, and will likely compare and contrast this approach with some alternatives.)

My sandbox is two CentOS boxes, each with three NICs: eth0, eth1, and eth2. The two eth1s are wired to each other, as are the two eth2s. On Box 1, eth0 is the uplink and gets its IP (from subnet 192.168.1.0/24) via DHCP. On Box 2, eth0 is 192.168.4.1 and has a PC connected to it. Box 1's eth1 and eth2 are 192.168.2.1 and 192.168.3.1 respectively, and Box 2's eth1 and eth2 get their IPs from Box 1's DHCP.

I have SNAT (for static IPs) or MASQUERADE (for DHCP-obtained IPs) set up for all six interfaces via iptables. I have two sandboxes: one where both Box 1 and Box 2 run CentOS 6.7 and one where both run CentOS 7. I have used /etc/sysconfig/network-scripts/route-* and /etc/sysconfig/network-scripts/rule-* files, along with new entries in /etc/iproute2/rt_tables, to manage routing and IP rules on all systems.
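For concreteness, here is roughly what those files look like on Box 2, assuming its eth1 picked up 192.168.2.10 from Box 1's DHCP (the table names T1/T2 and that address are my own placeholders; the eth2/T2 files are analogous):

    # /etc/iproute2/rt_tables -- add one table per link
    1 T1
    2 T2

    # /etc/sysconfig/network-scripts/route-eth1
    192.168.2.0/24 dev eth1 src 192.168.2.10 table T1
    default via 192.168.2.1 table T1

    # /etc/sysconfig/network-scripts/rule-eth1
    from 192.168.2.10 table T1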

On the CentOS 7 systems I have:

1) Removed NetworkManager and reverted to the old initscripts method of setting up the network
2) Disabled firewalld and installed iptables-services so that I can use iptables in the same manner as on CentOS 6 (commands below)
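For anyone following along, those two steps amounted to roughly this, run on each CentOS 7 box:

    # Drop NetworkManager in favor of the classic initscripts method
    yum -y remove NetworkManager
    # Swap firewalld for the iptables-services package
    systemctl stop firewalld
    systemctl disable firewalld
    yum -y install iptables-services
    systemctl enable iptables
    systemctl start iptables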

Now, to my question: The behavior is very different between CentOS 6 and CentOS 7.

In CentOS 6, everything works great... the PC connected to eth0 on Box 2 can browse the web, I can ping end to end or between any two hosts in the system. All packets for a given tracked connection wind up going over the same route. My understanding is that the route cache is responsible for this, as opposed to connection tracking, but if I am wrong please tell me.

In CentOS 7, each packet within a connection gets routed separately, in the round-robin fashion set up via the load balancing described in the lartc.org article. (Again, in CentOS 6, only the first "state NEW" packet in each connection appears to be load-balanced... subsequent "state ESTABLISHED" packets appear to have affinity to the original route.)
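For reference, the balancing route from the LARTC article looks like this with my sandbox's gateways (Box 1's eth1/eth2 addresses) substituted in:

    # Multipath default route on Box 2
    ip route add default scope global \
        nexthop via 192.168.2.1 dev eth1 weight 1 \
        nexthop via 192.168.3.1 dev eth2 weight 1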

Is there a way to instruct the kernel to provide the behavior I observed in CentOS 6, on CentOS 7?

BTW, I *have* tried letting the new behavior prevail and disabling rp_filter as described here: https://access.redhat.com/solutions/53031 so that packets are allowed to "come and go as they please" on either link. That sort of works... I do see occasional lost ICMP packets when I run, say, a 100-ping test, and likewise web browsing slows down in a way that suggests retransmits are happening. I would also like to be able to pin, for example, a UDP stream to a given link if I decide to use this approach in a real environment, in order to decrease the probability of packet reordering.
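In case the details matter, the rp_filter change was just sysctls like these (I disabled it outright per the article; loose mode, value 2, might also do):

    sysctl -w net.ipv4.conf.all.rp_filter=0
    sysctl -w net.ipv4.conf.eth1.rp_filter=0
    sysctl -w net.ipv4.conf.eth2.rp_filter=0

And the flow pinning I have in mind would be something like CONNMARK plus fwmark rules. This is an untested sketch; the mark values and the T1/T2 table names are my own placeholders:

    # Re-apply the connection's saved mark to each forwarded packet
    iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark
    # Unmarked packets belong to new connections: alternate marks 1 and 2
    iptables -t mangle -A PREROUTING -m mark --mark 0 \
        -m statistic --mode nth --every 2 --packet 0 -j MARK --set-mark 1
    iptables -t mangle -A PREROUTING -m mark --mark 0 -j MARK --set-mark 2
    # Remember the mark for the rest of the connection
    iptables -t mangle -A PREROUTING -j CONNMARK --save-mark
    # Route by mark, ahead of the multipath default route
    ip rule add fwmark 1 table T1
    ip rule add fwmark 2 table T2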

I am aware that 7's kernel no longer has the route cache (http://article.gmane.org/gmane.linux.network/238256)... I am guessing this is the driver behind the change...

Any thoughts appreciated! Please LMK if there is any further info I can provide.

[Solved] Policy Routing, Split Access, and route affinity issues

Post by Monster 184 » 2015/09/04 00:56:04

OK, I figured out my problem... most of my NAT rules were completely unnecessary. In fact the only one I needed was at Box 1's uplink. The reason I had them in the first place was that I had a number of port-based FORWARD rules in the iptables filter table, with a default policy of DROP, and packets translated by the one necessary NAT rule were getting dropped at the hops between Box 1 and Box 2. I set the filter FORWARD chain's default policy to ACCEPT, got rid of all the rules in that chain along with all the unnecessary NAT, and my sandbox is once again functional. Quite the learning experience indeed!
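For posterity, what survived boils down to something like this on Box 1 (eth0 being the DHCP uplink; the other interfaces just forward):

    # Forward everything; the only NAT is at the uplink
    iptables -P FORWARD ACCEPT
    iptables -F FORWARD
    iptables -t nat -F POSTROUTING
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    service iptables save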
