Configuring LVS-TUN for web service

There are many ways to spread load across a server farm, but LVS-TUN is well suited to this job. The director forwards requests to the real servers over IP-IP tunnels, and each real server answers the client directly through its own public interface. The real servers can also live in different data centers, or even in different geographic regions. This keeps the director's uplink from becoming a bottleneck, unlike LVS-NAT (where replies must pass back through the director) and LVS-DR (which only works within a single physical network segment).

Our sample setup is a small starting point: two failover load balancers running ldirectord and heartbeat, with LVS-TUN IP-IP tunneling configured on two web nodes in different data centers.

Requirements

You’ll need a virtual IP address (VIP) that can float between the load balancers and is also configured on every web node. The active load balancer receives traffic on the VIP and forwards each request to a web node, which then replies using the VIP as its source address. Heartbeat decides which load balancer holds the VIP at any given time. The web nodes must stay silent on that address and never answer ARP for it.
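To see which load balancer currently holds the VIP, checking the interface is enough (a quick sketch, assuming the VIP 1.2.3.4 and interface eth0 used in the examples below):

# ip addr show eth0 | grep 1.2.3.4

The active director will print the address; the standby will print nothing.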
Install necessary packages

# yum -y install \
heartbeat.i386 \
heartbeat-ldirectord.i386 \
heartbeat-pils.i386 \
heartbeat-stonith.i386 \
ipvsadm.i386

Configuring the load balancers

Copy the default config files:

# cp /usr/share/doc/heartbeat-2.1.3/ha.cf /etc/ha.d/
# cp /usr/share/doc/heartbeat-2.1.3/authkeys /etc/ha.d/
# cp /usr/share/doc/heartbeat-2.1.3/haresources /etc/ha.d/
# cp /usr/share/doc/heartbeat-ldirectord-2.1.3/ldirectord.cf /etc/ha.d/
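The copied ha.cf and authkeys are only samples and still need to be filled in for your pair of directors. A minimal sketch, assuming the hostnames lb01.xyzdomain.net and lb02.xyzdomain.net and a heartbeat link over eth0 (adjust all of these to your own environment):

# cat >>/etc/ha.d/ha.cf<<EOF
logfacility local0
keepalive 2
deadtime 30
udpport 694
bcast eth0
auto_failback on
node lb01.xyzdomain.net
node lb02.xyzdomain.net
EOF

# cat >/etc/ha.d/authkeys<<EOF
auth 1
1 sha1 ReplaceWithASharedSecret
EOF
# chmod 600 /etc/ha.d/authkeys

heartbeat refuses to start unless authkeys is readable only by root, hence the chmod.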

Setup ldirectord

# cat >>/etc/ha.d/ldirectord.cf<<EOF
checktimeout=10
checkinterval=2
autoreload=yes
logfile="local0"
quiescent=yes

virtual=1.2.3.4:80
        real=192.168.0.10:80 ipip
        real=172.19.0.10:80 ipip
        service=http
        request="httpcheck.html"
        receive="OK"
        scheduler=rr
        protocol=tcp
        checktype=negotiate
EOF

Create an HTML file called httpcheck.html whose content is "OK". Every two seconds ldirectord will GET /httpcheck.html on each real server and check that it receives "OK"; if a check does not complete within the 10-second checktimeout, the real server is taken out of rotation. Note that the two real IPs are in different subnets, i.e. the real servers don't need to sit in one place: one could be in the USA and the other in Singapore. The scheduling here is simple round-robin.
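Once the web nodes are set up (later in this article), you can reproduce ldirectord's check by hand; a sketch against the 192.168.0.10 node from the config above:

# curl -s http://192.168.0.10/httpcheck.html
OK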

Setup Heartbeat
We will have ldirectord take care of LVS, so tell heartbeat that lb01 owns the ldirectord resource.

# cat >>/etc/ha.d/haresources<<EOF
lb01.xyzdomain.net 1.2.3.4/24/eth0 ldirectord::/etc/ha.d/ldirectord.cf
EOF

Be sure to replace 1.2.3.4 with your VIP, and give both load balancers the same set of config files before starting heartbeat.
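To bring the pair up, start heartbeat on each director and have it come back at boot; a sketch using the same service/chkconfig tools as the rest of this article:

# service heartbeat start
# chkconfig heartbeat on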

Traffic will come in on the VIP (1.2.3.4) and be redirected via IP-IP tunneling to the two real servers. ldirectord starts together with heartbeat on the first load balancer, and the VIP is brought up according to the settings in /etc/ha.d/haresources. You can check the status by running ipvsadm -L.


# ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP www.xyzdomain.net:http rr
-> www1.localnet:http Tunnel 10 32 43
-> www2.foreignnet:http Tunnel 10 18 42

Now it's time to configure the web nodes.

Setup the web nodes

A web server and the arptables tools are needed. Apache is used here, but it works just as well with nginx, lighttpd or Cherokee.

yum -y install httpd arptables_jf

Quickly set up the health check file that ldirectord will request:

# echo "OK" > /var/www/html/healthcheck.html
# /etc/init.d/httpd start
# chkconfig httpd on
# curl localhost/healthcheck.html
[[email protected] ~]# curl localhost/healthcheck.html
OK

For the IP-IP tunneling to terminate properly on your web nodes, you’ll need to set up a tunnel interface:

# cat /etc/sysconfig/network-scripts/ifcfg-tunl0
DEVICE=tunl0
IPADDR=1.2.3.4
NETMASK=255.255.255.255
BROADCAST=1.2.3.4
ONBOOT=yes

Be sure to replace 1.2.3.4 with your VIP. We'll also need a simple static route for this interface to work properly:

# cat >>/etc/sysconfig/network-scripts/route-tunl0<<EOF
1.2.3.4 dev tunl0
EOF

Be sure to use your virtual IP address in this file. We also need to set some kernel parameters for the new tunnel interface; do this in /sbin/ifup-local:

#!/bin/sh
# called by ifup with the interface name as $1

if [ "$1" = "tunl0" ]; then
    echo 1 > /proc/sys/net/ipv4/conf/tunl0/forwarding
fi

# relax reverse-path filtering on whichever interface was brought up
echo 0 > /proc/sys/net/ipv4/conf/"$1"/rp_filter

Make it executable:

# chmod +x /sbin/ifup-local

Disable rp_filter (reverse path validation) on CentOS 7

Add 90-disable-rp_filter.conf to /etc/sysctl.d/

# cat /etc/sysctl.d/90-disable-rp_filter.conf
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
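
These settings are applied automatically at boot; to load them right away without rebooting, something like:

# sysctl -p /etc/sysctl.d/90-disable-rp_filter.conf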

Keep the tunnel interface silent

We now need to ensure that the web nodes don’t accidentally send out an ARP reply on the network. This would cause them to steal the IP from the load balancers and it would essentially knock your configuration offline. I do this with arptables:

# arptables -A IN -j DROP -d 1.2.3.4
# arptables -A OUT -j DROP -d 1.2.3.4
# /etc/init.d/arptables_jf save
# chkconfig arptables_jf on

There are some elegant ways to do this within your interface configuration files or via adjusting values in /proc, but I’ve found this method to be both simple and reliable. You should now be able to bring up the new interface safely:

# ifup tunl0
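
If you would rather take the /proc route mentioned above instead of arptables, the usual LVS approach is the arp_ignore/arp_announce sysctls; an untested sketch (the file name here is just an example):

# cat >>/etc/sysctl.d/90-lvs-arp.conf<<EOF
# only answer ARP for addresses configured on the receiving interface
net.ipv4.conf.all.arp_ignore = 1
# prefer the interface's own address when sending ARP requests
net.ipv4.conf.all.arp_announce = 2
EOF
# sysctl -p /etc/sysctl.d/90-lvs-arp.conf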

Testing

To test out this configuration, ensure that heartbeat is running on both load balancers and that ipvsadm on the load balancers shows that both of your web nodes are responding. You should be able to pull the healthcheck page through your load balancers with curl:

$ curl http://1.2.3.4/httpcheck.html
OK

If that works, try adding the following line to /etc/httpd/conf/httpd.conf on each web node:

Header add Node "www1"

You'll want to adjust the node name in each web node's configuration so that you can tell which is which (the Header directive comes from mod_headers). Reload your Apache configuration on each web node and then try again:

# curl -i http://1.2.3.4/httpcheck.html 2>/dev/null | grep ^Node
Node: www1
# curl -i http://1.2.3.4/httpcheck.html 2>/dev/null | grep ^Node
Node: www2
# curl -i http://1.2.3.4/httpcheck.html 2>/dev/null | grep ^Node
Node: www1

As you can see, the load balancer is rotating through the web nodes in round-robin fashion. You may also want to run tcpdump to confirm that traffic comes in through the load balancer and goes back out via the web nodes' own interfaces.
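
A sketch of what to look for, assuming eth0 and the addresses used above. On the load balancer, you should only see packets from clients to the VIP; the replies never pass back through the director:

# tcpdump -n -i eth0 host 1.2.3.4 and tcp port 80

On a web node, the forwarded requests arrive encapsulated as IP-IP (protocol 4):

# tcpdump -n -i eth0 ip proto 4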

Try running the following on the load balancer while stopping apache on one of the web nodes:

# watch --interval=1 'ipvsadm -L'

You should see the weight column for that node drop from one to zero when it stops responding properly; because quiescent=yes is set, ldirectord zeroes the weight rather than removing the real server from the table.

Conclusion

Now you have two redundant load balancers that are constantly monitoring the availability of your web nodes. And your web nodes are configured to accept the encapsulated packets from the load balancers via IP-IP tunnel and then respond to the traffic via their own WAN interfaces.
