There are many ways to spread load across a server farm, but LVS-TUN does the best job for this setup. LVS-TUN forwards requests to the real servers over IP-IP tunnels, and each real server answers the client directly through its own public interface. The real servers can also sit in different data centers, or even different geographic locations. This prevents the balancer's uplink from flooding, unlike LVS-NAT and LVS-DR, which cannot reach real servers outside a single physical network segment.
Our sample configuration is a small starting setup: two failover load balancers running ldirectord and heartbeat, with LVS-TUN IP-IP tunneling configured on two web nodes in different data centers.
You’ll need a virtual IP address (VIP) that can float between the load balancers and is also configured on every web node. It works like this: the active load balancer receives traffic on the VIP and forwards each request to a web node, which replies to the client using the VIP as its source address. Heartbeat determines which load balancer holds the VIP at any one time. The web nodes must keep the VIP silent and never answer ARP requests for it.
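On each web node, the requirements above usually translate into bringing the VIP up on the IP-IP tunnel device and suppressing ARP for it. A minimal sketch of that configuration, assuming a hypothetical VIP of 192.0.2.100 (substitute your own):

```shell
# Load the IP-IP tunnel module and bind the VIP to tunl0
# (192.0.2.100 is a placeholder VIP, not from the original article).
modprobe ipip
ifconfig tunl0 192.0.2.100 netmask 255.255.255.255 up

# Keep the node silent about the VIP: never answer or announce ARP for it.
sysctl -w net.ipv4.conf.tunl0.arp_ignore=1
sysctl -w net.ipv4.conf.tunl0.arp_announce=2
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2

# Tunneled packets arrive addressed to the VIP; relax the reverse-path
# filter on the tunnel device so the kernel does not drop them.
sysctl -w net.ipv4.conf.tunl0.rp_filter=0
```

To survive a reboot, the same settings would go into /etc/sysctl.conf and a network startup script.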
Install necessary packages
# yum -y install \
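The package list on the continuation line is cut off here. Judging from the heartbeat-2.1.3 and heartbeat-ldirectord-2.1.3 config paths copied below, it likely included at least the following; ipvsadm is my assumption, since ldirectord drives it:

```shell
# Hypothetical reconstruction of the truncated install line above.
yum -y install heartbeat heartbeat-ldirectord ipvsadm
```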
Configuring the load balancers
Copy default config files…
# cp /usr/share/doc/heartbeat-2.1.3/ha.cf /etc/ha.d/
# cp /usr/share/doc/heartbeat-2.1.3/authkeys /etc/ha.d/
# cp /usr/share/doc/heartbeat-2.1.3/haresources /etc/ha.d/
# cp /usr/share/doc/heartbeat-ldirectord-2.1.3/ldirectord.cf /etc/ha.d/
Create an HTML file called httpcheck.html whose content is “OK”. Every two seconds ldirectord will GET /httpcheck.html and check that it receives “OK”, and it will drop a real server after 10 seconds without a successful response. Note that the real-server IPs are on different subnets; the real servers don’t need to sit together in one place, so one could be in the USA and another in Singapore. The load-balancing scheduler is simple round-robin.
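Putting those health-check settings into /etc/ha.d/ldirectord.cf gives something like the following sketch. The VIP and real-server addresses are placeholders, not from the original article; the `ipip` keyword on each `real=` line selects LVS-TUN forwarding:

```
# Drop a real server after 10 seconds without a good check,
# probing every 2 seconds.
checktimeout=10
checkinterval=2
autoreload=yes
quiescent=no

# Hypothetical VIP; the two real servers sit in different data centers.
virtual=192.0.2.100:80
        real=203.0.113.10:80 ipip
        real=198.51.100.20:80 ipip
        service=http
        request="httpcheck.html"
        receive="OK"
        scheduler=rr
        protocol=tcp
        checktype=negotiate
```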
Setup Heartbeat
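As a preview of this step, the floating VIP and the ldirectord service are typically declared together in /etc/ha.d/haresources, identically on both balancers. A sketch, where the node name and VIP are placeholders:

```
# Preferred node, the floating VIP, and ldirectord as a managed resource.
lb1 192.0.2.100 ldirectord::ldirectord.cf
```

Heartbeat brings the VIP up on whichever balancer is active and starts ldirectord there, which is how only one node answers for the VIP at a time.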