HAProxy is a powerful and reliable load balancer. I have been using it for more than five years.


I have three API servers, each of which can handle 100 requests at a time, and I use HAProxy to load balance requests between them. Recently I faced an issue caused by a large number of requests. The request rate climbed to around 1000, HAProxy split the traffic equally among the three servers, and the load on each API server was far more than it could handle, so the servers went down. This happened multiple times, and I finally configured HAProxy to limit the maximum number of connections per server. It is just a small configuration change to enable this feature.

backend servers
      server appserver01 192.168.0.100:80 check maxconn 100
      server appserver02 192.168.0.101:80 check maxconn 100
      server appserver03 192.168.0.102:80 check maxconn 100

The maxconn parameter limits the number of connections that a particular server will handle at any point in time; anything beyond this number is queued by HAProxy. Here each backend server is capable of handling 100 requests at a time, so the pool as a whole can handle 300 concurrent requests. Anything beyond 300 is queued and has to wait until a free slot opens up on one of the backend servers.
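
For completeness, a minimal sketch of how this backend section might sit alongside a frontend is shown below. The frontend name, bind port, and balance algorithm are illustrative assumptions; only the backend server lines come from my actual setup.

# illustrative frontend; name and port are assumptions
frontend api_frontend
      bind *:80
      default_backend servers

backend servers
      balance roundrobin
      # each server accepts at most 100 concurrent connections; the rest queue in HAProxy
      server appserver01 192.168.0.100:80 check maxconn 100
      server appserver02 192.168.0.101:80 check maxconn 100
      server appserver03 192.168.0.102:80 check maxconn 100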

There is another important parameter, timeout queue, that should be set along with this connection limit. It defines the maximum time a request will wait in the queue before it times out. An example is given below.

backend servers
      timeout queue 10s
      server appserver01 192.168.0.100:80 check maxconn 100
      server appserver02 192.168.0.101:80 check maxconn 100
      server appserver03 192.168.0.102:80 check maxconn 100

Here a queued request will wait for up to 10 seconds; if it does not get a free slot on any of the servers within that time, it times out.
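
To see the queueing in action, you can enable HAProxy's built-in stats page and watch the Qcur (current queue) and Qmax columns for the backend. A minimal sketch is below; the port and URI are arbitrary choices, not part of my original configuration.

# optional stats page for observing per-server queues; port and URI are assumptions
listen stats
      bind *:8404
      stats enable
      stats uri /stats
      stats refresh 10s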
