Okay, an update to my latest post on HAProxy. It turns out the issue had nothing to do with HAProxy at all: the clustering on the back-ends was not working properly.
So I have reverted HAProxy to its original configuration, from before I went into debug mode.
Note: The Stats page in HAProxy, by the way, is invaluable for verifying that your proxy is working correctly. You can see the load distribution, the back-end health checks, and so on. Plus the GUI is simple, informative and intuitive.
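For reference, enabling the Stats page only takes a few lines; something like the following (the listen address, URI and credentials here are placeholders, not my actual setup):

```
listen stats
    bind *:9000
    mode http
    stats enable
    stats uri /stats
    stats refresh 10s
    stats auth admin:changeme
```

With that in place, browse to port 9000 at /stats and log in to see the front-end and back-end status tables.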
Below is a quick look at the front-end and back-end configurations. In this setup, HAProxy acts as a reverse proxy: it terminates incoming SSL requests (SSL termination), and rather than forwarding to our back-ends in the clear, it re-encrypts and proxies to them over SSL using back-end certificates. This is perhaps a bit unconventional for a typical reverse proxy scenario.
#---------------------------------------------------------------------
# ssl frontend
#---------------------------------------------------------------------
frontend sslfront
    mode http
    option httplog
    bind *:443 ssl crt /etc/haproxy/cert/yourcert.pem
    default_backend sslback
#---------------------------------------------------------------------
# sticky load balancing to back-end nodes based on source ip.
#---------------------------------------------------------------------
backend sslback
    # redirects http requests to https which makes site https-only.
    #redirect scheme https if !{ ssl_fc }
    mode http
    balance roundrobin
    # health check runs over ssl because the server lines carry 'ssl'
    option httpchk GET /ping
    http-check expect string TESTPING
    stick-table type ip size 30k expire 30m
    stick on src
    #stick-table type binary len 32 size 30k expire 30m
    #stick on ssl_fc_session_id
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    # verify none: trust the back-end certs on our own network; point at a ca-file to verify them instead
    server servername01 192.168.20.10:443 ssl verify none check id 1 maxconn 1024
    server servername02 192.168.20.11:443 ssl verify none check id 2 maxconn 1024
    server servername03 192.168.20.12:443 ssl verify none check id 3 maxconn 1024
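A side note on the health check above: it only passes if each back-end node answers GET /ping with a body containing the string TESTPING. A minimal sketch of such an endpoint in Python (the plain-HTTP serving and ephemeral port here are illustrative; the real back-ends answer over SSL on 443):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class PingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # haproxy sends "GET /ping" and expects TESTPING somewhere in the response
        if self.path == "/ping":
            body = b"TESTPING"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, fmt, *args):
        # keep health-check noise out of the logs
        pass

def start_ping_server(port=0):
    # port 0 asks the os for a free port; a real node would listen on 443
    srv = HTTPServer(("127.0.0.1", port), PingHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv

srv = start_ping_server()
with urlopen(f"http://127.0.0.1:{srv.server_port}/ping") as resp:
    print(resp.read().decode())  # -> TESTPING
```

Any response whose body lacks TESTPING (or any non-2xx/3xx status, depending on the check) marks the node down, and the default-server line above then shuts down its existing sessions.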