Reverse Proxy with HAProxy

X-Forwarded-Port and X-Forwarded-Proto Headers

Add the following to the backend configuration:

backend nodes
    …
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    …

This configuration stems from a load-balancing setup, but the lines relevant for us are:

  • http-request set-header X-Forwarded-Port %[dst_port]

    • We explicitly set the X-Forwarded-Port header so that our application knows which port to use when generating URLs and redirects. Note that we use the dst_port ("destination port") fetch, which is the destination port of the client's connection, i.e. the port the client connected to on the proxy.

  • http-request add-header X-Forwarded-Proto https if { ssl_fc }

    • We add the X-Forwarded-Proto header and set it to https if the client connection was made over TLS (detected via ssl_fc). Like the X-Forwarded-Port header, this helps the web application determine which scheme to use when building URIs and sending redirects (Location headers); see the sketch after this list for the plain-HTTP case.

  • It is also recommended to set option forwardfor as in the example, so that HAProxy adds an X-Forwarded-For header with the client's IP address to forwarded requests.
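
If Cadenza Web is also reachable over plain HTTP through the proxy, the scheme can be set explicitly for that case too. The following line is an optional sketch, not part of the original example; it would go into the same backend block:

    http-request add-header X-Forwarded-Proto http if !{ ssl_fc }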

The Host header is set by the client.

On the Tomcat side, the RemoteIpValve needs to be configured to read the values from these two headers.
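
As an illustration, a corresponding RemoteIpValve entry in Tomcat's conf/server.xml could look roughly like the following sketch. The internalProxies value is an assumption and must match the IP address of the HAProxy host:

<!-- placed inside the <Engine> or <Host> element of conf/server.xml -->
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       remoteIpHeader="X-Forwarded-For"
       protocolHeader="X-Forwarded-Proto"
       portHeader="X-Forwarded-Port"
       internalProxies="192\.168\.0\.10" />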

Performance Tuning

Maximum Number of Parallel Connections

In the global section, set the maximum number of connections higher, for example:

global
  …
  maxconn 5000
  …
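
To check which connection limit is actually in effect at runtime, the HAProxy runtime API can be queried. This is an optional sketch: it assumes a stats socket is configured in the global section, and the socket path used here is only an example:

global
  …
  stats socket /run/haproxy/admin.sock mode 660 level admin
  …

# On the HAProxy host:
echo "show info" | socat UNIX-CONNECT:/run/haproxy/admin.sock stdio | grep -i maxconn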

Connection Timeout

This is the maximum time HAProxy will wait for a connection attempt to Cadenza Web to succeed. Since establishing the connection may take a bit longer in high-load scenarios, this value should not be set too low. For Cadenza Web the following may be sensible:

defaults
  timeout connect 10s

Client Timeout

This is the maximum time the client is allowed to remain inactive when it is expected to acknowledge or send data, essentially the maximum time to wait until we receive the request headers from the client.

defaults
  timeout client 1m

Server Timeout

This is the maximum time we are willing to wait for a response from the server (Cadenza Web). Since some operations in Cadenza can be quite long-running, depending on user queries, this timeout needs to be appropriately long. It should be roughly the time we are realistically willing to wait for a response from Cadenza:

defaults
  timeout server 2m
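
Taken together, the tuning settings from this section end up in haproxy.cfg roughly as follows (only the values discussed above are shown; the rest of the configuration is omitted):

global
  …
  maxconn 5000
  …

defaults
  …
  timeout connect 10s
  timeout client 1m
  timeout server 2m
  …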