Backend Pools¶
Any request handled by the routing engine and resulting in a forward
action is effectively forwarded to an origin/upstream server. Upstream servers are organized in backend pools, which are groups of servers that all serve the same type of requests and form a load balancing cluster. Load balancing happens across all servers within a backend pool.
Backend pools can be global or per shard. A global pool is a single cluster of servers that handles certain inbound requests. In sharded environments, upstream services like App Suite Middleware can be split up into multiple deployments, called shards. If a request is supposed to be forwarded to a service within a certain shard, a corresponding backend pool must exist for each available shard.
Note
Requests that have bypassed the routing engine and have been handled by custom inbound filters can also be forwarded to configured backend pools. In that case, the inbound filter implementation is responsible for assigning the appropriate pool.
Configuration¶
Backend pools are configured via backends.yml. Each pool has a type attribute that defines how the actual hosts are determined. The most basic type is static, which allows configuring backend hosts inline in the pool configuration.
Hint
The hosts within a pool can be set using IPv4 or IPv6 notation. If IPv6 notation is used, the address must be enclosed in brackets (see example below).
Note
Whether TLS is used for upstream connections can only be configured per pool, not per server. Therefore it is enforced that all server URIs within a pool are configured with the same protocol, i.e. http or https.
Static pools¶
global:
  - name: 'appsuite-mw'
    type: 'static'
    hosts:
      - 'http://192.168.10.1:8009'
      - 'http://192.168.10.2:8009'
      - 'http://192.168.10.3:8009'
      - 'http://[1234:1234::5678:5678]:8009'
    clientConfig:
      connectTimeout: 5000
      readTimeout: 100000
  - name: 'appsuite-ui'
    type: 'static'
    hosts:
      - 'http://192.168.20.1:8009'
      - 'http://192.168.20.2:8009'
      - 'http://[4321:4321:0:0:0:0:8765:8765]:8009'
Client Configuration¶
The clientConfig attribute contains a set of parameters that influence the performance of a pool. Some of these parameters can be overridden in the routing.yml file, so that the configuration used for a certain route can differ from the overall pool configuration. The parameters are listed below; a configuration sketch follows each list.
Rewritable per route properties¶
Can be defined in backends.yml and overwritten in routing.yml.
- maxConnectionsPerHost
Optional: Maximum number of connections the client opens to a single host.
Default: 50
- connectTimeout
Optional: Maximum time for a connection to be established in ms.
Default: 2000
- readTimeout
Optional: Maximum read duration for a connection in ms.
Default: 5000
- maxAutoRetries
Optional: Maximum number of retries after a failed communication.
Default: 0
- connIdleEvictTimeMilliSeconds
Optional: Maximum time a connection can be idle before it gets evicted in ms.
Default: 30000
- receiveBufferSize
Optional: The size of the receive buffer in bytes.
Default: 32 * 1024
- sendBufferSize
Optional: The size of the send buffer in bytes.
Default: 32 * 1024
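As a sketch of how these parameters fit into a pool definition, the following places them in a pool's clientConfig block in backends.yml, as in the static pool example above. The values are placeholders, not recommendations, and the exact syntax for overriding individual parameters per route in routing.yml is not shown in this section.
global:
  - name: 'appsuite-mw'
    type: 'static'
    hosts:
      - 'http://192.168.10.1:8009'
    clientConfig:
      maxConnectionsPerHost: 100   # raise the per-host connection limit (default 50)
      connectTimeout: 5000         # ms (default 2000)
      readTimeout: 100000          # ms (default 5000)
      maxAutoRetries: 1            # retry once after a failed communication (default 0)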
Pool-wide properties¶
Can be defined in backends.yml
.
- perServerWaterline
Optional: Maximum number of connections per server, per event loop.
Default: 4
- maxRequestsPerConnection
Optional: Maximum number of requests per connection.
Default: 1000
- tcpKeepAlive
Optional: Enables TCP keep-alive to keep idle connections open.
Default: false
- tcpNoDelay
Optional: Sets the TCP_NODELAY option, which disables Nagle's algorithm so that small packets are sent immediately instead of being coalesced.
Default: false
- writeBufferHighWaterMark
Optional: The high water mark of the write buffer. If the number of bytes queued in the write buffer exceeds this value, Channel.isWritable() will start to return false.
Default: 32 * 1024
- writeBufferLowWaterMark
Optional: The low water mark of the write buffer. Once the number of bytes queued in the write buffer exceeded the high water mark and then dropped down below this value, Channel.isWritable() will return true again.
Default: 8 * 1024
- autoRead
Optional: Enables or disables the channel's auto-read function.
Default: false
- concurrencyMaxRequests
Optional: How many concurrent requests are allowed.
Default: 200
- concurrencyProtectEnabled
Optional: Whether concurrency protection is activated.
Default: true
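The pool-wide parameters are assumed to live in the same clientConfig block; only connectTimeout and readTimeout appear in the examples of this section, so treat the following as a sketch with placeholder values.
global:
  - name: 'appsuite-mw'
    type: 'static'
    hosts:
      - 'http://192.168.10.1:8009'
    clientConfig:
      tcpKeepAlive: true              # keep idle TCP connections alive (default false)
      maxRequestsPerConnection: 2000  # default 1000
      concurrencyProtectEnabled: true # default true
      concurrencyMaxRequests: 400     # default 200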
Load Balancing¶
Methods¶
The load balancer is capable of using different configurable methods for server selection per backend pool. This is configured via the loadBalanceMethod
setting.
- round-robin
(default) Chooses the next server on every selection, like ++$select_count % $num_servers.
- connections
Skips servers with a "tripped" circuit breaker and picks the server with the lowest number of concurrent requests.
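A minimal sketch of selecting the method per pool; the host addresses are placeholders, and omitting loadBalanceMethod keeps the round-robin default:
global:
  - name: 'appsuite-mw'
    type: 'static'
    loadBalanceMethod: 'connections'
    hosts:
      - 'http://192.168.10.1:8009'
      - 'http://192.168.10.2:8009'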
Sticky Sessions¶
With sticky sessions, all requests belonging to the same HTTP session are routed to the same backend node. This is realized based on cookies or query parameters of incoming requests. Cookies must be maintained by the actual backend nodes. By default this feature is disabled, but it can be enabled for each backend pool. If no indicator for the actual backend server exists for an incoming request, the configured load balancing method is used to determine the server. The same happens if the server responsible for a session becomes unreachable.
Every backend pool can be configured separately to use sticky sessions or not by setting the stickySession property to true.
Note
If sticky sessions are enabled for a pool, all configured hosts must have a route parameter set that matches the route identifier of the corresponding backend. Effectively, the route identifier must be a suffix of a cookie or query parameter value, separated by a dot: <any-prefix>.<route>. The cookie must be named JSESSIONID, the query parameter must be named jsessionid. An existing cookie wins over the query parameter. If no route identifier can be determined, the standard load balancing mechanisms are applied. For example, a request carrying the cookie JSESSIONID=1a2b3c4d.APP1 would be routed to the host configured with route=APP1.
Example configuration:
global:
  - name: 'appsuite-mw'
    type: 'static'
    stickySession: true
    loadBalanceMethod: 'connections'
    hosts:
      - 'http://localhost:8009?route=APP1'
  - name: 'appsuite-ui'
    type: 'static'
    # this is the assumed default:
    #stickySession: false
    hosts:
      - 'http://localhost:80'
Deactivation of servers¶
A distinct server in a pool can be marked as inactive. This filters the server from the load balancer's list of available servers. The feature is triggered by adding an active query parameter with value false to the host URI. By default, every server without this parameter is considered active.
Note
This mechanism will not trigger any standby servers to take the place of those inactive servers.
Example configuration:
global:
  - name: 'appsuite-mw'
    type: 'static'
    hosts:
      - 'http://192.168.10.1:8009'
      - 'http://192.168.10.2:8009?active=false'
Hot-standby fail-over¶
Hot-standby load balancing comes into play when all primary servers are unavailable. In this case, all requests are routed to the backup servers, which are statically marked as such in the backends.yml file.
Example configuration:
global:
  - name: 'appsuite-mw'
    type: 'static'
    hosts:
      - 'http://192.168.10.1:8009'
      - 'http://192.168.10.2:8009'
      - 'http://192.168.10.3:8009?standby=true'
      - 'http://192.168.10.4:8009?standby=true'
If the primary servers 192.168.10.1:8009 and 192.168.10.2:8009 both become unavailable, requests are only balanced between the standby servers, using the same selection method as for the primaries.
TLS¶
Whether outbound connections use HTTPS depends on the respective backend pool type and configuration.
Outbound HTTPS connections are only established if the TLS certificate of an origin is trusted. By default, trusted certificates are the ones contained in the JRE default trust store $JAVA_HOME/jre/lib/security/cacerts. This can be overridden to use a custom trust store or to trust any certificate. Use the latter option for testing purposes only!
Configuration¶
- proxy.tls.client.truststore
Path to the trust store file that contains trusted certificates.
Default: <empty>
Reloadable: false
EnvVar: PROXY_TLS_CLIENT_TRUSTSTORE
- proxy.tls.client.truststore.type
Type of the trust store, either pkcs12 or jks.
Default: pkcs12
Reloadable: false
EnvVar: PROXY_TLS_CLIENT_TRUSTSTORE_TYPE
- proxy.tls.client.truststore.pass
Password of the trust store.
Default: <empty>
Reloadable: false
EnvVar: PROXY_TLS_CLIENT_TRUSTSTORE_PASS
- proxy.tls.client.trustany
If set to true, no trust store is used and any connection is established without certificate checking. Do not use in production!
Default: false
Reloadable: false
EnvVar: PROXY_TLS_CLIENT_TRUSTANY
- proxy.tls.client.ciphers
Set the cipher suites that should be supported.
Default: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_GCM_SHA384, TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
EnvVar: PROXY_TLS_CLIENT_CIPHERS
- proxy.tls.client.protocols
Set the protocols that should be supported.
Default: TLSv1.2, TLSv1.1, TLSv1
Reloadable: false
EnvVar: PROXY_TLS_CLIENT_PROTOCOLS
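A sketch of a custom trust store configuration using the environment variables listed above; the path and password are placeholders, and how the variables are supplied (container environment, service unit, etc.) depends on the deployment and is not covered in this section.
# Placeholder path and password; adjust for your environment.
PROXY_TLS_CLIENT_TRUSTSTORE=/opt/proxy/security/truststore.p12
PROXY_TLS_CLIENT_TRUSTSTORE_TYPE=pkcs12
PROXY_TLS_CLIENT_TRUSTSTORE_PASS=changeit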
Health Checks¶
To track the lifecycle of servers in a specific server pool, periodic health checks can be configured on a per-pool basis. The health check pings the server's health endpoint and determines whether the server is up and running or currently unavailable.
The following properties can be used for configuration in the backends.yml:
- secure
(Optional) Decides whether to use a secure (TLS) connection or not.
Default: "false"
- path
(Optional) The path of the health check endpoint.
Default: "/health"
- port
(Optional) The port of the health check endpoint.
Default: The port of the backend pool server
- headers
(Optional) HTTP headers to send with the ping.
Default: <empty>
- periodSeconds
(Optional) The time between each ping, in seconds.
Default: 30
- timeoutSeconds
(Optional) The time before a ping times out, in seconds.
Default: 2
- successThreshold
(Optional) The number of successful pings before the server is marked as running.
Default: 1
- failureThreshold
(Optional) The number of failed pings before the server is marked as down.
Default: 1
- pingStrategy
(Optional) The ping strategy that is used for pinging. Either serial or concurrent.
Default: serial
If you configure the health check to be secure, you need to add the TLS client properties as described in the section above.
Example:
healthCheck:
  secure: false
  path: '/health'
  port: 8009
  headers:
    Host: 'mail.example.com'
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 3
  pingStrategy: concurrent
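Since health checks are configured per backend pool, the healthCheck block is expected to be nested inside a pool entry in backends.yml. The following sketch assumes it sits on the same level as type and hosts; the host addresses are placeholders:
global:
  - name: 'appsuite-mw'
    type: 'static'
    hosts:
      - 'http://192.168.10.1:8009'
      - 'http://192.168.10.2:8009'
    healthCheck:
      path: '/health'
      periodSeconds: 10
      failureThreshold: 3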