Redis
App Suite middleware uses the Redis in-memory database server for several different use cases. This article describes the use cases and requirements, as well as possible configuration options and modes, focusing on the middleware component. For details regarding deployment and operation, please refer to the Deployment Guide.
Requirements
The middleware connects via the RESP protocol to a configured Redis endpoint. Standalone, Cluster, and Sentinel setups are supported; see Operation Modes below for further details.
Redis
As caching relies on individual TTLs on hash fields (HEXPIRE), the minimum supported Redis version is 7.4.
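Hash-field TTLs can be inspected with plain Redis commands; the key and field names below are only illustrative:

```
HSET ox-cache:example field1 "value"
HEXPIRE ox-cache:example 3600 FIELDS 1 field1   # per-field TTL of one hour
HTTL ox-cache:example FIELDS 1 field1           # remaining TTL of the field
```

HEXPIRE and HTTL were introduced with Redis 7.4, which is why older Redis versions cannot be used.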
Persistence is not mandatory and typically not used; instead, high availability of e.g. session data is achieved through redundant deployment modes (Redis Sentinel or Redis Cluster). However, depending on the number of served clients and the usage patterns, a decent amount of memory needs to be assigned to the Redis pods so that no data is discarded unintentionally.
Even without persistent storage, the main Redis instance should be deployed using a StatefulSet in Kubernetes.
Since Redis is required both for volatile cache data and for data that must be reliably available, such as client sessions, it is also possible (and recommended) to configure a separate Redis instance just for caching purposes, for which different settings can be used.
Shared Access
It is possible to share the same Redis service with other services beyond App Suite middleware. All keys follow a fixed naming scheme beginning with a common prefix like ox-cache-, ox-map- or ox-lock-. However, for separation purposes, it is still highly recommended to provide a distinct Redis instance, especially to not waste the resources required for Redis operation in Sentinel mode on services that don't need this level of redundancy.
Also, in Active/Active deployments with datacenters at multiple sites, there can be situations where the whole Redis database is flushed, which might lead to unexpected consequences for other services.
Configuration
To ease deployment of the Redis pods, a common App Suite stack chart for Redis is available. See the included README article for details.
Middleware configuration is mostly performed through properties with the prefix com.openexchange.redis. The redis section of the Core Middleware Chart only provides the most essential values, namely the operation mode and endpoint configuration. The following chapters describe certain aspects explicitly; please see the property documentation for the full list of available options.
Operation Modes
Redis can be used by the middleware in three different operation modes. Because user sessions and other important data are also held in Redis, the Sentinel operation mode is recommended to ensure high availability and fault tolerance. In contrast, if a dedicated Redis instance is used for caches, this can safely be configured in Standalone mode.
- Standalone: Connects to a single Redis instance.
- Cluster: Connects to a Redis Cluster with multiple master/replica nodes. The topology is refreshed every 10 minutes.
- Sentinel: Connects to Redis Sentinel, optionally using preferred reads from replicas.
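As a sketch, a Sentinel setup in middleware properties might look like the following. The property names below (beyond the documented com.openexchange.redis. prefix) are illustrative assumptions; consult the property documentation for the exact names:

```
# Illustrative only - verify the exact property names in the property documentation
com.openexchange.redis.mode = sentinel
com.openexchange.redis.hosts = redis-sentinel-1:26379,redis-sentinel-2:26379,redis-sentinel-3:26379
```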
Compression
App Suite middleware stores most data in Redis as JSON strings. To reduce their size, a compression strategy can be configured globally for values stored by the middleware. This is done by configuring a compression type, as well as a size threshold above which values are compressed before being stored in Redis. Example:
com.openexchange.redis.compressionType = deflate
com.openexchange.redis.minimumCompressionSize = 256
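The effect of such a threshold-based strategy can be sketched in Python. The function name and byte-level details are illustrative only and do not reflect the middleware's actual wire format (which, for instance, must also mark values as compressed so they can be transparently decompressed on read):

```python
import zlib

# Mirrors com.openexchange.redis.minimumCompressionSize = 256
MIN_COMPRESSION_SIZE = 256

def maybe_compress(value: bytes) -> bytes:
    """Compress with deflate only when the payload exceeds the threshold;
    small values are stored as-is to avoid compression overhead."""
    if len(value) >= MIN_COMPRESSION_SIZE:
        return zlib.compress(value)
    return value

small = b"short value"
large = b"x" * 1024
assert maybe_compress(small) == small           # below threshold: stored as-is
assert len(maybe_compress(large)) < len(large)  # repetitive JSON-like data shrinks well
```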
Connection Pool
The middleware connects to Redis via TCP, managing the resources in a local pool of connections. Depending on the usage scenario, the connection pool might need to be adjusted, e.g. to allow more concurrently used connections. Unless overridden, the connection pool is configured with these sane defaults (see the property documentation for more details):
com.openexchange.redis.connection.pool.maxTotal = 100
com.openexchange.redis.connection.pool.maxIdle = 100
com.openexchange.redis.connection.pool.minIdle = 0
com.openexchange.redis.connection.pool.maxWaitSeconds = 2
com.openexchange.redis.connection.pool.minIdleSeconds = 60
com.openexchange.redis.connection.pool.cleanerRunSeconds = 60
Circuit Breaker
Optionally, a circuit breaker can be enabled on the middleware pods, which blocks any access to Redis for a certain time after repeated errors have been detected. The circuit breaker can be enabled and configured through the properties with the prefix com.openexchange.redis.breaker.
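The general mechanism can be sketched as follows; the threshold and cooldown parameters are illustrative stand-ins for the actual settings under the com.openexchange.redis.breaker prefix:

```python
import time

class CircuitBreaker:
    """Minimal sketch: open the circuit after `threshold` consecutive
    failures and reject calls for `cooldown` seconds."""

    def __init__(self, threshold=5, cooldown=30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        if self.opened_at is None:
            return True
        if now - self.opened_at >= self.cooldown:
            # Half-open: cooldown elapsed, permit a trial call again
            self.opened_at = None
            self.failures = 0
            return True
        return False

    def record_failure(self, now=None):
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = time.monotonic() if now is None else now

    def record_success(self):
        self.failures = 0
        self.opened_at = None

breaker = CircuitBreaker(threshold=2, cooldown=30.0)
breaker.record_failure(now=0.0)
breaker.record_failure(now=0.0)      # threshold reached: circuit opens
assert not breaker.allow(now=1.0)    # calls rejected while open
assert breaker.allow(now=31.0)       # cooldown elapsed: trial call allowed
```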
Monitoring
Besides monitoring data for Redis itself (e.g. through the Prometheus exporter available in the App Suite Redis stack chart), the middleware core also exposes some valuable data at the /metrics endpoint.
- Connection Pool: See metrics beginning with appsuite_redis_connections_
- Commands: See metrics beginning with lettuce_command_
Besides these general, low-level metrics, use-case-specific monitoring data is also available; see the following sections for details.
Use Cases
As mentioned before, Redis is used for different features of the middleware.
Caching
Various data is held in caches for quick access and to reduce load on the persistent storages. In contrast to previous generations of App Suite, where each middleware node used its own caching layer based on JCS with an invalidation channel on top of Hazelcast, the main caches now reside directly within Redis and are equally available to all middleware pods at any time.
Configuration
It is possible (and recommended) to configure a separate Redis instance for volatile cache data by using "cache" as infix of the property base names, e.g. com.openexchange.redis.cache.mode. Awareness of this special Redis instance for cache data is enabled through com.openexchange.redis.cache.enabled.
If a separate Redis instance is used for caches, it is typically sufficient to run it in Standalone mode, as the data held there can always be reconstructed from the persistent storage layer. For the same reason, eviction of entries once a defined memory limit is reached is not problematic, so a lenient memory management configuration can be set for Redis.
For example, to start a Redis instance without persistence/snapshotting, with a maximum memory of 2GB and a policy that evicts any key using approximated LRU:
maxmemory 2GB
maxmemory-policy allkeys-lru
appendonly no
save ""
Please see the self-documented Redis configuration file for further details.
Semantics
App Suite middleware uses the cache-aside pattern: the cache is populated on demand once data is requested and has been loaded from the persistent storage (i.e. the database). Consecutive read operations then use the previously cached values. Entries in the cache are invalidated explicitly once the associated data is changed through one of the APIs, so that the next read attempt reads the updated data from the persistent storage and populates the cache again.
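The pattern can be sketched with in-memory stand-ins for Redis and the database; the key and value names are illustrative only:

```python
# Dict stand-ins for Redis and the database; the real middleware
# talks RESP to Redis and SQL to the persistent storage.
cache = {}
database = {"user:1": {"name": "Alice"}}

def read(key):
    """Cache-aside read: consult the cache first, fall back to the
    database and populate the cache on a miss."""
    if key in cache:
        return cache[key]
    value = database[key]
    cache[key] = value          # populate on demand (a TTL would be set here)
    return value

def write(key, value):
    """Write to the database, then invalidate the cache entry so the
    next read repopulates it with fresh data."""
    database[key] = value
    cache.pop(key, None)

assert read("user:1") == {"name": "Alice"}   # miss: loaded from database
assert "user:1" in cache                     # now cached
write("user:1", {"name": "Bob"})
assert "user:1" not in cache                 # invalidated on change
assert read("user:1") == {"name": "Bob"}     # repopulated from database
```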
All cache data put into Redis uses keys prefixed with ox-cache:. By default, all entries are decorated with a TTL of 3600 seconds (one hour), after which they are evicted automatically from the cache. This default duration can be adjusted through the property com.openexchange.cache.v2.defaultExpirationSeconds; however, there are still cases where the implementation determines the expiration time statically.
Besides the cache in Redis, a very thin, thread-local cache is used within the scope of a served client request, remembering in memory data that has already been loaded from Redis to speed up repeated accesses to the same keys. This is enabled by default, but can be controlled via the property com.openexchange.cache.v2.redis.threadLocal.
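Conceptually, this request-scoped layer behaves like the following sketch, which uses Python's threading.local as a stand-in for the middleware's internal request-scoped storage; the function names are illustrative:

```python
import threading

# Per-thread storage: each request-serving thread sees its own cache
_request_cache = threading.local()

def get_with_request_cache(key, load_from_redis):
    """Within one request's thread, remember values already fetched from
    Redis so repeated lookups of the same key hit local memory."""
    store = getattr(_request_cache, "store", None)
    if store is None:
        store = _request_cache.store = {}
    if key not in store:
        store[key] = load_from_redis(key)
    return store[key]

calls = []
def fake_redis_get(key):
    calls.append(key)                 # count round-trips to "Redis"
    return f"value-for-{key}"

assert get_with_request_cache("k1", fake_redis_get) == "value-for-k1"
assert get_with_request_cache("k1", fake_redis_get) == "value-for-k1"
assert calls == ["k1"]                # Redis consulted only once for the repeated key
```

In the middleware, this store would additionally be cleared at the end of each served request.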
Sizing
Using a distributed caching solution means that while the memory demand of the middleware pods decreases, the memory requirements of Redis increase accordingly, since the data now resides there. If a dedicated Redis instance for cache data is used (see configuration above), enough memory should be assigned to the Redis cache pod that eviction due to the defined maximum memory policy does not happen too often. However, if no dedicated instance is available for caching, eviction must be avoided entirely so that no unrecoverable data gets lost.
During tests with synthetic data, we observed a memory consumption of about 10GB for 1M active user sessions:
- This covers data held in caches only (no sessions etc.)
- 10GB as per the Redis metric used_memory_human, which is different from the memory allocated to Redis by the OS
- "Active" means that the client performed at least one request to App Suite middleware within a period of 12 minutes (in the default configuration)
- Measured in a simulation with synthetic user account data only; for real accounts (with e.g. lots of folders, attributes etc.), this will look different
Due to the described inaccuracies, this should only be seen as a very rough, minimal estimate. It is therefore still important to keep an eye on the relevant monitoring data and to adjust the memory- and eviction-related settings as needed.
Monitoring
Of course the most important metrics are generated by the Redis pods themselves. See e.g. the App Suite stack chart for Redis for how to enable the Prometheus exporter for Redis.
Besides these server-side monitoring options, some metrics can also be collected from the App Suite middleware's client perspective. Cache metrics can be enabled through the following property:
com.openexchange.cache.v2.redis.withMetrics=true
This yields statistics about the cache hit/miss ratio, as well as put/get/remove operation runtimes. The relevant metrics begin with the prefix appsuite_redis_cache_.
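A typical use of these metrics is a hit-ratio panel in Prometheus/Grafana. The metric name in the following PromQL sketch is hypothetical (only the appsuite_redis_cache_ prefix is documented); check the /metrics endpoint for the exact names and labels:

```
# Hypothetical metric and label names - verify against /metrics output
sum(rate(appsuite_redis_cache_gets_total{result="hit"}[5m]))
  /
sum(rate(appsuite_redis_cache_gets_total[5m]))
```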
Similar metrics are also available for the thread-local caching layer, which can be enabled via:
com.openexchange.cache.v2.redis.metrics.threadLocal=true
These metrics are prefixed with appsuite_redis_cache_threadlocal_.
Publish / Subscribe
For inter-node communication and notification, App Suite middleware leverages the Pub/Sub feature of Redis. Use cases include:
- Invalidation Messages: Inform all nodes about changes that require invalidating locally held data, like cached JSlobs.
- Push Subscriptions: Broadcast changes to locally registered push subscribers.
- Session Events: Inform other nodes about removed sessions.
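Redis Pub/Sub delivers fire-and-forget messages to all currently subscribed nodes. The channel name and payload below are purely illustrative (the actual channel names and message formats are middleware-internal):

```
SUBSCRIBE ox-events:invalidation                      # each middleware pod subscribes
PUBLISH ox-events:invalidation '{"type":"jslob","user":42}'   # publishing node broadcasts a change
```

Note that Pub/Sub messages are not persisted: a node that is down while a message is published will not receive it, which is acceptable for invalidation-style notifications.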
Session Storage
Besides pure caching, Redis is most prominently used by the middleware as session storage. A separate documentation article is available at Redis Session Storage.
Distributed Maps
Besides the data held in ordinary caches, another abstraction layer in App Suite middleware makes use of Redis distributed maps to provide access to certain shared data, so that it is equally accessible for all middleware nodes. Use cases include:
- State Management: Certain OpenID or SAML workflows need to persist information of typical login- or logout-flows between client and authorization server.
- Push Listeners: Management of registered push listeners.
- OAuth Callbacks: Certain OAuth flows need to remember a callback URL the client is redirected to.
- Rate Limiting: Controls the maximum allowed permits per timeframe for different use cases like SMS transport.
- Token Login: Stores tokens or reserved sessions to support special login flows.
Distributed Locks
For mutual exclusion among middleware nodes, distributed locks for different usage scenarios are modeled based on Redis, using keys prefixed with ox-lock.
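A common way to model such locks in Redis, shown here as an illustration (the key name and token are placeholders; the middleware's actual lock implementation may differ), is an atomic SET with the NX and EX options, released only by the holder of the matching token:

```
# Acquire: succeeds only if the key does not yet exist; auto-expires after 30s
SET ox-lock:example-resource my-unique-token NX EX 30

# Release: delete only if the token still matches, atomically via a Lua script
EVAL "if redis.call('GET', KEYS[1]) == ARGV[1] then return redis.call('DEL', KEYS[1]) else return 0 end" 1 ox-lock:example-resource my-unique-token
```

The expiry ensures a crashed node cannot hold a lock forever, while the token check prevents one node from releasing a lock that has since been re-acquired by another.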