Operator Guide (deprecated)
Introduction
In our application architecture, reliable and fault-tolerant central session storage is crucial for a seamless user experience. To achieve this, we use a Redis Operator with Kubernetes. This Redis Operator provides a RedisFailover object that enables automated management of Redis Sentinels and Redis replicas, ensuring high availability and data integrity. This guide outlines the steps to deploy and manage a RedisFailover in a Kubernetes environment, tailored to serve as a resilient central session storage solution for our application.
IMPORTANT: We strongly recommend running Redis in Sentinel mode in production environments, with at least three Redis Sentinel and three Redis instances, to ensure the highest level of availability and reliability for the central session storage.
Installation
To create a RedisFailover object within a Kubernetes cluster, the operator must be deployed first. Please follow the official installation guidelines provided in the Redis Operator GitHub repository.
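As a sketch of what the installation typically looks like, the operator can be installed with Helm from the Spotahome chart repository (the namespace name is an assumption; adjust it to your environment):

```shell
# Add the Spotahome chart repository and install the operator
# into its own namespace (assumed here to be "redis-operator").
helm repo add redis-operator https://spotahome.github.io/redis-operator
helm repo update
helm install redis-operator redis-operator/redis-operator \
  --namespace redis-operator \
  --create-namespace
```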
- Verify that the Redis Operator installation was successful:

```
$ kubectl get pods -n redis-operator
NAME                              READY   STATUS    RESTARTS   AGE
redis-operator-7bd99fdfbc-nr5k9   1/1     Running   0          7d4h
```
Configuration
Deploy RedisFailover
After deploying the Redis Operator, a new API is available, and it is now possible to create, update, and delete RedisFailover objects.
The following sample YAML manifest defines a RedisFailover object that includes settings for Redis Sentinel and Redis replicas, exporters, resource requests and limits, and storage. The exporter settings allow for optional integration with monitoring tools.
```yaml
### Metadata
apiVersion: databases.spotahome.com/v1
kind: RedisFailover
metadata:
  name: session-storage
### Specification
spec:
  ### Redis Sentinel Configuration
  sentinel:
    # Three replicas of the Redis Sentinel instances for redundancy and fault tolerance.
    replicas: 3
    # Docker image for Redis Sentinel.
    image: redis:7-alpine
    exporter:
      # Enables the Redis Sentinel exporter.
      enabled: true
      # Docker image for the Redis Sentinel exporter.
      image: leominov/redis_sentinel_exporter:1.7.1
    # CPU/memory resource requests/limits.
    resources:
      requests:
        cpu: 100m
      limits:
        memory: 100Mi
  ### Redis Configuration
  redis:
    # Three replicas of Redis for high availability and load distribution.
    replicas: 3
    # Docker image for Redis.
    image: redis:7-alpine
    exporter:
      # Enables the Redis exporter.
      enabled: true
      # Docker image for the Redis exporter.
      image: oliver006/redis_exporter:alpine
      # Additional arguments for the Redis exporter.
      args:
        - --web.telemetry-path
        - /metrics
      # Specifies the log format for the Redis exporter.
      env:
        - name: REDIS_EXPORTER_LOG_FORMAT
          value: txt
    # CPU/memory resource requests/limits.
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        memory: 500Mi
    storage:
      # Retains data in the Persistent Volume Claim (PVC) even if the
      # RedisFailover instance is deleted.
      keepAfterDeletion: true
      persistentVolumeClaim:
        metadata:
          # Sets the name of the Persistent Volume Claim.
          name: session-storage-pvc
        spec:
          # Allows the volume to be mounted as read-write by a single node.
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              # Storage size of 1 gigabyte for each Redis instance.
              storage: 1Gi
```
Note: Adjustments can be made based on specific deployment requirements.
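Assuming the manifest above is saved as session-storage.yaml (a hypothetical file name), it can be applied and verified like this; the rfr-/rfs- pod name prefixes are the operator's usual convention for Redis and Sentinel pods:

```shell
# Apply the RedisFailover manifest in the target namespace.
kubectl apply -f session-storage.yaml --namespace <YOUR_NAMESPACE>

# Check the RedisFailover object and the pods the operator creates;
# Redis pods are typically prefixed rfr-, Sentinel pods rfs-.
kubectl get redisfailover session-storage --namespace <YOUR_NAMESPACE>
kubectl get pods --namespace <YOUR_NAMESPACE> -l app.kubernetes.io/name=session-storage
```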
Monitoring
Prometheus Monitoring
Using the Redis Operator, it is possible to define metrics exporters that expose metrics from the Redis Sentinels and Redis instances in Prometheus format. Two tools, Prometheus Redis Metrics Exporter and Prometheus Redis Sentinel Metrics Exporter, collect and expose detailed Redis and Redis Sentinel metrics for monitoring and analysis.
Note: Ensure proper configuration to align with your monitoring requirements.
ServiceMonitor
Once the exporters are configured, we may need to update Prometheus to monitor their endpoints. For the Prometheus Operator, we have to create a CRD-based object called ServiceMonitor.
Here is an example YAML manifest of what these two ServiceMonitor objects might look like:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: redis-exporter
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: session-storage
      app.kubernetes.io/component: redis
  endpoints:
    - targetPort: 9121
      path: /metrics
      interval: 15s
  namespaceSelector:
    matchNames:
      - monitoring
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: sentinel-exporter
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: session-storage
      app.kubernetes.io/component: sentinel
  endpoints:
    - targetPort: 9355
      path: /metrics
      interval: 15s
  namespaceSelector:
    matchNames:
      - monitoring
```
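Before wiring up Prometheus, you can check that an exporter endpoint actually serves metrics by port-forwarding to one of the pods (the pod name here follows the operator's rfr- naming convention and is an assumption; the port matches the exporter configuration above):

```shell
# Forward the Redis exporter port of the first Redis pod to localhost.
kubectl port-forward pod/rfr-session-storage-0 9121:9121 --namespace <YOUR_NAMESPACE> &

# Fetch the metrics endpoint; it should return Prometheus-format metrics.
curl -s http://localhost:9121/metrics | head
```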
Grafana Dashboard
There are two dashboards for monitoring the statistics: one for the Redis instances and one for the Redis Sentinels.
Helm Chart Integration
To configure the Redis settings in the Core Middleware Helm Chart, please follow these steps:
- First, we need to find the Redis Sentinel Kubernetes Service, which typically follows the pattern rfs-<NAME>:
```
$ kubectl get service --namespace <YOUR_NAMESPACE> --selector "app.kubernetes.io/component=sentinel"
NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
rfs-session-storage   ClusterIP   10.105.173.172   <none>        26379/TCP   16m
```
In this example, the Sentinel service is named rfs-session-storage and listens on port 26379; we will need both later on.
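To confirm that Sentinel is reachable and tracking a master, you can run redis-cli against the service from a temporary pod. This is a sketch: the test pod name is hypothetical, and the master set name "mymaster" matches the default used further below.

```shell
# Query Sentinel for the current master address of the "mymaster" set.
kubectl run redis-cli-test --rm -it --restart=Never \
  --namespace <YOUR_NAMESPACE> --image redis:7-alpine -- \
  redis-cli -h rfs-session-storage -p 26379 sentinel get-master-addr-by-name mymaster
```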
- Open the Helm chart's values.yaml file and search for the redis section:
```yaml
redis:
  # Enabled by default.
  enabled: true
  # The Redis operation mode.
  mode: sentinel
  # The FQDN of the Sentinel Kubernetes Service from above,
  # which has the pattern: <SENTINEL_SERVICE_NAME>.<YOUR_NAMESPACE>.svc.cluster.local
  hosts: ["rfs-session-storage.<YOUR_NAMESPACE>.svc.cluster.local:26379"]
  # The name of the Sentinel master set; the default is most often "mymaster".
  sentinelMasterId: "mymaster"
```
Note: Since Redis is mandatory for the Open-Xchange Middleware, we spawn an internal Redis standalone instance if hosts is empty or null. This is fine for test deployments, but not for production environments.
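With the values in place, the change can be rolled out via helm upgrade. This is only a sketch: the release name, chart reference, and namespace are placeholders to substitute with your own.

```shell
# Apply the updated values.yaml to the existing release.
helm upgrade <RELEASE_NAME> <CHART_REFERENCE> \
  --namespace <YOUR_NAMESPACE> \
  --values values.yaml
```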
- Verify Health Check Status:

```
$ kubectl exec --stdin --tty <RELEASE_NAME>-core-mw-node-0 --namespace <YOUR_NAMESPACE> -- curl -v http://localhost:8009/health
Defaulted container "core-mw" out of: core-mw, init-middleware (init)
...
{
  "status": "UP",
  "checks": [
    {
      "name": "allPluginsLoaded",
      "status": "UP"
    },
    ...
    {
      "name": "redis",
      "status": "UP"
    }
  ],
  "service": {
    "name": "appsuite-middleware",
    "version": "8.20.0",
    "date": "2023-11-22T16:39:58,190+0100",
    "timeZone": "Europe/Berlin",
    "locale": "en",
    "charset": "UTF-8"
  }
}
```
Congratulations! Everything is up and running and in a healthy state.