Getting Started with Redis Enterprise Software using Kubernetes
Kubernetes has been widely adopted as a simpler way to orchestrate containers. With the Redis Enterprise Operator deployment, getting a Redis Enterprise cluster running on Kubernetes is straightforward.
Logs

Each redis-enterprise container stores its logs under /var/opt/redislabs/log. When using persistent storage, this path is automatically mounted on the redis-enterprise-storage volume. This volume can easily be accessed by a sidecar, i.e. a container residing in the same pod. For example, in the REC (Redis Enterprise Cluster) spec you can add a sidecar container, such as busybox, and mount the logs into it:

```yaml
sideContainersSpec:
  - name: busybox
    image: busybox
    args:
      - /bin/sh
      - -c
      - while true; do echo "hello"; sleep 1; done
    volumeMounts:
      - name: redis-enterprise-storage
        mountPath: /home/logs
        subPath: logs
```

The logs can now be accessed from inside the sidecar, for example by running kubectl exec against the busybox container and reading the files under /home/logs.
Redis Labs bases its Kubernetes architecture on several vital concepts.

Layered architecture

Kubernetes is an excellent orchestration tool, but it was not designed to deal with all the nuances associated with operating Redis Enterprise. It can therefore fail to react correctly to internal Redis Enterprise edge cases or failure conditions. Also, Kubernetes orchestration runs outside the Redis Enterprise cluster deployment and may fail to trigger failover events, for example in split-network scenarios.
To deploy a Redis Enterprise Cluster with the Redis Enterprise Operator, the spec in the redis-enterprise-cluster.yaml file should include a persistentSpec section:

```yaml
spec:
  nodes: 3
  persistentSpec:
    enabled: true
    storageClassName: "standard"
    volumeSize: "23Gi" # optional
```

Persistent storage is a requirement for this deployment type.

Volume size

volumeSize is an optional definition. By default, if the definition is omitted, the Operator allocates five times (5x) the amount of memory (RAM) defined for the nodes, which is the recommended persistent storage size as described in the Hardware requirements article.
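As a sketch of the 5x default (the resource values here are illustrative, not taken from the original article): with 4Gi of memory per node and volumeSize omitted, the Operator would provision roughly 20Gi of persistent storage per node.

```yaml
spec:
  nodes: 3
  redisEnterpriseNodeResources:  # node CPU/memory; values are illustrative
    requests:
      cpu: "2"
      memory: 4Gi
    limits:
      cpu: "2"
      memory: 4Gi
  persistentSpec:
    enabled: true
    storageClassName: "standard"
    # volumeSize omitted: defaults to 5x node memory, i.e. about 20Gi per node here
```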
Overview

When a Redis Enterprise cluster loses contact with more than half of its nodes, either because of failed nodes or a network split, the cluster stops responding to client connections. When this happens, you must recover the cluster to restore the connections. You can also perform cluster recovery to reset cluster nodes, to troubleshoot issues, or in the case of active/passive failover. The cluster recovery for Kubernetes automates these recovery steps:
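As a sketch of how the automated recovery is typically triggered, a recovery flag is set on the REC spec. The field name below (clusterRecovery) is an assumption based on recent Operator versions; verify it against your Operator's CRD before use.

```yaml
# Excerpt of the RedisEnterpriseCluster spec: setting this flag asks the
# Operator to run the automated cluster recovery flow.
spec:
  clusterRecovery: true
```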
The Redis Enterprise Operator is the fastest, most efficient way to deploy and maintain a Redis Enterprise Cluster in Kubernetes.

What is an Operator?

An Operator is a Kubernetes custom controller that extends the native Kubernetes API. Operators were developed to handle sophisticated, stateful applications that the default Kubernetes controllers cannot. While stock Kubernetes controllers, such as StatefulSets, are ideal for deploying, maintaining, and scaling simple stateless applications, they are not equipped to handle access to stateful resources, or the upgrading, resizing, and backup of more elaborate clustered applications such as databases.
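Concretely, once the Operator is installed it watches for a custom resource that declares the desired cluster. A minimal example might look like the following (the resource name is illustrative, and the apiVersion may differ across Operator releases):

```yaml
apiVersion: app.redislabs.com/v1
kind: RedisEnterpriseCluster
metadata:
  name: rec
spec:
  nodes: 3
```

Applying this manifest with kubectl is all that is needed; the Operator reconciles the cluster to match the declared spec.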
This article reviews the mechanisms and methods available for sizing and scaling a Redis Enterprise Cluster deployment. For minimum and recommended sizing, always follow the sizing guidelines detailed in the Redis Enterprise Hardware Requirements.

Sizing and scaling cluster nodes

Setting the number of cluster nodes

Define the number of cluster nodes in the redis-enterprise-cluster.yaml file:

```yaml
spec:
  nodes: 3
```

The number of nodes in the cluster must be an odd number equal to or greater than 3.
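To scale the cluster out, edit the node count in the spec and reapply the manifest. A sketch (the jump from 3 to 5 is illustrative):

```yaml
# redis-enterprise-cluster.yaml (excerpt)
spec:
  nodes: 5   # was 3; must remain an odd number >= 3
```

Reapplying with kubectl apply -f redis-enterprise-cluster.yaml lets the Operator add the new nodes to the cluster.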
Redis Labs implements rolling updates for software upgrades in Kubernetes deployments. Rolling updates allow a deployment to be upgraded with zero downtime by incrementally replacing the Pods' Redis Enterprise Cluster instances with new ones. The following illustrations depict how a rolling update occurs:

- Each hexagon represents a node
- Each box represents a Pod

The Pods are updated one by one, in the diagram from left to right. The upgrade progresses to the next Pod only once the current Pod has completed the upgrade process successfully.
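A rolling update of this kind is typically triggered by changing the Redis Enterprise image version in the REC spec. The field names below follow the REC CRD, but the version tag is illustrative; use a tag supported by your Operator version.

```yaml
# Excerpt of the RedisEnterpriseCluster spec: bumping the version tag
# causes the Operator to roll the Pods one at a time.
spec:
  redisEnterpriseImageSpec:
    repository: redislabs/redis
    versionTag: 6.2.10-107   # illustrative; check your Operator's supported versions
```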