Concepts behind Redis Enterprise Cloud
This section describes the main concepts and architecture you need to know about Redis Enterprise Cloud.
Redis is, by design, a (mostly) single-threaded process; this keeps it simple and extremely performant, but there are times when clustering is advised. For Redis Enterprise Cloud, clustering is an advantageous and efficient way to scale a Redis database when the dataset is large enough to benefit from the RAM resources of more than one server. We recommend sharding a dataset once it reaches 25 GB in size (50 GB for Redis on Flash).
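As a rough illustration of this guidance, the per-shard thresholds above can be turned into a simple rule of thumb. This is a hypothetical helper, not part of any Redis client library; the function name and structure are assumptions for illustration only:

```python
import math

# Thresholds taken from the guidance above
RAM_SHARD_THRESHOLD_GB = 25   # standard (all-RAM) databases
ROF_SHARD_THRESHOLD_GB = 50   # Redis on Flash (RoF) databases

def recommended_shards(dataset_gb: float, redis_on_flash: bool = False) -> int:
    """Return a minimum shard count so no shard exceeds the threshold
    (hypothetical helper for illustration, not a Redis API)."""
    threshold = ROF_SHARD_THRESHOLD_GB if redis_on_flash else RAM_SHARD_THRESHOLD_GB
    return max(1, math.ceil(dataset_gb / threshold))

print(recommended_shards(60))                        # 3 shards for 60 GB in RAM
print(recommended_shards(60, redis_on_flash=True))   # 2 shards with RoF
```

In practice, Redis Enterprise Cloud manages shard placement for you; the sketch only shows why a 60 GB dataset would be split while a 10 GB one would not.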
There are six supported data eviction policies to choose from for each database:

| Policy | Description |
| --- | --- |
| allkeys-lru | Evicts the least recently used keys out of all keys |
| allkeys-random | Randomly evicts keys out of all keys |
| volatile-lru (default) | Evicts the least recently used keys out of keys with an "expire" field set |
| volatile-random | Randomly evicts keys with an "expire" field set |
| volatile-ttl | Evicts the shortest time-to-live and least recently used keys out of keys with an "expire" field set |
| noeviction | Returns an error if the memory limit has been reached when trying to insert more data |
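To make the LRU behavior concrete, here is a minimal sketch of what a policy like allkeys-lru does conceptually: when the key budget is exceeded, the least recently used key is dropped. The class below is a toy model for illustration only, not Redis's actual implementation (Redis uses an approximated LRU based on sampling):

```python
from collections import OrderedDict

class LRUEvictionCache:
    """Toy model of allkeys-lru: evict the least recently used key
    once the configured key budget is exceeded (illustrative only)."""

    def __init__(self, max_keys: int):
        self.max_keys = max_keys
        self.data = OrderedDict()  # insertion/access order = recency order

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)  # refresh recency on overwrite
        self.data[key] = value
        if len(self.data) > self.max_keys:
            self.data.popitem(last=False)  # evict least recently used

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # reading a key makes it most recent
        return self.data[key]

cache = LRUEvictionCache(max_keys=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")      # touch "a", so "b" is now least recently used
cache.set("c", 3)   # over budget: "b" is evicted
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```

The volatile-* policies follow the same idea but consider only keys that have an "expire" field set.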
Redis Enterprise Cloud (RC) supports persisting your data to disk on a per-database basis and in multiple ways. Unlike some cloud providers' Redis offerings, RC has two options for persistence, Append-Only File (AOF) and snapshot (RDB), and in addition, data persistence is always performed on persistent storage attached to the cloud instance (e.g. AWS EBS). This ensures that no data is lost in the event of a node failure, because the new cloud instance is attached to the existing persistent storage volume.
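For reference, these two persistence mechanisms correspond to the following directives in an open-source redis.conf. This is a sketch for orientation only; in Redis Enterprise Cloud the persistence option is selected per database in the console rather than by editing a config file, and the exact values shown here are illustrative:

```conf
# AOF persistence: append every write to a log file
appendonly yes
appendfsync everysec    # fsync the AOF roughly once per second

# RDB snapshot persistence: dump the dataset periodically
save 3600 1             # snapshot if at least 1 change in 3600 seconds
```

AOF generally offers stronger durability (at most about one second of writes lost with everysec), while RDB snapshots are more compact and faster to restore from.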