This section describes the main concepts that Redis Cloud Pro is built around.
Redis is, by design, a (mostly) single-threaded process; this keeps it simple and extremely performant, but there are times when clustering is advised. Redis Cloud Pro employs our Redis Enterprise technology to scale Redis databases on your behalf. A Redis Cloud Pro cluster is a set of managed Redis processes and cloud instances, with each process managing a subset of the database's keyspace. This approach overcomes scaling challenges by scaling horizontally across multiple cores and multiple instances' resources.
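Redis Enterprise handles shard placement for you, but the underlying idea of splitting a keyspace across processes can be sketched with a deterministic hash. The function below (an illustration, not Redis Enterprise code) mimics how open-source Redis Cluster maps a key to one of 16384 hash slots using CRC16, including its hash-tag rule:

```python
from binascii import crc_hqx  # CRC-16/XMODEM, the polynomial OSS Redis Cluster uses


def hash_slot(key: bytes, slots: int = 16384) -> int:
    """Map a key to one of `slots` partitions, OSS-Redis-Cluster style.

    Redis Enterprise manages placement internally; this only illustrates
    how a keyspace can be partitioned deterministically across shards.
    """
    # Honor hash tags: only the substring inside the first non-empty {...}
    # is hashed, so related keys like user:{42}:name and user:{42}:email
    # land on the same shard.
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end > start + 1:
            key = key[start + 1 : end]
    return crc_hqx(key, 0) % slots


print(hash_slot(b"123456789"))  # 12739, the slot given in the Redis Cluster spec
print(hash_slot(b"user:{42}:name") == hash_slot(b"user:{42}:email"))  # True
```

Because the mapping is a pure function of the key, any client or proxy can compute which shard owns a key without coordinating with the others.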
For each database, you can choose from these six supported data eviction policies:

| Option | Description |
| --- | --- |
| allkeys-lru | Evicts the least recently used (LRU) keys out of all keys in the database |
| allkeys-random | Randomly evicts keys out of all keys in the database |
| volatile-lru (default) | Evicts the least recently used (LRU) keys out of keys with an "expire" field set |
| volatile-random | Randomly evicts keys with an "expire" field set |
| volatile-ttl | Evicts the shortest time-to-live and least recently used keys out of keys with an "expire" field set |
| no eviction | Returns an error if the memory limit has been reached when trying to insert more data |

One mechanism to avoid eviction while still keeping performance is to use Redis on Flash.
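Eviction happens inside Redis itself, but the core idea behind the LRU policies is easy to see in a toy model. This sketch (my own illustration, not Redis code) shows allkeys-lru semantics: when the store is over capacity, the least recently touched key is evicted first:

```python
from collections import OrderedDict


class LruStore:
    """Toy cache illustrating allkeys-lru semantics (illustration only)."""

    def __init__(self, max_keys: int):
        self.max_keys = max_keys
        self.data = OrderedDict()  # insertion order doubles as recency order

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)  # touching a key makes it "recent"
        self.data[key] = value
        while len(self.data) > self.max_keys:
            evicted, _ = self.data.popitem(last=False)  # drop the LRU key
            print(f"evicted {evicted!r}")

    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)  # reads also refresh recency
        return self.data.get(key)


store = LruStore(max_keys=2)
store.set("a", 1)
store.set("b", 2)
store.get("a")     # "a" is now the most recently used key
store.set("c", 3)  # over capacity: "b" is evicted, not "a"
```

A volatile-* policy would restrict the eviction candidates to keys with an "expire" field set; the recency bookkeeping is the same.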
Redis Cloud Pro supports persisting your data to disk on a per-database basis, in multiple ways. Unlike some cloud providers' Redis offerings, Redis Cloud Pro has two options for persistence, Append-Only File (AOF) and Snapshot (RDB). In addition, data persistence is always performed on persistent storage attached to the cloud instance (e.g., AWS EBS). This ensures that no data is lost in case of a node failure, because the new cloud instance is attached to the existing persistent storage volume.
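In Redis Cloud Pro you select the persistence option per database in the console, so you never edit these settings yourself. For orientation only, the equivalent open-source `redis.conf` directives look roughly like this (the values shown are common open-source defaults, not what Redis Cloud configures on your behalf):

```
# Append-Only File (AOF): log every write, fsync once per second
appendonly yes
appendfsync everysec

# Snapshot (RDB): dump the dataset if at least 1 key changed in 900 seconds
save 900 1
```

AOF favors durability (at most about a second of writes at risk with `everysec`), while RDB snapshots trade some durability for compact point-in-time backups and faster restarts.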