Compress Values

    Redis and its clients are typically IO bound, and the IO cost is usually at least two orders of magnitude higher than the rest of the request/reply sequence. Redis does not compress the values stored in it by default, so it is worth compressing your data before storing it in Redis. The smaller payload gives you higher throughput, lower latency, and lower cost.
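
    A minimal sketch of the idea in Python with redis-py and zlib (the key name, flag byte, and 100-byte threshold are illustrative choices, not fixed rules):

        import zlib
        import redis

        r = redis.Redis()

        def set_compressed(key, value: bytes):
            # Compress only values large enough to benefit; a one-byte
            # flag records whether the stored bytes are compressed.
            if len(value) > 100:
                r.set(key, b"z" + zlib.compress(value))
            else:
                r.set(key, b"r" + value)

        def get_compressed(key):
            data = r.get(key)
            if data is None:
                return None
            return zlib.decompress(data[1:]) if data[:1] == b"z" else data[1:]

        set_compressed("page:home", b"<html>" + b"x" * 5000 + b"</html>")
        print(len(get_compressed("page:home")))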

    Use Smaller Keys

    Redis keys can quietly drive up the memory consumption of your Redis instances. In general you should prefer descriptive keys, but if you have a large dataset with millions of keys, those long key names can cost you real money.

    How to Convert to Smaller Keys

    In a well written application, switching to shorter keys usually involves updating a few constant strings in the application code.
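
    For example (the key patterns below are hypothetical), shortening a per-user key saves a handful of bytes that is then multiplied across every key in the dataset:

        # Before: descriptive but long
        # followers_key = "user:{id}:followers"

        # After: abbreviated; saves ~11 bytes on each of millions of keys
        followers_key = "u:{id}:fl"

        print(followers_key.format(id=1000))  # u:1000:fl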

    Switch to 32 Bits

    Redis gives you the following statistics for a 64-bit machine:

    - An empty instance uses ~3 MB of memory.
    - 1 million small key -> string value pairs use ~85 MB of memory.
    - 1 million key -> hash value pairs, each hash representing an object with 5 fields, use ~160 MB of memory.

    A 64-bit build can address far more memory than a 32-bit one, but its pointers are twice as large, so it uses more memory per key. If you are sure that your data size will not exceed 3 GB, then storing it in a 32-bit build is a good option.
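
    If you compile Redis from source, the project's Makefile already provides a 32-bit build target (this assumes the 32-bit development libraries are installed on the build machine):

        make 32bit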

    Upgrade Redis Version

    Redis 4.0 is the latest version that has been launched, and it contains several big improvements over previous versions:

    - Support for a mixed RDB+AOF persistence format.
    - Improvements in memory usage and performance.
    - A new MEMORY command for inspecting memory usage.
    - Active memory defragmentation.
    - Faster Redis Cluster key creation.

    Trade Offs

    Redis 4.0 is still not a very battle tested release, so Redis 3.2 is a better pick for critical applications until Redis 4 matures.
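
    As a small example of the new introspection support, redis-py exposes the MEMORY USAGE subcommand (Redis 4.0+) as memory_usage(); the key below is illustrative:

        import redis

        r = redis.Redis()
        r.set("greeting", "hello")

        # Bytes consumed by the key and its value, including overhead
        print(r.memory_usage("greeting"))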

    Use Better Serializer

    Redis does not have a specific data type for storing serialized objects; they are stored as byte arrays in Redis. If you serialize your Java, Python, or PHP objects using the language's regular mechanism, they can be quite large, which impacts both memory consumption and latency.

    Which Serializers to Use

    Instead of the default serializer of your programming language (Java serialized objects, Python pickle, PHP serialize, etc.), switch to a better library.
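
    As one hedged example, the snippet below swaps Python's pickle for MessagePack via the msgpack library (one reasonable choice among several; the object and key are illustrative):

        import msgpack
        import redis

        r = redis.Redis()
        user = {"id": 1000, "name": "alice", "logins": 42}

        # msgpack typically produces a noticeably smaller byte array
        # than pickle for simple objects like this one
        r.set("user:1000", msgpack.packb(user))

        print(msgpack.unpackb(r.get("user:1000")))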

    Combine Smaller Strings to Hashes

    The string data type has an overhead of about 90 bytes on a 64-bit machine. In other words, calling set foo bar uses about 96 bytes, of which 90 bytes is overhead. You should use the string data type only if:

    - The value is greater than 100 bytes.
    - You are storing encoded data in the string - JSON encoded or a protocol buffer.
    - You are using the string data type as an array or a bitset.

    If you are not doing any of the above, then use hashes, as in the sketch below.
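
    A minimal sketch, assuming illustrative key and field names: grouping several small values as fields of one hash pays the per-key overhead once instead of once per value:

        import redis

        r = redis.Redis()

        # Instead of many small string keys...
        # r.set("user:1000:name", "alice")
        # r.set("user:1000:city", "Paris")

        # ...store them as fields of a single hash:
        r.hset("user:1000", mapping={"name": "alice", "city": "Paris"})
        print(r.hget("user:1000", "name"))  # b'alice'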

    Switch from Set to Intset

    Sets that contain only integers are extremely memory efficient. If your set contains strings, try to use integers by mapping the string identifiers to integers. You can either use enums in your programming language, or you can use a Redis hash data structure to map values to integers. Once you switch to integers, Redis will use the intset encoding internally. This encoding is extremely memory efficient. By default the value of set-max-intset-entries is 512, but you can change this value in redis.conf.
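
    You can verify the encoding with OBJECT ENCODING; a sketch with illustrative keys (note that recent Redis versions may report a compact listpack encoding for small string sets instead of hashtable):

        import redis

        r = redis.Redis()

        r.delete("visitors")
        r.sadd("visitors", 1000, 1001, 1002)
        print(r.object("encoding", "visitors"))  # b'intset'

        # A single string member forces a less compact encoding
        r.sadd("visitors", "alice")
        print(r.object("encoding", "visitors"))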

    Switch to Bloom Filter or HyperLogLog

    Unique items can be difficult to count. Usually this means storing every unique item and then somehow recalling this information. With Redis this can be accomplished using a set and a single command; however, for very large sets both the storage and the time complexity become prohibitive. HyperLogLog provides a probabilistic alternative for counting. If your set contains a very large number of elements and you are only using it for existence checks or to eliminate duplicates, then you will benefit from using a bloom filter instead.
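
    HyperLogLog is built into Redis; a sketch with an illustrative key (bloom filters, by contrast, need the separate RedisBloom module and its BF.ADD/BF.EXISTS commands):

        import redis

        r = redis.Redis()

        # A HyperLogLog occupies at most ~12 KB regardless of cardinality,
        # with a standard error of about 0.81%
        r.pfadd("uniq:visitors", "alice", "bob", "carol", "alice")
        print(r.pfcount("uniq:visitors"))  # 3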

    Shard Big Hashes to Small Hashes

    If you have a hash with a large number of key-value pairs, and each pair is small enough, break it into smaller hash shards to save memory. To shard a hash, we need to choose a method of partitioning our data. The hash's own keys can be used for partitioning them into different shards. The number of shards is determined by the total number of keys we want to store and the shard size, as in the sketch below.
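
    A minimal sketch, assuming numeric field ids and an illustrative shard size (non-numeric ids could be bucketed with a hash function such as crc32 instead):

        import redis

        r = redis.Redis()
        SHARD_SIZE = 1000  # keep each shard small enough to stay compact

        def shard_key(base, field):
            # Every SHARD_SIZE consecutive ids land in the same shard
            return f"{base}:{int(field) // SHARD_SIZE}"

        def shard_hset(base, field, value):
            r.hset(shard_key(base, field), field, value)

        def shard_hget(base, field):
            return r.hget(shard_key(base, field), field)

        shard_hset("user:city", 1000042, "Paris")
        print(shard_hget("user:city", 1000042))  # b'Paris'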

    Convert Hashtable to Ziplist for Hashes

    Hashes have two possible encodings - hashtable and ziplist. Which encoding is used is decided by two configuration settings Redis provides - hash-max-ziplist-entries and hash-max-ziplist-value. By default redis.conf has these settings as:

    hash-max-ziplist-entries 512
    hash-max-ziplist-value 64

    If a hash exceeds either of these limits, it is automatically stored as a hashtable instead of a ziplist.
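
    Both settings can also be changed at runtime with CONFIG SET; a sketch with illustrative values (recent Redis versions report the compact encoding as listpack rather than ziplist):

        import redis

        r = redis.Redis()

        r.config_set("hash-max-ziplist-entries", 512)
        r.config_set("hash-max-ziplist-value", 64)

        r.delete("user:1000")
        r.hset("user:1000", mapping={"name": "alice", "city": "Paris"})
        # A small hash stays in the compact encoding
        print(r.object("encoding", "user:1000"))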

    Convert to a List Instead of Hash

    A Redis hash stores field names as well as values. If you have thousands of small hash objects with similar field names, the memory used by the field names adds up. To prevent this, consider using a list instead of a hash: the field names become fixed indexes into the list. While this may save memory, you should only use this approach if you have thousands of hashes, and each of those hashes has the same fields.
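
    A sketch of the technique, with hypothetical field positions:

        import redis

        r = redis.Redis()

        # Fixed positions replace the hash field names
        NAME, CITY, AGE = 0, 1, 2

        # Instead of r.hset("user:1000", mapping={"name": ..., "city": ..., "age": ...})
        r.delete("user:1000")
        r.rpush("user:1000", "alice", "Paris", "30")

        print(r.lindex("user:1000", CITY))  # b'Paris'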

    Compress Field Names

    A Redis hash consists of fields and their values. Like values, field names also consume memory, so keep this in mind when assigning field names. If you have a large number of hashes with similar field names, the memory adds up significantly. To reduce memory usage, you can use smaller field names.

    What Do We Mean By Compress Field Names

    Referring to the previous example of converting hashes to lists, we had a hash holding user details; the same abbreviation idea applies to the field names themselves.
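
    A sketch with hypothetical abbreviations ("name" -> "n", "city" -> "c", "age" -> "a"); a few bytes saved per field, multiplied across millions of hashes, becomes significant:

        import redis

        r = redis.Redis()

        r.hset("user:1000", mapping={"n": "alice", "c": "Paris", "a": "30"})
        print(r.hget("user:1000", "c"))  # b'Paris'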

    Avoid Dynamic Lua Script

    Refrain from generating dynamic scripts, which can cause your Lua script cache to grow and get out of control. Memory is consumed for every script we have loaded, for two reasons: the server.lua_scripts dictionary holds the original script text, and Lua internally keeps the compiled byte-code. So if you have to use dynamic scripting, then just use plain EVAL, as there is no point in loading the scripts first.
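
    The usual way to keep scripts static is to pass variable data through KEYS and ARGV instead of baking it into the script text; a sketch with illustrative values:

        import redis

        r = redis.Redis()

        # BAD: a new script body per call; every variant stays cached
        # r.eval(f"return redis.call('SET', KEYS[1], '{value}')", 1, key)

        # GOOD: one constant script; the values travel as arguments
        SCRIPT = "return redis.call('SET', KEYS[1], ARGV[1])"
        r.eval(SCRIPT, 1, "greeting", "hello")
        print(r.get("greeting"))  # b'hello'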

    Enable Compression for List

    A list is just a linked list of ziplist nodes, and by default none of the nodes are compressed. However, if you use long lists, and mostly access elements from the head and tail only, then you can enable compression. We have two configuration settings:

    list-max-ziplist-size: 8 KB per node by default
    list-compression-depth: 0, 1, 2, ... (0 by default, meaning no compression)

    A configuration change in redis.conf, list-compression-depth 1, will enable compression.

    What Is compression-depth

    Compression depth is the number of list nodes from each end of the list to leave untouched before we start compressing inner nodes.
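
    The setting can also be changed at runtime; a sketch with an illustrative list:

        import redis

        r = redis.Redis()

        # Leave the first and last node uncompressed; compress inner nodes
        r.config_set("list-compression-depth", 1)

        r.delete("timeline")
        r.rpush("timeline", *[f"event-{i}" for i in range(10000)])
        print(r.lrange("timeline", -5, -1))  # head/tail access stays fast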

    Reclaim Expired Keys Memory Faster

    When you set an expiry on a key, Redis does not expire it at that instant. Instead, it uses a randomized sampling algorithm to find keys that should be expired. Since this algorithm is random, there is a chance that some expired keys are missed, which means Redis keeps consuming memory to hold keys that have already expired. (The moment such a key is accessed, it is deleted.) If there are only a few expired keys that Redis hasn't deleted, it is fine; if they hold significant memory, you can reclaim it faster by touching them, as in the sketch below.
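
    One hedged approach, assuming you can afford the extra CPU and network traffic: periodically walk the keyspace with SCAN and access each key, since merely checking a key forces Redis to lazily delete it if its TTL has passed:

        import redis

        r = redis.Redis()

        # EXISTS counts as an access, so it triggers lazy expiration
        for key in r.scan_iter(count=1000):
            r.exists(key)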