Guide to Choosing Effective Redis Eviction Policies
Redis is renowned for its speed, largely due to its in-memory nature. However, when your dataset grows larger than the configured memory limit, Redis must remove existing data to make room for new entries. This process is governed by Eviction Policies, which are crucial for maintaining performance under memory pressure. Choosing the correct policy directly impacts cache hit rates, latency, and memory utilization.
This guide explores the various built-in Redis eviction policies, explaining how each one functions and providing practical advice on selecting the most effective strategy for different application workloads, ranging from pure caching scenarios to time-series data management.
Understanding Redis Eviction and maxmemory
Eviction policies only come into play when Redis memory usage exceeds the limit set by the maxmemory configuration directive. If maxmemory is not set (or set to 0), Redis will use all available memory, and no eviction will occur, potentially leading to system instability if the host machine runs out of RAM.
To enable eviction, you must configure maxmemory in your redis.conf file or via the CONFIG SET command:
# Set maxmemory to 2GB
CONFIG SET maxmemory 2gb
Once memory is constrained, Redis uses the configured eviction policy to decide which keys to discard when a write command requires more memory.
The Built-in Redis Eviction Policies
Redis offers several distinct policies. These are configured using the maxmemory-policy directive. The policies generally fall into two categories: those based on Least Recently Used (LRU) or Least Frequently Used (LFU), and those targeting keys with Time To Live (TTL) set.
1. Policies Without TTL Requirements
These policies operate on all keys in the database, regardless of whether they have an expiration time set.
noeviction
This is the default policy. When the memory limit is reached, Redis rejects write commands (like SET, LPUSH, etc.), returning an error to the client. Reads (GET) are still allowed. This is often suitable for mission-critical data where data loss is unacceptable, but it can lead to application errors under high write pressure.
allkeys-lru
Evicts the least recently used keys among all keys in the database until the memory usage is below the maxmemory limit. This is the standard choice for a general-purpose cache where all data items are equally cacheable.
allkeys-lfu
Evicts the least frequently used keys among all keys. LFU prioritizes keeping keys that are accessed often, even if they haven't been accessed recently. This works well when a relatively stable set of hot keys dominates traffic; be aware that formerly popular items can linger until their frequency counters decay.
allkeys-random
Evicts keys chosen randomly until the memory limit is satisfied. This is rarely recommended for production caches unless the data access pattern is completely uniform and unpredictable.
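The difference between the LRU and LFU selection criteria can be illustrated with a toy sketch. This is not how Redis actually works internally (Redis uses approximate, sampled LRU/LFU over per-key metadata rather than a full scan); it only shows which key each policy would prefer to evict:

```python
# Toy model of eviction candidate selection. Real Redis samples a handful of
# keys and evicts the best candidate among them; this exhaustive scan is
# purely illustrative.
keys = {
    "a": {"last_access": 100.0, "access_count": 50},  # old but popular
    "b": {"last_access": 200.0, "access_count": 2},   # recent but rarely used
}

def lru_victim(keys):
    # allkeys-lru: evict the key whose last access is longest ago
    return min(keys, key=lambda k: keys[k]["last_access"])

def lfu_victim(keys):
    # allkeys-lfu: evict the key with the lowest access frequency
    return min(keys, key=lambda k: keys[k]["access_count"])

print(lru_victim(keys))  # "a": oldest access loses under LRU
print(lfu_victim(keys))  # "b": lowest frequency loses under LFU
```

Note how the two policies pick opposite victims for the same data: LRU sacrifices the old-but-popular key, while LFU sacrifices the recent-but-rarely-used one.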
2. Policies Requiring TTL (Volatile Keys)
These policies only consider keys that have an explicit expiration time set (via EXPIRE, SETEX, or SET with the EX option). They ignore non-expiring keys when performing eviction. An important caveat: if no keys carry a TTL, the volatile-* policies have nothing to evict, so Redis behaves as with noeviction and rejects writes.
volatile-lru
Evicts the least recently used keys among those that have an expiration set.
volatile-lfu
Evicts the least frequently used keys among those that have an expiration set.
volatile-random
Evicts a random key among those that have an expiration set.
volatile-ttl
Evicts the key with the shortest remaining time to live (TTL) first. This is ideal for time-sensitive data, like session tokens or temporary application state, ensuring older, soon-to-expire data is cleaned up first.
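The volatile-ttl selection rule can be sketched in a few lines. Again this is a toy model, not Redis internals (Redis samples candidates rather than scanning every key), but it shows the two defining behaviors: keys without a TTL are exempt, and among the rest the shortest remaining TTL loses:

```python
# Toy model of volatile-ttl: among keys that have an expiration, evict the
# one with the shortest remaining TTL; keys without a TTL are never touched.
ttls = {
    "session:1": 120,      # seconds remaining
    "session:2": 30,
    "config:site": None,   # no expiration -> exempt from volatile-* policies
}

def volatile_ttl_victim(ttls):
    candidates = {k: t for k, t in ttls.items() if t is not None}
    if not candidates:
        # No evictable keys: Redis would reject the write, as with noeviction
        return None
    return min(candidates, key=candidates.get)

print(volatile_ttl_victim(ttls))  # "session:2"
```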
Selecting the Right Policy for Your Workload
The optimal eviction policy depends entirely on what you are caching and how your application uses the data.
| Workload Type | Recommended Policy | Rationale |
|---|---|---|
| General-purpose cache (most common) | allkeys-lru | Assumes older, unused data should be removed first, regardless of TTL. Simple and highly effective. |
| Time-sensitive data (e.g., tokens, short-lived sessions) | volatile-ttl | Ensures that keys nearing expiration are cleaned up first under memory pressure. |
| Hot-data cache (high access skew) | allkeys-lfu or volatile-lfu | Protects frequently accessed items from being evicted due to recent inactivity. |
| Mandatory data retention (no loss allowed) | noeviction | Prevents data loss by rejecting writes, requiring manual intervention or upstream application handling. |
| Mixed workloads (some data expires, some doesn't) | volatile-lru or volatile-ttl | If your non-expiring keys are essential, a volatile policy protects them by only evicting explicitly expiring keys. |
Practical Example: Implementing a Session Store
If Redis is used primarily to store user sessions, you would typically set an explicit TTL on every session key (e.g., 30 minutes) and refresh it on each request. volatile-ttl works well here: because active sessions keep having their TTL reset, the keys with the shortest remaining TTL are precisely the sessions that have been idle longest, so under memory pressure Redis evicts stale sessions first while heavily used sessions stay resident.
# 1. Set maxmemory (e.g., 10GB)
CONFIG SET maxmemory 10gb
# 2. Choose the policy targeting time-to-live data
CONFIG SET maxmemory-policy volatile-ttl
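The interaction between TTL refresh and volatile-ttl can be simulated. In this hypothetical sketch (the key names and timestamps are invented for illustration), each access resets the session's expiry, so the session with the smallest remaining TTL is exactly the least recently active one:

```python
SESSION_TTL = 1800  # 30 minutes, matching the session-store example

expiry = {}  # session key -> absolute expiry timestamp (seconds)

def touch(session, now):
    # Equivalent to running EXPIRE <session> 1800 on every request
    expiry[session] = now + SESSION_TTL

touch("session:alice", now=0)
touch("session:bob", now=600)
touch("session:alice", now=900)   # alice is active again; her TTL resets

# Remaining TTL for each session at t=1000:
now = 1000
remaining = {k: t - now for k, t in expiry.items()}
victim = min(remaining, key=remaining.get)
print(victim)  # "session:bob" - least recently active, evicted first
```

Because alice's TTL was refreshed at t=900, her remaining TTL (1700s) exceeds bob's (1400s), so volatile-ttl would evict bob's idle session first.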
Practical Example: Implementing an HTTP Cache
For caching full HTTP responses (which might not always have a TTL set), you want to keep the data that is accessed most often, even if that data has been sitting untouched for a few hours. allkeys-lru or allkeys-lfu are ideal.
# Use LFU to retain truly 'hot' objects, regardless of their creation time
CONFIG SET maxmemory-policy allkeys-lfu
Monitoring and Tuning
After selecting a policy, continuous monitoring is essential. You should track the following metrics via the INFO command:
- used_memory: How close you are to the maxmemory limit.
- evicted_keys: The rate at which Redis is discarding data. A constantly high eviction rate indicates that your maxmemory setting is too low for your workload, or your eviction policy is overly aggressive.
- Application cache hit rate: The ultimate measure of success. If your hit rate drops when memory pressure increases, your policy selection or maxmemory limit needs adjustment.
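As a rough sketch, the hit rate can be derived from the keyspace_hits and keyspace_misses fields that Redis reports in the stats section of INFO. The parser below assumes the standard "field:value" text format of INFO output:

```python
# Compute the cache hit rate from raw INFO output.
# keyspace_hits and keyspace_misses are standard fields in the stats section.
def hit_rate(info_text):
    stats = {}
    for line in info_text.splitlines():
        if ":" in line and not line.startswith("#"):
            field, _, value = line.partition(":")
            stats[field] = value
    hits = int(stats.get("keyspace_hits", 0))
    misses = int(stats.get("keyspace_misses", 0))
    total = hits + misses
    return hits / total if total else 0.0

sample = "# Stats\nkeyspace_hits:900\nkeyspace_misses:100\nevicted_keys:42"
print(hit_rate(sample))  # 0.9
```

In a real deployment you would feed this function the string returned by `redis-cli INFO stats` (or your client library's equivalent) and track the ratio over time alongside evicted_keys.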
Best Practice: When tuning maxmemory, always leave a safety buffer (e.g., 10-20% free memory) to account for replication buffering, command buffering, and potential overhead from Redis's internal data structures.
Conclusion
Redis eviction policies provide fine-grained control over how your cache behaves under memory pressure. There is no single 'best' policy; the choice between LRU, LFU, or TTL-based eviction must align precisely with your data access patterns and business requirements. By carefully selecting the appropriate policy—such as allkeys-lru for general caching or volatile-ttl for session stores—you can maximize cache efficiency and ensure robust performance for your high-speed data operations.