When Redis memory usage keeps growing, the first job is to decide whether the problem is oversized keys, missing TTLs, queue-like misuse, or a normal workload that simply has no memory budget. Teams lose time when they jump straight to server tuning even though the real issue is often data that should already have expired or structures that quietly grew past the original plan.
The short version: start with server memory metrics, inspect a few real keys, and separate raw data growth from data that should already have expired before you change eviction or server settings.
Start by separating "more data than planned" from "data that should already be gone"
Those two incidents often look identical on a dashboard.
In one case, Redis is doing exactly what the application asked and the workload simply outgrew the original plan. In the other case, keys that should have expired, rotated, or been trimmed are quietly piling up.
That distinction matters because one fix belongs in capacity planning and data modeling, while the other belongs in TTL and retention logic.
Begin with server-level memory, not assumptions
Start with:
redis-cli INFO memory
redis-cli MEMORY STATS
INFO memory gives the fastest operational snapshot. MEMORY STATS adds allocator and overhead detail that helps you tell raw data growth from overhead and fragmentation effects.
This is the best first step because many teams jump straight to app code without confirming what Redis itself thinks is happening.
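The INFO output is plain `field:value` text, so it is easy to capture and compare offline. The sketch below parses a snapshot (the sample numbers are invented) and computes the comparison that matters most here: used_memory vs used_memory_rss, which separates data growth from allocator overhead and fragmentation.

```python
# Parse a captured "redis-cli INFO memory" snapshot. The sample values
# below are made up for illustration, not from a real server.
SAMPLE_INFO = """\
used_memory:1073741824
used_memory_rss:1610612736
used_memory_peak:1181116006
maxmemory:2147483648
mem_fragmentation_ratio:1.50
"""

def parse_info(text: str) -> dict:
    """Turn 'key:value' lines into a dict, coercing numeric values."""
    fields = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        try:
            fields[key] = float(value) if "." in value else int(value)
        except ValueError:
            fields[key] = value
    return fields

info = parse_info(SAMPLE_INFO)
# RSS much larger than used_memory points at fragmentation or overhead,
# not raw data growth.
ratio = info["used_memory_rss"] / info["used_memory"]
print(f"fragmentation ratio ~ {ratio:.2f}")
```

A ratio near 1.0 suggests memory is mostly live data; a ratio well above ~1.5 suggests fragmentation or overhead deserves attention before data-model changes.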
Check whether a few keys are doing most of the damage
After the server view, inspect actual keys.
Use:
redis-cli --bigkeys
redis-cli MEMORY USAGE your:key
--bigkeys helps surface likely offenders. MEMORY USAGE gives you a direct estimate for a specific key.
If one or two suspicious keys dominate usage, move next to Redis Big Keys, because at that point the issue is often data-shape design rather than generic Redis tuning.
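A quick way to make that call is to total the sampled sizes and see what share the largest few keys hold. The byte counts below are invented stand-ins for real MEMORY USAGE results; in practice you would collect them with `redis-cli MEMORY USAGE <key>` over a sample of keys.

```python
# Simulated per-key byte estimates, standing in for MEMORY USAGE output.
key_bytes = {
    "session:abc": 2_048,
    "cache:home": 4_096,
    "feed:user:1": 512_000_000,   # one oversized structure
    "ratelimit:ip:10.0.0.1": 128,
}

total = sum(key_bytes.values())
top = sorted(key_bytes.items(), key=lambda kv: kv[1], reverse=True)[:3]
top_share = sum(size for _, size in top) / total
print(f"top 3 keys hold {top_share:.0%} of sampled memory")
if top_share > 0.8:
    print("likely a big-key problem, not generic tuning")
```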
Check whether the keys should have expired already
Memory incidents often turn out to be TTL incidents in disguise.
For cache, session, rate-limit, or temporary keys, sample real keys with:
redis-cli TTL your:key
If many keys return -1, the issue may be missing expiration rather than raw workload growth.
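When sampling, keep the TTL return codes straight: a positive number is seconds until expiry, -1 means the key exists but has no expiry, and -2 means the key does not exist. The sketch below classifies a hand-written sample the way you would classify real `redis-cli TTL` output.

```python
# Hard-coded TTL samples standing in for real `redis-cli TTL <key>`
# results: positive = seconds remaining, -1 = exists with no expiry.
samples = {"cache:a": 3600, "cache:b": -1, "cache:c": -1, "cache:d": 120}

no_ttl = [k for k, ttl in samples.items() if ttl == -1]
share = len(no_ttl) / len(samples)
print(f"{share:.0%} of sampled cache keys have no TTL: {no_ttl}")
```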
Look at structure shape, not only key count
Redis memory problems often come from shape rather than pure volume.
Common patterns include:
- a hash that keeps accumulating fields
- a list used like an unbounded queue
- a sorted set storing historical data forever
- repeated large values under short-lived feature names
In these cases the problem is not "Redis is leaking"; it is "the application keeps storing more than intended."
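The fix for an unbounded list is usually a retention rule at write time. With a real Redis list you would pair LPUSH with LTRIM (for example `LTRIM key 0 9999`); the sketch below models that rule with a plain Python list so the capping behavior is visible without a server.

```python
# Toy model of a bounded queue: prepend like LPUSH, then trim like
# LTRIM 0 MAX_LEN-1 so the structure cannot grow without limit.
MAX_LEN = 1_000

def push_bounded(queue: list, item) -> None:
    queue.insert(0, item)   # LPUSH equivalent
    del queue[MAX_LEN:]     # LTRIM equivalent: keep newest MAX_LEN items

q: list = []
for i in range(1_500):
    push_bounded(q, i)
print(len(q))  # capped at MAX_LEN, oldest items discarded
```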
Confirm the max-memory and eviction story actually matches the app’s expectations
Ask:
- is maxmemory configured?
- is the eviction policy intentional?
- does the application expect eviction, TTL expiration, or both?
If the application assumes Redis will automatically drop cache keys but the server is configured differently, the incident is easy to misread.
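To check the live settings, run `redis-cli CONFIG GET maxmemory` and `redis-cli CONFIG GET maxmemory-policy` and compare them to what the app assumes. The redis.conf fragment below shows one intentional pairing; the 2gb budget and allkeys-lru policy are example values, not recommendations.

```
# example only: choose a budget and eviction policy deliberately
maxmemory 2gb
maxmemory-policy allkeys-lru
```

A `maxmemory` of 0 (the default) means no limit, so an app expecting automatic cache eviction would instead see memory grow until the host runs out.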
Common causes of sudden growth
1. TTLs disappeared
Overwrite paths such as a plain SET remove a key's expiration unless the app restores it intentionally (or passes KEEPTTL, available since Redis 6).
2. One feature created unbounded data
Feeds, leaderboards, session indexes, queues, and activity buffers commonly grow without a cleanup rule.
3. A few large keys dominate total memory
This is why key sampling matters more than total key count alone.
4. Consumers fell behind
When Redis is used like a buffer, backlog growth is often expected rather than mysterious.
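The first cause above is worth internalizing: a plain SET discards the old TTL, while `SET ... KEEPTTL` (Redis 6+) or a re-applied EXPIRE preserves it. The toy store below models that rule; it is not a Redis client, just the behavior in miniature.

```python
# Toy model of Redis TTL-on-overwrite behavior (not a real client):
# plain SET drops a key's expiry; SET ... KEEPTTL preserves it.
store: dict = {}  # key -> (value, ttl_seconds or None)

def set_key(key: str, value: str, keep_ttl: bool = False) -> None:
    old_ttl = store.get(key, (None, None))[1]
    store[key] = (value, old_ttl if keep_ttl else None)

store["session:1"] = ("v1", 3600)          # imagine SET + EXPIRE 3600
set_key("session:1", "v2", keep_ttl=True)  # overwrite, expiry kept
ttl_after_keep = store["session:1"][1]
set_key("session:1", "v3")                 # plain overwrite, expiry gone
print(ttl_after_keep, store["session:1"][1])
```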
A practical debugging order
- inspect INFO memory
- inspect MEMORY STATS
- run --bigkeys and sample suspicious keys with MEMORY USAGE
- verify TTL on cache-like keys
- confirm whether the data model has a retention rule
That order usually gets you to the real class of problem faster than tuning memory settings first.
A useful first-pass branch
Use this shortcut when memory keeps rising:
- --bigkeys shows a few outliers: think big keys first
- key sizes are moderate but TTL is missing: think expiration drift first
- queue-like structures keep growing: think backlog or retention first
- memory usage matches valid data growth: think capacity budget first
That branch often narrows the incident faster than debating eviction settings.
FAQ
Q. Is high Redis memory usage always a leak?
No. It is often missing expiration, oversized keys, or an application data-model problem.
Q. What is the fastest command to inspect one key?
MEMORY USAGE key is the fastest direct check for a known key.
Q. What should I check if memory keeps growing overnight?
Check TTL behavior, queue-like backlog growth, and whether one feature keeps accumulating data with no retention rule.
Q. When should I suspect big keys first?
When a few sampled keys are already much larger than expected or --bigkeys surfaces obvious outliers.
Read Next
- If one or two oversized keys dominate memory, continue with Redis Big Keys.
- If the bigger symptom is latency or pauses rather than pure growth, continue with Redis Latency Spikes.
- If save and rewrite timing also matters, compare with Redis Persistence Latency.
Sources:
- https://redis.io/docs/latest/operate/oss_and_stack/management/optimization/memory-optimization/
- https://redis.io/docs/latest/commands/memory-usage/
- https://redis.io/docs/latest/commands/memory-stats/