Redis Memory Usage High: What to Check First


When Redis memory usage keeps growing, the first job is to decide whether the problem is oversized keys, missing TTLs, queue-like misuse, or a normal workload that simply has no memory budget. Teams lose time when they jump straight to server tuning even though the real issue is often data that should already have expired or structures that quietly grew past the original plan.

The short version: start with server memory metrics, inspect a few real keys, and separate raw data growth from data that should already have expired before you change eviction or server settings.


Start by separating “more data than planned” from “data that should already be gone”

These two failure modes often look identical on a dashboard.

In one case, Redis is doing exactly what the application asked and the workload simply outgrew the original plan. In the other case, keys that should have expired, rotated, or been trimmed are quietly piling up.

That distinction matters because one fix belongs in capacity planning and data modeling, while the other belongs in TTL and retention logic.

Begin with server-level memory, not assumptions

Start with:

redis-cli INFO memory
redis-cli MEMORY STATS

INFO memory gives the fastest operational snapshot. MEMORY STATS adds allocator and overhead detail that helps you tell raw data growth from overhead and fragmentation effects.

This is the best first step because many teams jump straight to app code without confirming what Redis itself thinks is happening.
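As a sketch of what to look for, the snippet below parses the `key:value` lines that INFO memory returns and pulls out the fields that matter for a first pass. The sample output here is illustrative, not from a real server; field names match the real INFO memory section.

```python
# Illustrative INFO memory output; values are hypothetical.
sample_info = """\
used_memory:1073741824
used_memory_human:1.00G
used_memory_rss:1288490188
maxmemory:2147483648
mem_fragmentation_ratio:1.20
"""

def parse_info_memory(text: str) -> dict:
    """Turn `key:value` lines from INFO memory into a dict."""
    fields = {}
    for line in text.splitlines():
        if ":" in line and not line.startswith("#"):
            key, _, value = line.partition(":")
            fields[key] = value
    return fields

info = parse_info_memory(sample_info)
used = int(info["used_memory"])
rss = int(info["used_memory_rss"])
frag = float(info["mem_fragmentation_ratio"])

# A ratio well above ~1.5 points at fragmentation/overhead rather than
# raw data; a ratio near 1.0 with high used_memory means the data itself grew.
print(f"used={used} rss={rss} fragmentation={frag}")
```

A fragmentation ratio near 1.0 with high used_memory is a data problem; a much higher ratio is an allocator/overhead problem, and MEMORY STATS will break that down further.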

Check whether a few keys are doing most of the damage

After the server view, inspect actual keys.

Use:

redis-cli --bigkeys
redis-cli MEMORY USAGE your:key

--bigkeys helps surface likely offenders. MEMORY USAGE gives you a direct estimate for a specific key.

If one or two suspicious keys dominate usage, move next to Redis Big Keys, because at that point the issue is often data-shape design rather than generic Redis tuning.
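One way to make “dominate” concrete: collect MEMORY USAGE estimates for a sample of keys and check what share of the sampled bytes the largest few hold. The key names and sizes below are hypothetical.

```python
# Hypothetical (key, bytes) samples gathered with `redis-cli MEMORY USAGE <key>`.
samples = [
    ("session:abc", 2_048),
    ("feed:global", 734_003_200),      # suspicious outlier
    ("cache:page:/home", 18_432),
    ("leaderboard:2024", 104_857_600),
]

def top_share(samples, n=2):
    """Fraction of sampled memory held by the n largest keys."""
    sizes = sorted((size for _, size in samples), reverse=True)
    total = sum(sizes)
    return sum(sizes[:n]) / total if total else 0.0

share = top_share(samples)
# If a couple of keys hold most of the sampled bytes, treat it as a
# big-key / data-shape problem before touching server tuning.
print(f"top-2 keys hold {share:.0%} of sampled memory")
```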

Check whether the keys should have expired already

Memory incidents often turn out to be TTL incidents in disguise.

For cache, session, rate-limit, or temporary keys, sample real values with:

redis-cli TTL your:key

If many keys return -1 (the key exists but has no expiration set), the issue may be missing expiration rather than raw workload growth.
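A quick way to quantify that: sample TTL values across cache-like keys and compute what fraction have no expiration. TTL returns -1 for a key with no expiration and -2 for a missing key; the key names and values below are hypothetical.

```python
# Hypothetical TTL samples collected with `redis-cli TTL <key>`.
ttl_samples = {
    "session:abc": 3600,
    "cache:page:/home": -1,    # exists, but never expires
    "cache:page:/about": -1,
    "rate:ip:10.0.0.1": 42,
}

def missing_ttl_fraction(ttls: dict) -> float:
    """Fraction of existing keys that have no expiration set."""
    existing = [v for v in ttls.values() if v != -2]  # -2 = key missing
    if not existing:
        return 0.0
    return sum(1 for v in existing if v == -1) / len(existing)

# A high fraction on cache-like keys points at expiration drift,
# not raw workload growth.
print(f"{missing_ttl_fraction(ttl_samples):.0%} of sampled keys lack a TTL")
```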

Look at structure shape, not only key count

Redis memory problems often come from shape rather than pure volume.

Common patterns include:

  • a hash that keeps accumulating fields
  • a list used like an unbounded queue
  • a sorted set storing historical data forever
  • repeated large values under short-lived feature names

In these cases the problem is not “Redis is leaking”; it is “the application keeps storing more than intended.”
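The usual fix is to add a retention rule to the write path itself. With a Redis list, that means pairing LPUSH with LTRIM in the same operation (for example, LPUSH a key and immediately LTRIM it to the last 1000 entries); the sketch below uses a plain Python list as a stand-in so the bounded-write pattern is visible without a server. The key name in the comments is hypothetical.

```python
MAX_LEN = 1000

def push_bounded(buffer: list, item) -> None:
    """LPUSH + LTRIM pattern: newest first, capped length."""
    buffer.insert(0, item)   # LPUSH events:recent <item>
    del buffer[MAX_LEN:]     # LTRIM events:recent 0 999

events = []
for i in range(5000):
    push_bounded(events, i)

# The buffer stays at MAX_LEN no matter how many writes arrive.
print(len(events))  # 1000
```

The same idea applies to sorted sets (ZREMRANGEBYSCORE or ZREMRANGEBYRANK on write) and hashes (delete stale fields when you add new ones).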

Confirm the max-memory and eviction story actually matches the app’s expectations

Ask:

  • is maxmemory configured?
  • is the eviction policy intentional?
  • does the application expect eviction, TTL expiration, or both?

If the application assumes Redis will automatically drop cache keys but the server is configured differently, the incident is easy to misread.
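Those three questions can be answered mechanically. The sketch below checks hypothetical values of the kind `redis-cli CONFIG GET maxmemory` and `CONFIG GET maxmemory-policy` would return against what the application assumes; the warning strings and the `app_expects_eviction` flag are illustrative.

```python
# Hypothetical config values; `maxmemory` of "0" means no limit.
config = {
    "maxmemory": "0",
    "maxmemory-policy": "noeviction",
}

def check_eviction_story(config: dict, app_expects_eviction: bool) -> list:
    """Return human-readable warnings about config/expectation mismatches."""
    warnings = []
    if config.get("maxmemory", "0") == "0":
        warnings.append("maxmemory is unset: Redis grows until the host pushes back")
    if app_expects_eviction and config.get("maxmemory-policy") == "noeviction":
        warnings.append("app expects eviction but policy is noeviction")
    return warnings

for w in check_eviction_story(config, app_expects_eviction=True):
    print("WARNING:", w)
```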

Common causes of sudden growth

1. TTLs disappeared

Overwrite paths such as a plain SET clear any existing expiration unless the application passes KEEPTTL (Redis 6.0+) or re-applies EXPIRE after the write.

2. One feature created unbounded data

Feeds, leaderboards, session indexes, queues, and activity buffers commonly grow without a cleanup rule.

3. A few large keys dominate total memory

This is why key sampling matters more than total key count alone.

4. Consumers fell behind

When Redis is used like a buffer, backlog growth is often expected rather than mysterious.

A practical debugging order

  1. inspect INFO memory
  2. inspect MEMORY STATS
  3. run --bigkeys and sample suspicious keys with MEMORY USAGE
  4. verify TTL on cache-like keys
  5. confirm whether the data model has a retention rule

That order usually gets you to the real class of problem faster than tuning memory settings first.

A useful first-pass branch

Use this shortcut when memory keeps rising:

  • --bigkeys shows a few outliers: think big keys first
  • key sizes are moderate but TTL is missing: think expiration drift first
  • queue-like structures keep growing: think backlog or retention first
  • memory usage matches valid data growth: think capacity budget first

That branch often narrows the incident faster than debating eviction settings.
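The branch above is simple enough to write down as a function. The boolean inputs are hypothetical summary signals from the earlier steps (bigkeys outliers, TTL sampling, structure checks), checked in priority order.

```python
def first_pass_diagnosis(has_bigkey_outliers: bool,
                         ttl_missing: bool,
                         queue_like_growth: bool) -> str:
    """First-pass triage branch: return the problem class to investigate first."""
    if has_bigkey_outliers:
        return "big keys"
    if ttl_missing:
        return "expiration drift"
    if queue_like_growth:
        return "backlog / retention"
    # Memory tracks valid data growth: this is a capacity question.
    return "capacity budget"

print(first_pass_diagnosis(False, True, False))  # expiration drift
```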

FAQ

Q. Is high Redis memory usage always a leak?

No. It is often missing expiration, oversized keys, or an application data-model problem.

Q. What is the fastest command to inspect one key?

MEMORY USAGE key is the fastest direct check for a known key.

Q. What should I check if memory keeps growing overnight?

Check TTL behavior, queue-like backlog growth, and whether one feature keeps accumulating data with no retention rule.

Q. When should I suspect big keys first?

When a few sampled keys are already much larger than expected or --bigkeys surfaces obvious outliers.
