When Redis returns the error OOM command not allowed when used memory > 'maxmemory', the most important thing to understand is that this usually signals policy and pressure, not process death. The message is alarming, but many incidents are really about how Redis is allowed to react under pressure rather than whether the host has completely run out of RAM.
The short version: first confirm maxmemory, then confirm maxmemory-policy, and then inspect whether the rejected command required new memory that the current policy could not free.
What this OOM message usually means
The maxmemory setting is not a hard guarantee that usage can never briefly cross the line. Redis can exceed the threshold transiently and then try to apply eviction according to the active policy.
The OOM-style error usually appears when all of these are true:
- memory pressure exists
- the command needs additional memory
- the current policy cannot or will not free suitable keys for that write path
That means the problem is often closer to eviction behavior and workload shape than to "the server ran out of host RAM."
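The three conditions above can be sketched as a small decision function. This is a hypothetical model of the rejection logic, with illustrative names and numbers, not the real server internals:

```python
# Hypothetical model of when Redis rejects a write with the OOM error.
# Names and thresholds here are illustrative, not actual Redis internals.

def would_reject(used_memory: int, maxmemory: int,
                 command_allocates: bool, evictable_bytes: int) -> bool:
    """Return True when a memory-growing command would be rejected."""
    if maxmemory == 0:          # maxmemory 0 means "no limit" in Redis
        return False
    if not command_allocates:   # reads and non-growing ops usually still work
        return False
    over = used_memory >= maxmemory
    # The error fires when pressure exists and the policy cannot free enough.
    return over and evictable_bytes == 0

# Pressure, allocating write, nothing evictable -> rejection
print(would_reject(1_100_000, 1_000_000, True, 0))        # True
# Same pressure, but eviction can free keys -> the write can proceed
print(would_reject(1_100_000, 1_000_000, True, 200_000))  # False
```

The point of the sketch is that all three inputs matter at once; changing any one of them (lower usage, non-allocating command, or evictable keys) removes the error.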
Start with maxmemory and policy
Your first checks should be:
- what is maxmemory?
- what is maxmemory-policy?
- is the current workload expected to rely on eviction?
If the policy is noeviction, write-like commands that require more memory will fail instead of evicting keys.
If the policy is one of the volatile-* variants, only keys with TTL are eligible for eviction. If the application assumed all cache-like keys could disappear, the incident can be surprising even though Redis is behaving exactly as configured.
Why policy changes the outcome
This error is much easier to reason about once you connect it to policy behavior.
Typical patterns:
- noeviction: memory-growing writes fail
- volatile-*: only TTL-bearing keys are candidates
- allkeys-*: all keys are eligible for eviction
If the application depends on old cache entries disappearing automatically but the configured policy does not allow that in practice, OOM errors become easy to trigger.
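The candidate-selection rules above can be made concrete with a short sketch. The key names and TTL values below are invented for illustration:

```python
# Sketch of which keys each policy family treats as eviction candidates.
# `keys` maps key name -> TTL in seconds (None = no expiration set).

def eviction_candidates(policy: str, keys: dict) -> list:
    if policy == "noeviction":
        return []                                   # nothing is ever evicted
    if policy.startswith("volatile-"):
        return [k for k, ttl in keys.items() if ttl is not None]
    if policy.startswith("allkeys-"):
        return list(keys)                           # every key is fair game
    raise ValueError(f"unknown policy: {policy}")

keys = {"session:1": 3600, "cache:page": None, "user:42": None}
print(eviction_candidates("noeviction", keys))    # []
print(eviction_candidates("volatile-lru", keys))  # ['session:1']
print(eviction_candidates("allkeys-lru", keys))   # all three keys
```

Notice the volatile-* case: if most cache keys are written without TTL, the candidate list is nearly empty and the server behaves much like noeviction under pressure.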
For the broader policy view, the companion Redis Eviction Policy Guide is the next read.
Common root causes
1. Memory pressure plus noeviction
The server reached the configured ceiling and Redis has no eviction path available for the incoming write.
2. The wrong keys are eligible for eviction
The application assumed Redis could free certain data, but the policy only permits eviction from a narrower key set.
3. Big keys or unexpectedly large writes
A single operation may need much more memory than expected, even when the system looked stable a moment earlier.
4. TTL assumptions are wrong
Keys that the team expected to expire may still exist, which means memory stays full and eviction opportunities stay limited.
5. Memory pressure is real, but host RAM is not the main story
The process may still be inside host limits while Redis is already enforcing its own configured ceiling.
A practical debugging order
- inspect current memory usage
- inspect maxmemory
- inspect maxmemory-policy
- inspect whether the failing command allocates or grows data
- confirm whether expected cache keys still have TTL
- check whether big keys are making the failure sharper than expected
That order usually tells you whether you have a capacity problem, a policy mismatch, or an application write-pattern problem.
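The first three steps can be grounded in the text that redis-cli INFO memory prints, which is a series of field:value lines. Here is a minimal parser; the sample payload below is illustrative, not captured from a live server:

```python
# Sketch: pull "used_memory", "maxmemory", and "maxmemory_policy" out of
# INFO memory output and check whether usage has reached the ceiling.

def parse_info(info_text: str) -> dict:
    """Parse INFO-style `field:value` lines into a dict, skipping comments."""
    fields = {}
    for line in info_text.splitlines():
        if ":" in line and not line.startswith("#"):
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

sample = """# Memory
used_memory:1048576
maxmemory:1048576
maxmemory_policy:noeviction
"""

info = parse_info(sample)
used, limit = int(info["used_memory"]), int(info["maxmemory"])
at_ceiling = limit > 0 and used >= limit
print(info["maxmemory_policy"], at_ceiling)
```

A used_memory at or above a nonzero maxmemory, combined with a policy that cannot free keys for the failing write, is the classic setup for this error.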
Quick commands to ground the investigation
```
redis-cli INFO memory
redis-cli CONFIG GET maxmemory
redis-cli CONFIG GET maxmemory-policy
redis-cli --bigkeys
```
Use these to confirm whether Redis hit the configured ceiling, whether eviction is possible under the current policy, and whether unusually large keys are amplifying memory pressure.
A quick branch that helps
When this error appears, start with this split:
- policy is noeviction: think expected write failure under pressure first
- policy is volatile-* but TTL is missing: think eviction candidate mismatch first
- policy allows eviction but large keys dominate: think workload shape first
- host memory looks fine but Redis still rejects writes: think Redis ceiling, not host crash
That branch usually makes the incident feel much less mysterious.
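The branch above fits in a tiny triage helper. The inputs are things you can read from CONFIG GET, INFO memory, and --bigkeys; the function name and labels are illustrative:

```python
# Illustrative triage function for the decision branch in this section.
# Inputs map to: configured policy, whether cache keys actually carry TTL,
# whether --bigkeys shows a few keys dominating, and host-level memory health.

def triage(policy: str, ttl_coverage_ok: bool,
           bigkeys_dominate: bool, host_memory_ok: bool) -> str:
    if policy == "noeviction":
        return "expected write failure under pressure"
    if policy.startswith("volatile-") and not ttl_coverage_ok:
        return "eviction candidate mismatch"
    if bigkeys_dominate:
        return "workload shape"
    if host_memory_ok:
        return "Redis ceiling, not host crash"
    return "general capacity pressure"

print(triage("noeviction", True, False, True))
print(triage("volatile-lru", False, False, True))
```

The ordering matters: policy questions come before workload-shape questions, because a noeviction or TTL mismatch explains the error on its own regardless of key sizes.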
Symptom shortcuts
- Start here if Redis returns OOM command not allowed while the app is still sending writes.
- If keys are disappearing instead of writes failing, compare the incident with eviction policy first.
- If memory remains high even without this specific error, compare with Redis Memory Usage High.
FAQ
Q. Does this error mean Redis crashed?
No. It often means Redis rejected a memory-growing command under the current policy.
Q. Is increasing memory always the fix?
No. Sometimes the real fix is changing policy, shrinking data shape, restoring TTL behavior, or reducing write bursts.
Q. Why do some writes fail while reads still work?
Because the error is usually triggered by commands that need more memory, not by every command.
Q. What should I check first?
Check maxmemory, maxmemory-policy, and whether the workload depends on eviction that is not actually available.
Read Next
- If the real issue is policy behavior, continue with Redis Eviction Policy Guide.
- If the real issue is missing expiration behavior, continue with Redis Keys Not Expiring.
- If the real issue is general memory pressure, continue with Redis Memory Usage High.
Sources:
- https://redis.io/docs/latest/develop/reference/eviction/
- https://redis.io/faq/doc/1jbxid5qq7/is-maxmemory-the-maximum-value-of-used-memory