When RabbitMQ messages pile up in unacked, the broker is usually telling you something useful: the messages were delivered to consumers, but the consumers have not acknowledged them yet.
The short version: first confirm acknowledgement mode, then compare handler latency and prefetch before blaming the broker or the queue itself.
This guide explains how to narrow the incident quickly instead of treating unacked as a vague RabbitMQ failure.
Quick Answer
If messages are piling up in unacked, RabbitMQ usually already delivered them and is still waiting for consumer acknowledgement.
That means the fastest path is to check acknowledgement mode, prefetch size, handler latency, and whether the ack code path is actually reached. In most cases, the problem is around acknowledgement completion, not publishing.
What to Check First
Check these in order:
- compare `messages_ready` versus `messages_unacknowledged`
- confirm whether acknowledgements are manual
- find where `ack` happens in consumer code
- review prefetch and the in-flight consumer window
- compare handler latency with downstream dependency timing
If all five look normal, then connection churn or requeue loops become stronger suspects.
What unacked actually means
RabbitMQ message states matter here:
- ready for delivery
- delivered but not yet acknowledged
unacked is the second state. The broker already handed the messages to a consumer and is still waiting for acknowledgement.
That means this is usually a consumer-side or workflow-side problem, not a publishing problem.
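The two states can be sketched as a toy model (illustrative only, not broker code): delivery moves a message from ready to unacked, and only an ack removes it.

```python
from collections import deque

class ToyQueue:
    """Toy model of RabbitMQ's ready/unacked accounting (not broker code)."""
    def __init__(self):
        self.ready = deque()   # messages waiting for delivery
        self.unacked = {}      # delivery_tag -> message, awaiting ack
        self._next_tag = 0

    def publish(self, msg):
        self.ready.append(msg)

    def deliver(self):
        # Delivery moves a message from ready to unacked; it is NOT gone.
        msg = self.ready.popleft()
        self._next_tag += 1
        self.unacked[self._next_tag] = msg
        return self._next_tag, msg

    def ack(self, tag):
        # Only an ack actually removes the message from the broker's view.
        del self.unacked[tag]

q = ToyQueue()
q.publish("job-1")
tag, _ = q.deliver()
print(len(q.ready), len(q.unacked))  # 0 1 -> delivered but unacked
q.ack(tag)
print(len(q.ready), len(q.unacked))  # 0 0
```

If the consumer never calls `ack`, the message simply stays in the second dictionary, which is exactly what a growing unacked count looks like from the outside.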
Start with acknowledgement mode
RabbitMQ supports automatic and manual acknowledgement modes.
With manual acknowledgements, the consumer must explicitly acknowledge deliveries. If it does not, the broker keeps those messages in the unacknowledged state.
That makes the first questions very simple:
- are acknowledgements manual?
- where does `ack` happen in code?
- can that code path fail or never execute?
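A minimal sketch of the third question, using a stub channel in place of a real client library (`RecordingChannel`, `process`, and both handlers are hypothetical names): an exception before the ack call leaves the delivery unacknowledged.

```python
class RecordingChannel:
    """Stub standing in for a real AMQP channel; only records acks."""
    def __init__(self):
        self.acked = []
    def basic_ack(self, delivery_tag):
        self.acked.append(delivery_tag)

def process(body):
    # Hypothetical handler work; raises on a poison payload.
    if body == "bad":
        raise ValueError("downstream failure")

def fragile_handler(ch, delivery_tag, body):
    process(body)                # if this raises...
    ch.basic_ack(delivery_tag)   # ...the ack never runs: message stays unacked

def safe_handler(ch, delivery_tag, body):
    try:
        process(body)
        ch.basic_ack(delivery_tag)
    except Exception:
        pass  # real code would nack/requeue or dead-letter, still resolving the delivery

ch = RecordingChannel()
try:
    fragile_handler(ch, 1, "bad")
except ValueError:
    pass
print(ch.acked)  # [] -> the broker still counts delivery 1 as unacked
safe_handler(ch, 2, "ok")
print(ch.acked)  # [2]
```

The fragile shape is common inside retry wrappers and timeout decorators: the handler "works" in tests, but any raised exception skips the ack line.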
Why prefetch matters immediately
RabbitMQ documents prefetch as the limit on unacknowledged deliveries allowed in flight.
That means a high prefetch can produce a large unacked count even when the system is technically still working.
Typical patterns include:
- prefetch too high for consumer speed
- long-running handlers holding deliveries too long
- multiple consumers each taking a large in-flight window
- weak QoS defaults that allow too much outstanding work
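The in-flight window is simple arithmetic. A back-of-envelope sketch (all numbers below are assumptions for illustration, not defaults or measurements):

```python
# Assumed workload numbers for illustration:
consumers = 4
prefetch = 500          # per-consumer QoS limit
handler_latency = 0.2   # seconds per message, single-threaded handler

# Worst-case unacked the broker may show even when everything is healthy:
max_unacked = consumers * prefetch
print(max_unacked)  # 2000

# Each single-threaded consumer drains at most 1/latency messages per second,
# so prefetch far above what a consumer can chew through just parks work:
drain_rate = consumers * (1 / handler_latency)     # total msgs/sec
time_to_drain_window = max_unacked / drain_rate    # seconds to clear in-flight work
print(drain_rate, time_to_drain_window)  # 20.0 100.0
```

With these numbers, an unacked count of 2,000 is "normal" by configuration, yet any single message may wait up to 100 seconds behind its consumer's backlog. That is why high prefetch and slow handlers together look like a stuck queue.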
A useful mental model for unacked growth
unacked is often where hidden backlog goes.
If handlers are slow or acknowledgements are delayed, the queue may not look enormous in `messages_ready` because work has already been handed out. That makes the system look healthier than it is.
So a growing unacked count often means:
- consumers are receiving work
- the work is not finishing quickly enough
- the acknowledgement loop is the real pressure point
unacked versus ready
| Queue state pattern | What it usually means | Better next step |
|---|---|---|
| High ready, low unacked | Delivery is not keeping up | Check consumers and routing first |
| Low ready, high unacked | Work is delivered but not finishing | Check ack path, handler latency, and prefetch |
| High ready, high unacked | End-to-end backlog is growing | Check both consumer throughput and queue pressure |
| Normal ready, rising unacked after deploy | Consumer behavior changed | Review code path, retry loop, and QoS settings |
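The table above can be folded into a rough triage helper. The thresholds here are illustrative placeholders, not RabbitMQ defaults; tune them to your own baseline.

```python
def diagnose(ready, unacked, ready_baseline=1000, unacked_baseline=100):
    """Map ready/unacked patterns to a first check.
    Baselines are illustrative; calibrate to your normal traffic."""
    high_ready = ready > ready_baseline
    high_unacked = unacked > unacked_baseline
    if high_ready and high_unacked:
        return "end-to-end backlog: check consumer throughput and queue pressure"
    if high_ready:
        return "delivery not keeping up: check consumers and routing first"
    if high_unacked:
        return "work not finishing: check ack path, handler latency, and prefetch"
    return "looks normal: watch for trend changes after deploys"

print(diagnose(ready=50, unacked=5000))
```

The point is less the code than the habit: always read the two counters as a pair before deciding which side of delivery to investigate.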
Common root causes
1. The consumer is slow, not broken
The handler works, but processing takes longer than expected.
2. Ack code is missing or never reached
Exceptions, timeouts, or retry wrappers can skip the ack path.
3. Prefetch is too high
Too many messages are delivered before the consumer finishes earlier work.
4. Consumer connections are unstable
Connection or channel churn can create confusing delivery and requeue patterns.
Do not confuse unacked with ready
Queue length limits are based on ready messages. Unacknowledged messages are a different state.
That means you can have:
- moderate ready count
- very high `unacked`
- overloaded consumers
and still think the queue “looks fine” unless you inspect both states.
A practical debugging order
1. Inspect ready versus unacked
This tells you whether the problem starts before delivery or after delivery.
2. Check consumer logs for missing ack paths
The broker often looks wrong when the ack path in application code is the real failure.
3. Review prefetch
High prefetch can make slow consumers hold far more work than they should.
4. Check handler latency and downstream dependencies
Slow downstream systems often turn into unacked buildup before anything else looks broken.
5. Verify whether failures requeue endlessly
This can turn one bad path into persistent churn.
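Step 5 matters because an uncapped requeue loop keeps the same delivery cycling forever. A toy simulation of a poison message being nacked with requeue and no retry cap (not client code):

```python
from collections import deque

# One good message and one "poison" message that always fails processing.
queue = deque(["good", "poison"])
redeliveries = {"good": 0, "poison": 0}

for _ in range(10):              # bounded here; unbounded in production
    if not queue:
        break
    msg = queue.popleft()
    if msg == "poison":
        redeliveries[msg] += 1
        queue.append(msg)        # requeue=True puts it straight back
    # "good" is acked (dropped) on its first pass

print(redeliveries)  # {'good': 0, 'poison': 9}
```

One bad payload plus unconditional requeue produces constant delivery and unacked churn that looks like broker trouble. A retry counter or dead-letter exchange breaks the loop.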
Quick commands to ground the investigation
```shell
rabbitmqctl list_queues name messages_ready messages_unacknowledged consumers
rabbitmqctl list_channels connection name messages_unacknowledged
rabbitmqctl list_consumers
```
Use these to confirm whether deliveries are stuck with active consumers, inactive consumers, or channels holding too many in-flight messages.
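If you capture that output, a few lines of scripting can flag the suspicious queues. The tab-separated, headerless format assumed below is an illustration; check what your rabbitmqctl version actually emits before relying on it.

```python
def flag_stuck_queues(output):
    """Flag queues whose unacked count dwarfs ready.
    Assumed input format: tab-separated `name ready unacked consumers`
    lines, with any rabbitmqctl header already stripped."""
    flagged = []
    for line in output.splitlines():
        name, ready, unacked, consumers = line.split("\t")
        # Active consumers + unacked far above ready = deliveries stuck in flight.
        if int(consumers) > 0 and int(unacked) > 10 * max(int(ready), 1):
            flagged.append(name)
    return flagged

sample = "orders\t12\t4980\t3\nemails\t7\t2\t1"
print(flag_stuck_queues(sample))  # ['orders']
```

Here `orders` has consumers attached but nearly all of its backlog is sitting in flight, which points the investigation at the ack path and prefetch rather than at publishing.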
A practical mindset for unacked incidents
The real question is usually not “why did RabbitMQ stop moving messages?” but “what is preventing acknowledgement from completing?”
That shift helps narrow the investigation much faster. In practice, unacked tends to grow when:
- handler work is slower than expected
- the acknowledgement path is skipped
- too much work is allowed in flight at once
- failures keep requeueing the same work
If you frame the incident around acknowledgement completion rather than broker failure, the right logs and metrics usually become obvious.
Bottom Line
unacked is usually not a broker mystery. It is a sign that the broker handed work out and the acknowledgement path is not completing fast enough.
In practice, start with ack mode, prefetch, and handler latency. Once those are clear, most unacked incidents stop looking vague very quickly.
FAQ
Q. Does unacked mean messages are lost?
No. It means they were delivered but not yet acknowledged.
Q. Is high unacked always bad?
Not always. It can be normal with in-flight work, but it becomes a problem when it grows without draining.
Q. What is the fastest first step?
Check acknowledgement mode and prefetch before restarting consumers.
Q. When should I suspect consumer code first?
As soon as the broker clearly delivered messages but they never leave the unacked state.
Read Next
- If the whole queue is growing rather than only `unacked`, continue with RabbitMQ Queue Keeps Growing.
- If prefetch is the next setting you need to verify, continue with RabbitMQ Prefetch Guide.
- If publishers also slow down under the same pressure, continue with RabbitMQ Connection Blocked.
Related Posts
- RabbitMQ Queue Keeps Growing
- RabbitMQ Prefetch Guide
- RabbitMQ Connection Blocked
- RabbitMQ Consumers Not Receiving Messages
Sources:
- https://www.rabbitmq.com/docs/queues
- https://www.rabbitmq.com/docs/confirms
- https://www.rabbitmq.com/docs/4.0/consumer-prefetch