Docker No Space Left on Device: What to Check First

When Docker reports no space left on device, the immediate failure might happen during a build, pull, write, or container start, but the real problem is usually accumulated storage pressure across images, cache, logs, or volumes.

The short version: do not start by pruning everything. First identify which Docker storage area is growing, because deleting the wrong class of data can create new outages or wipe useful state.


Start by separating host disk pressure from container-local disk pressure

Two different problems are often mixed together:

  • the Docker host is running out of disk across images, cache, logs, or volumes
  • the application inside a container is filling its writable layer or mounted path

The first one needs host-level Docker cleanup. The second one often needs application or mount-path changes. If you do not separate them, cleanup becomes guesswork.
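A quick way to tell the two apart is to compare the host view and the in-container view, assuming a hypothetical container named `app` and the default data root of `/var/lib/docker` (both may differ on your host):

```shell
# Host view: is the filesystem holding Docker's data root actually full?
df -h /var/lib/docker   # default data root; adjust if you moved it

# Container view: is the writable layer or a mounted path full from inside?
# "app" is a placeholder container name; substitute your own.
docker exec app df -h /
```

If the host filesystem is nearly full but the container reports plenty of free space on its mounts, you are looking at host-level accumulation, not an application write problem.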

What usually consumes the space

1. Old images and dangling layers accumulate

Frequent rebuilds and pulls leave old tags and unused layers behind. Over time they quietly consume a large amount of disk.
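Listing dangling layers is a safe, read-only way to see how much of this a host is carrying before removing anything:

```shell
# Show image layers that no tag references any more
docker images --filter dangling=true

# Remove only dangling layers; tagged images are left alone
docker image prune -f
```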

2. Build cache keeps growing

BuildKit caches can become surprisingly large on active CI hosts and developer machines.

This is especially visible on hosts that build often but rarely prune.
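If the buildx plugin is available, it can break the cache down in more detail than the coarse summary:

```shell
# Per-entry BuildKit cache usage, including reclaimable space
# (requires the buildx plugin)
docker buildx du

# Coarser view: look at the "Build Cache" row
docker system df
```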

3. Container logs grow without rotation

Verbose applications can fill JSON log files on the host much faster than teams expect.

This is one of the most common hidden causes because the application itself may still appear healthy.
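With the default `json-file` driver, each container's log lives on the host under the Docker data root, so a quick size scan (run as root, and assuming the default `/var/lib/docker` location) often finds the culprit:

```shell
# Logs sit at /var/lib/docker/containers/<id>/<id>-json.log;
# this lists the five largest per-container directories
du -ah /var/lib/docker/containers/ 2>/dev/null | sort -h | tail -n 5
```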

4. Volumes retain historical data

Named volumes may keep databases, uploads, temporary artifacts, or old working state long after the original container is gone.
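The verbose storage summary breaks sizes out per volume, which makes the large ones easy to spot:

```shell
# Verbose breakdown: per-image, per-container, and per-volume sizes
docker system df -v
```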

5. The image itself is unnecessarily heavy

If the runtime layers are oversized, every pull and every new host inherits the storage burden.

That is not always the immediate cause of disk exhaustion, but it makes the overall situation worse.

A practical debugging order

1. Start with Docker storage summary

The fastest first look is:

docker system df
docker ps -a
docker volume ls

This gives you a rough split across images, containers, local volumes, and build cache. You do not need perfect accounting yet. You only need to know which storage class is clearly growing.

2. Check whether stopped containers or old images dominate

If the host has many unused containers or old image tags, cleanup can help quickly. But make sure they are really disposable before removing them.

This matters more on shared hosts, CI runners, and long-lived developer machines.
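Two read-only listings cover most of this step; the size sort relies on GNU-style human-readable sorting:

```shell
# Stopped containers that may be disposable (verify before removing)
docker ps -a --filter status=exited

# Largest images first; GNU sort -h understands the K/M/G size suffixes
docker images --format '{{.Size}}\t{{.Repository}}:{{.Tag}}' | sort -hr | head
```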

3. Inspect large logs and write-heavy containers

If images do not explain the growth, container logs often do. A single noisy service can create huge local files while the application team keeps looking elsewhere.
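To confirm a suspect, you can locate its log file directly; `app` here is a placeholder container name:

```shell
# Find where the json-file driver is writing this container's log
docker inspect --format '{{.LogPath}}' app

# Truncating in place frees space immediately without breaking the
# open file handle the daemon is still writing to (run as root)
truncate -s 0 "$(docker inspect --format '{{.LogPath}}' app)"
```

Truncation is an emergency measure; rotation settings are the durable fix.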

4. Review volumes before touching them

Volumes are where accidental data loss risk increases. If a database, upload store, or stateful workload uses a named volume, do not remove it until you know what lives there and whether it is recoverable.

5. Only prune the storage class that is responsible

Once you know whether the problem is images, build cache, logs, or volumes, clean that area deliberately instead of using a broad command out of panic.
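Each storage class has its own targeted prune command, which keeps the cleanup scoped to what you actually diagnosed:

```shell
# Images: only dangling layers, or only unused images past an age cutoff
docker image prune -f
docker image prune -af --filter "until=168h"   # unused images older than 7 days

# Build cache only
docker builder prune -f

# Stopped containers only
docker container prune -f

# Volumes are the riskiest class; prune only after auditing their contents
# docker volume prune -f
```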

If the image itself is oversized, continue with Docker Image Too Large.

What to change after you find the growth source

If unused images and layers are the main problem

Clean old tags and dangling layers on a schedule, especially on CI runners and shared build hosts.
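A minimal scheduled-cleanup sketch, meant to run from cron or a systemd timer; the 7-day window is an arbitrary example, not a recommendation:

```shell
#!/bin/sh
# Nightly cleanup sketch for CI runners and shared build hosts.
# Tune the retention window to your rebuild cadence.
docker image prune -af --filter "until=168h"
docker builder prune -f --filter "until=168h"
```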

If build cache is the main problem

Review cache retention habits and build frequency. On active builder hosts, cache growth is operationally normal unless you manage it intentionally.
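Rather than wiping the cache entirely, it can be capped at a size budget; note that newer buildx releases rename this flag, so check your version:

```shell
# Keep the BuildKit cache under a fixed budget instead of deleting it all
# (newer buildx versions call this --max-used-space)
docker builder prune -f --keep-storage 10GB
```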

If logs are filling the disk

Reduce unnecessary log volume and make sure local log rotation or driver settings fit the workload.
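One common approach is setting rotation defaults for the `json-file` driver in `/etc/docker/daemon.json`; this requires a daemon restart and only applies to containers created afterwards:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```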

If volumes are the main driver

Audit which services still need the data, move large state to better-managed storage when appropriate, and avoid leaving old named volumes behind indefinitely.

If the application fills writable paths rapidly

Look at temp-file behavior, export jobs, local uploads, and background tasks that keep writing to the container filesystem or a mounted directory.
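The writable-layer size per container is visible directly from the CLI, and you can drill into a suspect from there; `app` is a placeholder container name:

```shell
# The SIZE column reports each container's writable-layer usage
docker ps -a --size

# List the largest paths inside a suspicious container
docker exec app du -xh / 2>/dev/null | sort -h | tail -n 10
```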

A safe incident checklist

When Docker hits no space left on device, use this order:

  1. confirm whether the host or the container-local filesystem is actually full
  2. run docker system df
  3. inspect old images, stopped containers, and cache growth
  4. check container logs and named volumes
  5. prune only the category you understand
  6. fix the recurring growth source so the incident does not repeat

FAQ

Q. Is docker system prune -a always the right answer?

No. It can remove useful cache or artifacts, and on the wrong host it may be too destructive.

Q. What is the fastest first step?

Run docker system df before cleanup so you know which storage class is actually responsible.

Q. Why does this keep happening on CI hosts?

Because frequent builds, repeated pulls, and persistent build cache keep accumulating unless cleanup is planned into the runner's lifecycle.

Q. If I free disk and the container starts again, am I done?

Not really. You still need to identify what kept growing or the incident will return.
