GCP Cloud Run Revision Not Ready: What to Check First


When a Cloud Run revision stays not ready, the real issue is usually not that Cloud Run itself is down. More often the container failed to start correctly, the app never listened where Cloud Run expected, or the revision could not become healthy enough to receive traffic.

The short version: inspect revision logs and startup behavior first, confirm the container binds the expected port, then compare the failing revision with the last healthy one before changing scaling or cold-start settings.


Start by separating “slow to warm up” from “never becomes ready”

Teams often mix together two different incidents:

  • the service is healthy eventually, but the first request is slow
  • the revision never becomes healthy enough to serve traffic

Only the second one is a true “revision not ready” problem. If the service eventually comes up and just feels slow, the GCP Cloud Run Cold Start guide is the better path.

What usually keeps a revision from becoming ready

1. The container is not listening on the expected port

Cloud Run expects the application to bind the port it provides. If the app listens on a different port, or binds only to localhost instead of all interfaces, readiness never completes.
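As a concrete illustration, here is a minimal Python sketch of the binding Cloud Run expects: read the port from the `PORT` environment variable and listen on all interfaces. The handler and the 8080 default are illustrative, not tied to any particular app.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Cloud Run injects the serving port via the PORT env var (8080 by default).
port = int(os.environ.get("PORT", "8080"))

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

# Binding "" (equivalent to 0.0.0.0) listens on all interfaces. Binding
# 127.0.0.1 instead would work locally but never be reachable from the
# platform's proxy, so the revision would never become ready.
server = HTTPServer(("", port), Handler)
# server.serve_forever()  # left commented so the sketch stays non-blocking
```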

2. Startup fails before the service can become healthy

Bad config, missing environment variables, missing secrets, and dependency failures can stop the process before traffic is ever allowed.
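One way to make this class of failure visible is to validate required configuration at startup and fail with one clear log line instead of crashing later. A sketch; the variable names below are hypothetical placeholders:

```python
import os
import sys

# Hypothetical names; substitute whatever this service actually requires.
REQUIRED_VARS = ["DATABASE_URL", "API_KEY"]

def missing_config(env=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

def check_or_die():
    missing = missing_config()
    if missing:
        # One clear line in the revision logs beats a revision that hangs.
        print(f"startup aborted, missing config: {', '.join(missing)}",
              file=sys.stderr)
        sys.exit(1)
```

Called at the top of startup, this turns a vague “never became ready” into a specific, searchable log entry.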

3. The main process exits too early

If the container starts and then terminates instead of serving requests, the revision will never stabilize.

This is conceptually similar to a container restart loop, even if the platform surface looks different.
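A hypothetical Python sketch of how an early exit happens in practice: if the server loop runs on a daemon thread and the main thread returns, the process (PID 1 in the container) exits before serving anything.

```python
import threading
import time

def server_loop():
    # Stand-in for a real serve_forever() call.
    time.sleep(0.05)

# Anti-pattern: with daemon=True, nothing keeps the process alive once the
# main thread falls off the end, so the container exits almost immediately.
worker = threading.Thread(target=server_loop, daemon=True)
worker.start()

# Fix: block the main thread on the server loop so the process stays up.
worker.join()
```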

4. Initialization is too slow or blocked

Heavy startup work, remote configuration fetches, long database handshakes, or waiting on another service can delay readiness long enough to look like a platform problem.
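One common mitigation is to defer heavy work out of the startup path so the container can begin listening immediately. A sketch, with `make_client` standing in for a real database or API client constructor:

```python
import functools

def make_client():
    # Stand-in for an expensive handshake (database, remote config, etc.).
    return {"connected": True}

@functools.lru_cache(maxsize=None)
def get_client():
    """Create the client on first use instead of at container startup."""
    return make_client()
```

The first request that calls `get_client()` pays the initialization cost; startup and readiness do not.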

5. Resources or assumptions changed between revisions

Memory settings, startup command changes, secret mounts, environment drift, and dependency version changes can break a new revision even when the previous one was healthy.

A practical debugging order

1. Read revision logs before changing anything

Start with:

gcloud run revisions describe <revision> --region <region>
gcloud logging read 'resource.type="cloud_run_revision" AND resource.labels.revision_name="<revision>"' --limit 50
gcloud run services describe <service> --region <region>

These usually tell you whether the revision failed because of startup, port binding, permissions, or configuration drift.

2. Confirm the container binds the expected port

This is still one of the highest-signal checks. The app must listen where Cloud Run expects, and it must do so in a way the platform can route to.

If this assumption is wrong, the revision will never become ready no matter how many redeploys you try.

3. Compare the failing revision with the last healthy revision

Look for differences in:

  • command or entrypoint
  • env vars and secrets
  • memory and CPU settings
  • dependency or image changes
  • startup behavior and logs

This step is often faster than reading the entire deployment setup from scratch.
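A small helper for that comparison, assuming you have exported each revision's description to a file with something like `gcloud run revisions describe <revision> --format=yaml`:

```python
import difflib
from pathlib import Path

def diff_revisions(old_path, new_path):
    """Unified line diff of two exported revision descriptions."""
    old = Path(old_path).read_text().splitlines()
    new = Path(new_path).read_text().splitlines()
    return list(difflib.unified_diff(
        old, new,
        fromfile=str(old_path), tofile=str(new_path),
        lineterm="",
    ))
```

Env var, memory, and command differences show up as ordinary `-`/`+` lines, which is usually all you need to spot the drift.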

4. Decide whether the app exits, hangs, or waits forever

These three paths look similar from the top-level error but imply different fixes:

  • exits early because startup failed
  • stays alive but never reaches a ready state
  • waits on a missing dependency or remote call

You want to know which branch you are in before touching scaling knobs.
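To make the third branch observable, bound dependency waits with a deadline instead of blocking forever. A sketch using a plain TCP reachability check; host and port are placeholders for whatever the service actually depends on:

```python
import socket

def wait_for_dependency(host, port, timeout=5.0):
    """Probe a dependency with a hard deadline.

    Returning a clear True/False (instead of blocking indefinitely) makes
    the "waits on a missing dependency" branch visible in startup logs.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Log the result before proceeding and a hang turns into a one-line diagnosis.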

5. Only then consider startup latency and resource tuning

If config and port binding look correct, then check whether startup work is simply too heavy or dependency initialization is too slow.

If the revision is healthy eventually but slow, switch to the cold-start guide instead of treating it as a readiness failure.

What to change after you find the pattern

If the app binds the wrong port or interface

Fix the application startup or container configuration so it listens on the expected port and address.

If config drift broke startup

Correct environment variables, secrets, mounted resources, or command changes by comparing with the last healthy revision.

If the process exits during startup

Treat it like a startup failure first, not a Cloud Run scaling problem. The root cause is usually in the container behavior.

Compare with Docker Container Keeps Restarting if the same image also fails locally or in other runtimes.

If startup work is simply too heavy

Reduce initialization cost and review first-request paths. If the service does become healthy eventually, GCP Cloud Run Cold Start is often the better guide.

If startup fails while calling another GCP service

Permission and service-account issues are common at this stage, so compare with the GCP Permission Denied guide.

A useful incident checklist

When a Cloud Run revision stays not ready, use this order:

  1. read revision logs and conditions
  2. confirm the app binds the expected port
  3. compare the failing revision with the last healthy one
  4. decide whether the app exits, hangs, or waits on dependencies
  5. only then look at startup latency and resource tuning

FAQ

Q. Is a not-ready revision always a port problem?

No. Port mismatch is common, but bad config, missing permissions, early process exit, and dependency waits can produce the same symptom.

Q. What is the fastest first step?

Read the failing revision’s startup logs and confirm port binding behavior.

Q. Why did only the new revision fail when the previous one worked?

Because even small differences in config, image contents, env vars, or startup logic can break readiness.

Q. How do I know this is cold start instead?

If the revision eventually becomes healthy and only first requests are slow, that is usually a cold-start or startup-cost issue rather than a readiness failure.
