When Python logs do not show up, the cause is usually not mysterious. Most incidents come from a small set of issues: the effective log level is higher than expected, no handler writes output anywhere visible, configuration runs too late, or propagation behaves differently than the team assumed.
The short version: check the effective logger level, check the attached handlers, and confirm where logging is configured in the real app startup path before rewriting the config again.
Start by separating “the app is not logging” from “the logs are going somewhere else”
These are different problems.
Sometimes the code really is not emitting the message because the level filters it out. Other times the log record exists, but the handler chain sends it to a file, process manager, worker stream, or platform sink that the team is not inspecting.
That distinction matters because it tells you which kind of fix you need: a logger configuration change, or a change in where you look for the output.
What usually makes Python logs disappear
1. The effective level is too high
This is still the most common case. logger.info() and logger.debug() are called, but either the logger or one of its handlers only allows WARNING or higher.
2. No useful handler is attached
The logger exists, but nothing writes to stdout, stderr, a file, or the platform’s expected output path.
3. Logging is configured too late
basicConfig() or a custom logging setup may run after frameworks, worker bootstraps, or imported modules already created their own handler chain.
4. Another framework or runtime overrides your configuration
Gunicorn, Celery, uvicorn, Django, Flask extensions, and platform entrypoints often install their own logging behavior. Your local script may work while production behaves differently.
5. Propagation is misunderstood
A child logger may have propagation disabled and stop at its own handlers, or its records may bubble up to the root logger's handlers in ways the team did not expect.
A practical debugging order
1. Print the effective level and inspect handlers
Before editing config, confirm what is live right now:
- the logger’s effective level
- the root logger level
- attached handlers
- handler levels
This is the fastest way to see whether the record is blocked or simply routed elsewhere.
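A minimal sketch of that inspection step, using only the standard library (the logger name "myapp.module" is a placeholder for whatever module is failing to log):

```python
import logging

def dump_logger_state(name: str = "") -> None:
    """Print a logger's effective level, propagation flag, and handlers."""
    logger = logging.getLogger(name)
    print("logger:", logger.name)
    print("  effective level:",
          logging.getLevelName(logger.getEffectiveLevel()))
    print("  propagate:", logger.propagate)
    if not logger.handlers:
        print("  handlers: (none attached)")
    for handler in logger.handlers:
        print("  handler:", type(handler).__name__,
              "level:", logging.getLevelName(handler.level))

dump_logger_state("myapp.module")  # the logger you expect to emit
dump_logger_state()                # the root logger
```

In a fresh, unconfigured process this reports an effective level of WARNING and no handlers, which is exactly why bare logger.info() calls appear to do nothing.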
2. Confirm when configuration runs in the app lifecycle
Ask whether logging setup happens:
- before modules import application code
- before the worker or framework boots
- inside a code path that is not always reached
A correct configuration block can still fail operationally if it runs too late.
3. Check whether the runtime replaced the handler chain
If logs appear in a small local script but vanish in the real service, compare the runtime:
- local shell
- test runner
- Gunicorn or Celery worker
- container entrypoint
- platform-managed execution environment
This step is where many “works locally but not in production” logging incidents become understandable.
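One way to compare runtimes is to paste the same short snippet into both environments and diff the output. This sketch walks logging's internal logger registry (logging.root.manager.loggerDict, an undocumented but stable attribute) to show every logger that has handlers attached:

```python
import logging

# Paste into both the local script and the real worker, then compare.
# Handlers installed by Gunicorn, Celery, uvicorn, etc. show up here.
for name in ["", *sorted(logging.root.manager.loggerDict)]:
    logger = logging.getLogger(name)
    if logger.handlers:
        print(name or "root", "->",
              [type(h).__name__ for h in logger.handlers])
```

If the production run lists handlers your local run does not, the runtime replaced or extended the handler chain, and that is where to focus.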
4. Verify propagation and root behavior
The child logger may be configured correctly while the root logger or parent chain changes the final result. This is especially common in larger apps with multiple modules.
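Propagation is easiest to see with a small demonstration (the logger name "myapp.worker" is a placeholder):

```python
import logging
import sys

logging.basicConfig(level=logging.INFO, stream=sys.stdout)

child = logging.getLogger("myapp.worker")
child.setLevel(logging.INFO)

# With propagate=True (the default), records bubble up to the
# root logger's handlers even though the child has none of its own.
child.info("visible: handled by the root handler")

# Cutting propagation with no handler attached means the record
# passes the level check but has nowhere to go.
child.propagate = False
child.info("silently dropped")
```

The second call is exactly the confusing case: the code runs, the level allows the record, and still nothing appears.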
5. Test one minimal log call in the real runtime path
Do not only test in an isolated REPL or one-off script. Emit one simple log in the actual process path that is failing.
That tells you whether the issue is configuration, runtime wiring, or simply “you were looking in the wrong place.”
A quick example to ground the problem
```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
logger.info("hello")
```
Even this simple example can appear to “do nothing” if logging was configured earlier, if handlers were replaced, or if the platform collects logs somewhere other than the console you are watching.
What to change after you find the pattern
If level filtering is the problem
Lower the effective logger or handler level only where needed instead of broadly enabling noisy output everywhere.
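A targeted version of that fix might look like this, assuming a hypothetical "myapp.payments" subsystem is the one under investigation:

```python
import logging

# Keep the application quiet by default.
logging.basicConfig(level=logging.WARNING)

# Enable verbose output only for the subsystem being debugged.
logging.getLogger("myapp.payments").setLevel(logging.DEBUG)

# Third-party noise stays suppressed at the same time.
logging.getLogger("urllib3").setLevel(logging.WARNING)
```

Because handler levels, not ancestor logger levels, gate propagated records, the DEBUG records from myapp.payments still reach the root handler while the rest of the app stays at WARNING.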
If no visible handler exists
Attach the correct handler for the runtime you are using, especially if production expects stdout or a platform-managed stream.
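For container and platform runtimes that only capture stdout/stderr, an explicit stream handler is usually the right shape. A sketch:

```python
import logging
import sys

# Docker, Kubernetes, and most PaaS log collectors read stdout/stderr,
# so write there instead of to a local file.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s %(name)s %(message)s"))

root = logging.getLogger()
root.setLevel(logging.INFO)
root.addHandler(handler)

logging.getLogger(__name__).info("now visible on stdout")
```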
If setup runs too late
Move logging initialization earlier in the startup path so imported modules and worker processes inherit the intended configuration.
If the runtime overrides your config
Adapt to the framework or worker’s logging model instead of fighting it with more ad hoc config blocks.
A useful incident checklist
When Python logs do not show up, use this order:
- inspect the effective level
- inspect attached handlers and handler levels
- confirm when logging config runs in startup
- compare the real runtime with the local test path
- verify propagation and root logger behavior
FAQ
Q. Why does basicConfig() seem to do nothing?
Because basicConfig() is a no-op when the root logger already has handlers, and something earlier in the process often configured logging first. On Python 3.8+, passing force=True overrides the existing configuration.
Q. Why do logs appear locally but not in production?
Because the runtime, worker model, or platform handler chain is often different.
Q. What is the fastest first step?
Print the effective logger level and the current handlers before changing config.
Q. If the logger emits in tests, is production config definitely correct?
No. Test runners and production workers often install different logging behavior.
Read Next
- If missing logs are slowing down memory investigations, continue with Python Memory Usage High.
- If you want the broader Python routing view first, browse the Python Troubleshooting Guide.
- If queue pressure is the bigger operational issue, compare with Python ThreadPoolExecutor Queue Growing.
Related Posts
- Python Memory Usage High
- Python Troubleshooting Guide
- Python ThreadPoolExecutor Queue Growing
- Python Celery Worker Concurrency Too Low