Python Database Connections Not Closed: What to Check First


When Python database connections stay open too long, the visible symptom may be pool exhaustion, slow requests, or hanging jobs, but the real issue is usually connection lifetime: cleanup flow, long transactions, or one code path that never releases resources.

That is why “connections are not closing” is often an overloaded complaint. Some connections are genuinely leaked. Others are not leaked, but they are held so long that the pool behaves as if it were leaking.

This guide focuses on the practical path:

  • how to separate leaked connections from long-held connections
  • what to inspect first in cleanup flow, session scope, and pool pressure
  • how request paths and background jobs quietly keep DB resources busy

The short version: first determine whether connections are truly leaked or just held too long, then inspect cleanup and exception paths, and finally compare pool pressure with request/job concurrency before changing pool settings.

If you want the broader Python routing view first, go to the Python Troubleshooting Guide.


Start with connection lifetime

The fastest useful split is simple: are connections leaked, or are they just held much longer than expected?

That difference changes the whole fix path.

It separates incidents like:

  • connections are borrowed and never returned
  • transactions stay open across too much application work
  • background jobs keep sessions alive quietly

Without that first split, teams often blame pool size when the real issue is connection ownership and cleanup.


Leaked versus long-held is the key branch

These two patterns can look almost identical externally:

  • active connections remain high
  • waiting callers increase
  • requests slow down
  • jobs start hanging behind the pool

But they are not the same operational problem.

The useful question is not just “is the pool exhausted?” It is “why are connections staying unavailable longer than the team expects?”


Common causes to check

1. Cleanup paths are incomplete

Connections or sessions are not closed on every code path.

This often happens when:

  • success paths clean up, but exception paths do not
  • one return path skips cleanup
  • helper functions hide ownership and the caller assumes cleanup happened elsewhere

In that case the pool drains slowly because borrowed resources never come back promptly.
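As a minimal sketch of that failure mode (using the stdlib `sqlite3` module as a stand-in for any driver; the `users` table and both function names are hypothetical), compare a cleanup path that only runs on success with one that runs on every exit:

```python
import sqlite3

def fragile_lookup(db_path, user_id):
    # Leaky pattern: if execute() raises, close() is never reached,
    # and the connection stays open until garbage collection gets to it.
    conn = sqlite3.connect(db_path)
    row = conn.execute(
        "SELECT name FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    conn.close()
    return row

def safe_lookup(db_path, user_id):
    # try/finally (or a `with` block) releases on every path:
    # success, early return, or exception.
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
    finally:
        conn.close()
```

The two functions behave identically when queries succeed, which is exactly why this class of leak survives code review: only the exception path differs.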

2. Transactions stay open too long

The connection is not leaked, but it stays unavailable longer than intended.

Typical examples:

  • opening a transaction too early
  • doing network or CPU work while the transaction is still open
  • waiting on downstream code before commit or rollback

These paths often feel like “the DB is slow” even though the bigger issue is how long the application keeps the connection.
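A sketch of the difference (again with stdlib `sqlite3`; the `orders` table, `enrich` helper, and order shape are hypothetical stand-ins for real application work):

```python
import sqlite3

def enrich(order):
    # Stand-in for slow work: a network call, heavy computation, etc.
    return (order["id"], order["qty"] * order["price"])

def save_order_long_txn(conn, order):
    # Anti-pattern: the transaction opens first, then the slow work
    # runs while the connection sits pinned inside it.
    with conn:  # sqlite3: BEGIN ... COMMIT around the block
        row = enrich(order)
        conn.execute("INSERT INTO orders VALUES (?, ?)", row)

def save_order_short_txn(conn, order):
    # Do the slow work first; hold the transaction only for the write.
    row = enrich(order)
    with conn:
        conn.execute("INSERT INTO orders VALUES (?, ?)", row)
```

Both versions produce the same rows; they differ only in how long the connection is unavailable to everyone else, which is the metric that actually drives pool exhaustion here.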

3. Background jobs or workers hold connections

Long-running tasks, workers, or scripts can quietly keep the pool busy.

This is common when:

  • a worker reuses a session incorrectly
  • one job opens a connection and keeps it through unrelated work
  • task retries and reconnect behavior create hidden pool pressure

That is why pool exhaustion often appears in request traffic even when the actual connection pressure lives in background execution.
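One way to keep workers honest is to scope the connection to the individual task rather than the worker's lifetime. A sketch, assuming stdlib `sqlite3` and a hypothetical `tasks` table:

```python
import sqlite3

def process_task(db_path, task_id):
    # Borrow the connection only for the write itself.
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # commit on success, rollback on error
            conn.execute(
                "UPDATE tasks SET done = 1 WHERE id = ?", (task_id,)
            )
    finally:
        conn.close()

def worker_loop(db_path, task_ids):
    for task_id in task_ids:
        # Unrelated work (parsing, HTTP calls, retries) happens here
        # with no database connection held.
        process_task(db_path, task_id)
```

With a real pool the shape is the same: check out per task, return immediately, and the worker stops contributing invisible pool pressure between tasks.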


A practical debugging order

When connections stay open too long, this order usually narrows the issue fastest:

  1. compare active, idle, and waiting usage
  2. identify which paths hold connections the longest
  3. inspect cleanup and exception paths
  4. compare pool pressure with job and request load
  5. decide whether the issue is leak, long hold time, or concurrency growth

This order matters because it prevents two common mistakes:

  • increasing pool size before understanding connection lifetime
  • blaming slow queries before checking whether application code is holding the connection too long
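Step 1 of that order can often be read straight off the pool. A sketch assuming SQLAlchemy's `QueuePool` (the URL, sizes, and `pool_snapshot` helper are illustrative; other pools and drivers expose similar counters under different names):

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

# A local SQLite file stands in for a real database URL.
engine = create_engine(
    "sqlite:///pool_demo.db",
    poolclass=QueuePool, pool_size=5, max_overflow=2,
)

def pool_snapshot(engine):
    # checked_out: borrowed and not yet returned. A value that climbs
    # and never falls points at a leak; one that falls, but slowly,
    # points at long hold times rather than a leak.
    pool = engine.pool
    return {
        "checked_out": pool.checkedout(),
        "checked_in": pool.checkedin(),
        "overflow": pool.overflow(),
    }
```

Logging this snapshot periodically, tagged with the request or job path, is usually enough to answer the leak-versus-long-hold question before anyone touches pool settings.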

If the same incident also includes task backlog, compare with Python Celery Tasks Stuck.


A tiny example of the real problem

conn = engine.connect()
result = conn.execute(query)
# conn.close() is never called, so the connection is never returned

If connections are borrowed but not returned, the pool drains slowly until requests start waiting behind it.

The important point is not only the missing close itself. The more useful question is whether ownership is obvious enough that every path knows who is responsible for releasing the connection.
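One common fix, sketched with SQLAlchemy (which the `engine.connect()` style above suggests; the in-memory URL and `fetch_one` name are illustrative): let a `with` block own the release, so ownership is unambiguous on every path.

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")  # in-memory stand-in for a real DB URL

def fetch_one():
    # The `with` block returns the connection to the pool on every
    # exit path: success, early return, or exception.
    with engine.connect() as conn:
        return conn.execute(text("SELECT 1")).scalar()
```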


A good question for each DB path

For every query or transaction path, ask:

  • who acquires the connection or session
  • what work happens while it is held
  • what guarantees release on success
  • what guarantees release on failure

This framing helps because DB connection incidents are usually lifecycle bugs disguised as capacity problems.
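Those four questions can be answered once, centrally, with a small helper. A sketch using `contextlib` and stdlib `sqlite3` (the helper name is hypothetical; with an ORM or pool the same shape applies to sessions):

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def borrowed_connection(db_path):
    # Single, explicit owner: this helper acquires, and it alone
    # guarantees release on success and on failure alike.
    conn = sqlite3.connect(db_path)
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise
    finally:
        conn.close()
```

Call sites then answer "who releases?" by construction: `with borrowed_connection(path) as conn: ...` and nothing outside the block ever touches the connection.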


FAQ

Q. Does pool exhaustion always mean the DB itself is slow?

No. It may also mean your application is holding connections too long, skipping cleanup on some paths, or pushing more concurrent work through the same pool than before.

Q. What should I inspect first in production?

Compare active, idle, and waiting usage first, then identify which request or job path holds connections longest.

Q. Can background jobs cause this even if web traffic looks normal?

Yes. Worker processes and task runners can quietly consume or hold DB resources long enough to make the whole system feel pool-starved.

