Query Planning and EXPLAIN
How the database decides to run your query, and how to read the plan.
When you submit a SQL query, the database does not immediately start scanning tables. It first runs the query through a component called the query planner (or query optimizer). The planner considers multiple strategies for answering the query, estimates the cost of each, and picks the one it believes will be fastest. Then it executes that plan.
Understanding how to read the plan is one of the most practical skills in SQL. When a query is slow, the plan tells you exactly why.
EXPLAIN
EXPLAIN shows the plan the database intends to use, without actually executing the query:
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
This is safe to run at any time since no data is read or changed.
EXPLAIN ANALYZE
EXPLAIN ANALYZE executes the query and shows both the planner's estimates and the actual measurements side by side:
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
This is the version you will use most for diagnosing slow queries. The comparison between estimated and actual values is where the insight lives.
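One caution: EXPLAIN ANALYZE really executes the statement, so running it on an INSERT, UPDATE, or DELETE will modify data. Wrapping it in a transaction and rolling back lets you see the plan without keeping the changes (table name from the earlier examples):

```sql
BEGIN;
EXPLAIN ANALYZE DELETE FROM orders WHERE customer_id = 42;
ROLLBACK;  -- the plan is printed, but the deletions are undone
```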
For more detail, add BUFFERS:
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42;
BUFFERS adds counts of data pages found in PostgreSQL's buffer cache versus read from the operating system, which helps you tell a planning problem (such as a missing index) apart from a raw disk I/O problem.
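With BUFFERS enabled, each node gains a Buffers line. A fragment of what that looks like (the figures here are illustrative, not from a real run):

```
Seq Scan on orders  (cost=0.00..450.00 rows=12 width=108)
  Buffers: shared hit=12 read=340
```

Here hit counts pages found already in PostgreSQL's buffer cache and read counts pages fetched from the operating system (and possibly disk). A high read count on a query you run repeatedly points at I/O rather than planning.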
Reading the output
The output is a tree. Each node represents one step in the plan, and child nodes feed their output upward to parent nodes. The most deeply indented nodes are the innermost (earliest) steps.
Seq Scan on orders (cost=0.00..450.00 rows=12 width=108)
(actual time=0.02..3.41 rows=15 loops=1)
Filter: (customer_id = 42)
Rows Removed by Filter: 4985
Breaking down the key pieces:
| Term | Meaning |
|---|---|
| cost=0.00..450.00 | Estimated work: startup cost .. total cost, in the planner's abstract units (one sequential page read = 1.0) |
| rows=12 | Planner's estimate of rows this step produces |
| actual time=0.02..3.41 | Real time in milliseconds: first row .. all rows |
| actual rows=15 | Rows this step actually produced |
| loops=1 | How many times this node ran (important for joins) |
The most useful signal is the gap between rows=12 (estimated) and rows=15 (actual). Here the two are close, which is healthy. When they differ by an order of magnitude or more, the planner's statistics are stale or inaccurate, which often leads to a bad plan choice.
Common node types
| Node | What it means |
|---|---|
| Seq Scan | Reading the entire table row by row |
| Index Scan | Using a B-tree index to find rows |
| Bitmap Heap Scan | Using an index to collect row locations, then fetching in batches |
| Nested Loop | For each outer row, scan the inner table (fast with indexes; slow without) |
| Hash Join | Build a hash table from one side, probe with the other (good for large tables) |
| Merge Join | Merge two pre-sorted inputs (fast when both are sorted already) |
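The loops figure matters most on joins. A sketch of a Nested Loop plan (the numbers are invented for illustration):

```
Nested Loop  (cost=0.29..1250.00 rows=500 width=216)
             (actual time=0.05..42.10 rows=480 loops=1)
  -> Seq Scan on customers  (actual rows=100 loops=1)
  -> Index Scan using orders_customer_id_idx on orders
             (actual time=0.02..0.35 rows=5 loops=100)
```

The inner Index Scan ran once per outer row (loops=100). Note that actual time and rows on a looped node are per-iteration averages, so the node's total contribution is those figures multiplied by the loop count.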
A Seq Scan on a large table is the most common culprit in slow queries. It means the planner could not use an index, either because one does not exist or because the query is written in a way that prevents its use.
Diagnosing common problems
Seq scan on a large table where you expected an index: check that the index exists, and check that the query filters on the column directly rather than on a function of it (a predicate like WHERE LOWER(email) = 'x' cannot use a plain index on email).
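One fix for the function-on-a-column case is an expression index, which indexes the computed value itself (table and column names assumed for illustration):

```sql
-- A plain index on email cannot serve WHERE LOWER(email) = 'x',
-- but an index on the expression LOWER(email) can:
CREATE INDEX users_email_lower_idx ON users (LOWER(email));
```

The planner uses an expression index only when the query's expression matches the indexed expression exactly.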
Large row estimate discrepancy: Run ANALYZE tablename to refresh the planner's statistics. On rapidly-changing tables, autovacuum may not be keeping up.
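Refreshing the statistics and checking when they were last gathered might look like this (table name assumed):

```sql
ANALYZE orders;  -- recompute planner statistics for this table

-- When were statistics last gathered, manually or by autovacuum?
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'orders';
```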
High loop count on a nested loop: If a join node shows loops=50000, the inner scan is running 50,000 times. If that inner scan is a sequential scan, you have a classic N+1 at the query level. An index on the join column fixes this.
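The usual fix for that pattern is a single index on the inner table's join column (names assumed for illustration):

```sql
-- Before: the inner side of the join scans orders once per outer row.
-- After this index, each loop becomes a cheap index lookup instead:
CREATE INDEX orders_customer_id_idx ON orders (customer_id);
```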
ANALYZE and VACUUM
PostgreSQL collects statistics about table contents to power the planner's estimates. ANALYZE refreshes them. VACUUM reclaims space from deleted rows and keeps the statistics accurate. PostgreSQL's autovacuum process runs both in the background automatically, but on tables that change rapidly you may need to trigger them manually.
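Manual maintenance, and a more aggressive autovacuum threshold for a hot table, might look like this (the table name and threshold are illustrative):

```sql
VACUUM ANALYZE orders;  -- reclaim space and refresh statistics in one pass

-- By default autovacuum triggers after roughly 20% of rows change;
-- lower the fraction for a rapidly-changing table:
ALTER TABLE orders SET (autovacuum_vacuum_scale_factor = 0.02);
```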
A practical habit
Before guessing at a slow query, run EXPLAIN ANALYZE. The plan tells you exactly what the database is doing: which operation is the bottleneck, how many rows are involved at each step, and whether the planner's estimates match reality. Diagnosing performance from the plan is faster and more reliable than guessing.