PostgreSQL Performance Tuning: What Actually Moves the Needle

2026-03-18 · Nico Brandt

You read a PostgreSQL performance tuning guide. You set shared_buffers to 25% of RAM. You restarted Postgres. You ran the slow query again.

Same speed.

Most tuning guides hand you 50 configuration knobs and hope you’ll turn the right ones. That’s backwards. For most applications, the bottleneck isn’t your config — it’s three bad queries and a missing index. This article covers the three or four changes that give you 80% of the performance gain. Everything else is fine-tuning.

Stop Guessing, Start Measuring

Before you optimize anything, you need to know what’s actually slow. Not what you think is slow — what the database says is slow.

Enable pg_stat_statements. It tracks every query’s execution count, mean time, and total time. One query tells you where your time goes:

SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

That list will surprise you. The query that runs 50,000 times a day at 2ms each is eating more total time than the one that runs once at 500ms. The ORM-generated query you never look at? Probably in the top 5.

This is step zero of PostgreSQL query optimization. Everything that follows — indexes, query rewrites, connection pooling — is wasted effort if you’re optimizing the wrong queries. Same principle behind any performance investigation: measure first, then fix.

You found your slow queries. Now you need to figure out why they’re slow.

Read an EXPLAIN ANALYZE Like a Developer, Not a DBA

EXPLAIN shows you the query plan — what Postgres thinks will happen. EXPLAIN ANALYZE actually runs the query and shows what did happen. Always use ANALYZE (and wrap data-modifying statements in BEGIN ... ROLLBACK, because ANALYZE really executes them). Estimates lie.

Three things to look for in the output:

Seq Scan on a large table. Postgres is reading every row. On a table with 10 rows, fine. On a table with 10 million rows, that’s your problem.

Nested Loop with a high loops count on the inner node. Postgres re-runs that inner scan once per outer row, which is cheap with an index and few rows, brutal without. The application-level version is the N+1 pattern: your ORM firing one query per row instead of joining.
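To make the N+1 pattern concrete, here’s a sketch, assuming hypothetical orders and users tables: the ORM fires one lookup per row, where a single join does the same work in one round trip.

```sql
-- N+1: fetch the list, then one query per returned row
SELECT id, user_id FROM orders WHERE status = 'pending';
-- ...then, for each order the ORM got back:
SELECT email FROM users WHERE id = $1;

-- One join replaces all of it with a single round trip and one plan
SELECT o.id, u.email
FROM orders o
JOIN users u ON u.id = o.user_id
WHERE o.status = 'pending';
```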

Sort with high memory usage. Postgres is sorting in memory, or spilling to disk as an external merge when the sort exceeds work_mem. An index on your ORDER BY column lets Postgres skip the sort entirely.

Here’s a concrete example. You have a users table with 2 million rows:

EXPLAIN ANALYZE SELECT * FROM users WHERE email = '[email protected]';

You’ll see a Seq Scan — Postgres is checking every row for that email. Add the index:

CREATE INDEX idx_users_email ON users(email);

Run EXPLAIN ANALYZE again. Seq Scan becomes Index Scan. Query time drops from 200ms to 0.1ms. Three orders of magnitude from one line of SQL.
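The before-and-after plans look roughly like this — the costs and timings below are illustrative, from a hypothetical run, not real output:

```text
-- Before: every row checked
Seq Scan on users  (cost=0.00..44832.00 rows=1 width=72)
                   (actual time=102.1..198.7 rows=1 loops=1)
  Filter: (email = '[email protected]'::text)
  Rows Removed by Filter: 1999999

-- After: direct lookup via the index
Index Scan using idx_users_email on users  (cost=0.43..8.45 rows=1 width=72)
                                           (actual time=0.04..0.05 rows=1 loops=1)
  Index Cond: (email = '[email protected]'::text)
```

The "Rows Removed by Filter" line is the tell: Postgres touched two million rows to return one.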

If the output shows a big gap between estimated rows and actual rows, your table statistics are stale. Run ANALYZE users; to update them. The planner makes decisions based on those stats — bad stats mean bad plans.

While you’re reading the output, watch for these anti-patterns: SELECT * pulling columns you never use, functions in WHERE clauses that prevent index usage (WHERE LOWER(email) = '...' won’t hit a standard index on email), and implicit type casts that force Postgres to cast the column instead of the parameter, which disables the index.
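The LOWER(email) case has a standard fix: an expression index, which indexes the computed value so the function call in the WHERE clause can use it. A sketch, reusing the hypothetical users table from above:

```sql
-- Index the expression itself, not the raw column
CREATE INDEX idx_users_email_lower ON users (LOWER(email));

-- This predicate can now use the index
SELECT * FROM users WHERE LOWER(email) = '[email protected]';
```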

You can read EXPLAIN output now. The next question is where exactly to put the indexes you clearly need.

Index Strategy: Where, Not What

Every tuning guide lists index types — B-tree, GIN, GiST, BRIN. That’s not your problem. Your problem is knowing where to add them.

The rule is simple: if a column appears in WHERE, JOIN ON, or ORDER BY and the table has more than a few thousand rows, it probably needs an index.

Composite indexes need more thought. Column order matters — equality columns first, range columns last:

-- Good: equality (status) first, range (created_at) second
CREATE INDEX idx_orders_status_created
ON orders(status, created_at);

Partial indexes are the underrated move. If you’re always querying pending orders, don’t index the entire table:

CREATE INDEX idx_orders_pending
ON orders(created_at) WHERE status = 'pending';

Smaller index. Faster lookups. Less write overhead.

That write overhead is the trap. Every index slows down INSERT and UPDATE because Postgres updates every index on the table. Index what your queries actually need. Don’t index what might be useful someday.

To find where indexes are missing, check your table stats:

SELECT relname, seq_scan, idx_scan
FROM pg_stat_user_tables
WHERE seq_scan > 1000
ORDER BY seq_scan DESC;

Tables with high seq_scan and low idx_scan are doing full table scans repeatedly. Those are your targets.

Your queries are fast now. Your indexes are targeted. Then traffic spikes and the whole app chokes. The problem isn’t your queries anymore.

Connection Pooling: The Fix You Didn’t Know You Needed

PostgreSQL forks a new OS process for every connection. That’s not a metaphor — it literally calls fork(). Two hundred connections means 200 processes, each consuming real memory and CPU time.

Most web frameworks hold a database connection open for the entire request, even during template rendering when no queries are running. At 200 concurrent requests, you’re at 200 Postgres connections. At 500, Postgres starts refusing them.

PgBouncer fixes this. It sits between your app and Postgres, multiplexing thousands of application connections onto a small pool of actual database connections. Your app sees unlimited connections. Postgres sees 20. PgBouncer itself can handle 10,000+ concurrent connections with minimal overhead.

In transaction mode — the setting you want — PgBouncer assigns a real connection only when a transaction starts and returns it when the transaction ends. Point your app at PgBouncer’s port instead of Postgres directly.
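A minimal pgbouncer.ini for transaction mode might look like the sketch below; the host, database name, and pool sizes are placeholder assumptions, not recommendations for your workload.

```ini
[databases]
; app connects to PgBouncer on 6432; PgBouncer forwards to Postgres on 5432
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction        ; release the server connection at COMMIT/ROLLBACK
default_pool_size = 20         ; real Postgres connections per database/user pair
max_client_conn = 5000         ; application-side connections PgBouncer will accept
```

One caveat with transaction mode: session-level state (SET, prepared statements, advisory locks held across transactions) doesn’t survive, because each transaction may land on a different server connection.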

On managed services, you may already have this. Supabase has a built-in pooler. AWS has RDS Proxy. Neon pools connections natively. If you’re running containers, PgBouncer is a single sidecar. This is often the single biggest infrastructure win for apps hitting connection limits or seeing intermittent timeouts.

Queries optimized, connections pooled. Is there anything left worth touching in the config?

The Config That Actually Matters (When You Can Change It)

If you’re on RDS, Cloud SQL, or Supabase, you may not control all of these — and that’s fine. The defaults are decent. But if you have access to postgresql.conf, four settings are worth adjusting.

shared_buffers: 25% of RAM, capped around 8 GB. This is Postgres’s own cache. Check your cache hit ratio with pg_stat_database — if it’s below 99%, bump this up.
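One way to compute that hit ratio from pg_stat_statements' neighbor pg_stat_database — a sketch; blks_hit counts buffer-cache hits, blks_read counts blocks fetched from the OS or disk:

```sql
SELECT datname,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
FROM pg_stat_database
WHERE datname = current_database();
```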

work_mem: Per-sort, per-hash, per-connection. A query with three sort operations on a server with 100 connections at work_mem = 256MB can, in the worst case, need 100 × 3 × 256 MB = 75 GB of RAM. Start at 4 MB. Increase carefully.
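A common pattern, rather than raising the global value: bump work_mem only for the transaction that runs a known-heavy query. A sketch, using a hypothetical orders table:

```sql
BEGIN;
SET LOCAL work_mem = '256MB';  -- applies only inside this transaction
SELECT customer_id, sum(total)
FROM orders
GROUP BY customer_id
ORDER BY sum(total) DESC;      -- the big sort/hash gets the extra memory
COMMIT;
```

SET LOCAL reverts automatically at COMMIT or ROLLBACK, so the low global default keeps protecting the other 99 connections.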

effective_cache_size: 50-75% of total RAM. This doesn’t allocate memory — it tells the query planner how much OS cache to expect. Wrong value means the planner avoids index scans it should be choosing.

random_page_cost: If you’re on SSDs — and in 2026, you are — set this to 1.1. The default of 4.0 assumes spinning disks and biases the planner away from index scans.
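Taken together, on a hypothetical dedicated server with 32 GB of RAM and SSD storage, the postgresql.conf fragment might read as follows; the values are illustrative starting points, not universal recommendations.

```ini
shared_buffers = 8GB            # 25% of RAM, at the cap
work_mem = 4MB                  # per sort/hash; raise per-transaction for heavy queries
effective_cache_size = 24GB     # ~75% of RAM; a planner hint, not an allocation
random_page_cost = 1.1          # SSDs make random reads nearly as cheap as sequential
```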

That’s it. Four settings. The other 200+ in postgresql.conf are fine at defaults for 95% of workloads.

Now let’s make sure you don’t forget any of this.

The Checklist

You don’t need 50 knobs. You need a short list, in order:

  1. Enable pg_stat_statements. Find your actual slow queries.
  2. Run EXPLAIN ANALYZE. Fix the worst offenders.
  3. Add indexes where WHERE, JOIN, and ORDER BY hit unindexed columns.
  4. Set up connection pooling.
  5. Tune the four config values — if you have access.

That first tuning guide you read wasn’t wrong about shared_buffers. It just skipped the part where your queries needed fixing first.

The best-tuned Postgres is the one where you fixed the queries instead of the config.