You open the query log and count 47 SQL statements for what should be one page load.
The ORM-vs-raw-SQL debate flares up every time this happens. The urge to rip out the ORM and rewrite everything in hand-crafted queries is real: I’ve felt it, you’ve felt it, your tech lead felt it last quarter. But before you torch your abstraction layer, it’s worth asking: is the ORM actually the problem, or just the messenger?
I pulled up EXPLAIN ANALYZE on a few dozen queries across three production codebases. Here’s what the numbers actually say.
What Your ORM Is Doing Behind Your Back
ORMs don’t write bad SQL. They write more SQL than you’d write yourself — and for simple queries, that’s a rounding error. For complex ones, it compounds fast.
The mechanism is query splitting. Your ORM turns one logical operation into multiple database calls. A Prisma findMany with two include clauses fires three separate queries — one for the parent, one per relation. A nested create with two levels of nesting generates six SQL statements and five round trips. You asked for one thing. The ORM negotiated it into a conversation.
Then there’s the N+1 problem. Fetch 50 orders. The ORM fires one query for the orders, then 50 individual queries to load each order’s customer. That’s 51 queries instead of a single JOIN. N+1 isn’t a bug; it’s the default behavior of lazy loading, and most ORMs ship with it on.
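Here’s a minimal sketch of both paths using Python’s built-in sqlite3. The schema and names are invented for illustration, not taken from any real codebase: the lazy-loading path issues one query per order, while a single JOIN returns the same rows in one statement.

```python
import sqlite3

# Illustrative schema: table and column names are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id));
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders (customer_id) VALUES (1), (2), (1);
""")

# Lazy-loading path: one query for the orders, then one per order for its customer.
orders = conn.execute("SELECT id, customer_id FROM orders ORDER BY id").fetchall()
n_plus_one = [
    (oid, conn.execute("SELECT name FROM customers WHERE id = ?",
                       (cid,)).fetchone()[0])
    for oid, cid in orders
]  # 1 + len(orders) round trips

# JOIN path: the same data in a single statement.
joined = conn.execute("""
    SELECT o.id, c.name
    FROM orders o JOIN customers c ON c.id = o.customer_id
    ORDER BY o.id
""").fetchall()

assert n_plus_one == joined  # identical rows, 4 round trips vs. 1
```

Three orders makes the cost invisible; fifty orders makes it a log full of near-identical customer lookups.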
This matters more in 2026 than it did in 2020. Serverless and edge deployments amplify round-trip costs. Inter-region database latency runs around 70ms per hop. When your ORM splits one query into six, that’s not 6× the query time — it’s 6× the network overhead on top of it.
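The back-of-envelope math is worth spelling out. The 70ms hop comes from the paragraph above; the 5ms per-statement execution time is an assumed placeholder, purely illustrative:

```python
HOP_MS = 70   # inter-region round-trip latency, per the text above
EXEC_MS = 5   # assumed per-statement execution time (illustrative placeholder)

def total_ms(round_trips: int) -> int:
    # Each statement pays the full network hop plus its own execution time.
    return round_trips * (HOP_MS + EXEC_MS)

one_query = total_ms(1)    # 75 ms
six_queries = total_ms(6)  # 450 ms: the network, not the database, dominates
```

At these numbers, splitting one statement into six costs 375ms of pure overhead before the database does any extra work at all.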
A 2025 peer-reviewed benchmark tested Prisma ORM against raw SQL on PostgreSQL across eight query patterns. Raw SQL won every category: up to 5× faster execution, 6–9× less CPU, 2–3× less memory.
Those numbers look damning. But they measure the worst case — complex queries on a dataset with 15+ interrelated tables. Your average CRUD endpoint isn’t in that category. So which of your queries actually cross the line?
The Three Queries That Always Cross the Line
After years of reviewing query plans, I’ve stopped thinking about ORM vs. raw SQL as a binary choice. It’s three specific patterns where the ORM consistently costs more than it saves; everywhere else, it’s fine.
Pattern 1: Aggregation reports. Anything with GROUP BY, HAVING, window functions, or CTEs. ORMs either can’t express these cleanly or generate a nested mess of subqueries. You end up writing raw SQL wrapped in an ORM escape hatch anyway. The ORM just makes you fight for it.
Pattern 2: Multi-join reads with conditional filtering. The classic dashboard query. Four or more tables, conditional WHERE clauses, pagination. This is where query splitting kills you. Instead of one efficient query that the planner can optimize as a unit, the ORM fires a waterfall of sequential fetches — each one waiting for the last to return IDs before it can start.
Pattern 3: Bulk conditional writes. “Update all orders where status is X, set priority based on Y.” A single UPDATE...FROM in raw SQL. Your ORM? It loads every matching row into memory, mutates the objects, and writes them back one at a time. I’ve seen this turn a 50ms operation into a 12-second one on a table with 80K rows.
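A sketch of the two write paths in Python/sqlite3 (the schema and the priority rule are invented for illustration). The loop mirrors what the ORM does internally, one UPDATE per row; the conditional UPDATE does the same work in one statement:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         status TEXT, total REAL, priority INTEGER);
    INSERT INTO orders (status, total, priority) VALUES
        ('open', 250.0, 0), ('open', 40.0, 0), ('closed', 900.0, 0);
""")

# ORM-style: load every matching row, mutate in memory, write back one at a time.
rows = conn.execute("SELECT id, total FROM orders WHERE status = 'open'").fetchall()
for oid, total in rows:  # one UPDATE per row: N round trips
    conn.execute("UPDATE orders SET priority = ? WHERE id = ?",
                 (1 if total > 100 else 2, oid))

# Raw SQL: the same change as a single conditional UPDATE.
conn.execute("""
    UPDATE orders
    SET priority = CASE WHEN total > 100 THEN 1 ELSE 2 END
    WHERE status = 'open'
""")

priorities = [p for (p,) in conn.execute("SELECT priority FROM orders ORDER BY id")]
assert priorities == [1, 2, 0]  # closed order untouched either way
```

On three rows both paths are instant. On 80K rows, the loop is 80K round trips plus 80K objects in memory; the single UPDATE is one statement the planner can optimize.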
The inverse is just as important: CRUD endpoints, simple relation traversal, migrations, basic filtering. Your ORM handles these fine. Don’t rewrite what isn’t broken. The goal of ORM-side query optimization isn’t eliminating the ORM; it’s knowing when to use its features and when to step around them.
But don’t take my word for it. Let me show you what the query planner sees.
EXPLAIN Doesn’t Lie: ORM vs Raw SQL Under the Hood
Here’s where most articles hand-wave. Let’s look at actual query plans.
The N+1 fix. An ORM eager-loading orders with customers on a 50K-row table:
```sql
-- ORM generates:
SELECT * FROM orders WHERE created_at > '2026-01-01';  -- 1 query
SELECT * FROM customers WHERE id = $1;                 -- × 50 queries
```

```
Seq Scan on customers  (cost=0.00..12.50 rows=1)
-- repeated 50 times, total: ~625 cost units
```
The raw SQL alternative:
```sql
SELECT o.*, c.name, c.email
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at > '2026-01-01';
```

```
Hash Join  (cost=15.00..245.00 rows=4800)
  ->  Seq Scan on orders  (cost=0.00..180.00 rows=4800)
  ->  Hash  (cost=12.50..12.50 rows=500)
        ->  Seq Scan on customers  (cost=0.00..12.50 rows=500)
```
One pass. Hash join instead of 50 nested loops. The planner builds a hash table of customers once and matches in memory. Total cost drops from ~625 units across 51 queries to ~245 in one query.
The aggregation gap. An ORM trying to get monthly revenue with customer segments:
```sql
-- ORM generates (simplified):
SELECT * FROM orders WHERE ...;
-- then groups and sums in application code
```
The raw SQL with a CTE:
```sql
WITH monthly AS (
    SELECT date_trunc('month', created_at) AS month,
           customer_segment,
           SUM(total) AS revenue
    FROM orders
    GROUP BY 1, 2
)
SELECT * FROM monthly ORDER BY month, revenue DESC;
```
The ORM pulls every row to the application and aggregates in your runtime. The CTE does a single pass with an index on created_at. On a million-row orders table, that’s the difference between moving a million rows over the wire and moving a few hundred aggregated results.
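The same trade-off, sketched with sqlite3 (`substr` stands in for Postgres’s `date_trunc`, and the data is invented): the app-side path materializes every row in the runtime, while the SQL path ships only the aggregates.

```python
import sqlite3
from collections import defaultdict

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (created_at TEXT, customer_segment TEXT, total REAL);
    INSERT INTO orders VALUES
        ('2026-01-05', 'smb', 100.0), ('2026-01-20', 'smb', 50.0),
        ('2026-01-11', 'enterprise', 900.0), ('2026-02-02', 'smb', 75.0);
""")

# ORM-style: pull every row over the wire, aggregate in the runtime.
app_side = defaultdict(float)
for created_at, segment, total in conn.execute("SELECT * FROM orders"):
    app_side[(created_at[:7], segment)] += total  # month key, e.g. '2026-01'

# Raw SQL: one pass in the database; only the aggregates cross the wire.
db_side = {
    (month, segment): revenue
    for month, segment, revenue in conn.execute("""
        SELECT substr(created_at, 1, 7), customer_segment, SUM(total)
        FROM orders
        GROUP BY substr(created_at, 1, 7), customer_segment
    """)
}

assert dict(app_side) == db_side  # same numbers, very different data transfer
```

With four rows the two paths are indistinguishable. With a million, the first path moves a million rows to your process and the second moves a handful of tuples.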
How to read EXPLAIN in 60 seconds: Seq Scan on a big table without a filter is a red flag. Nested Loop with high row counts means the planner couldn’t find a better join strategy — often because the ORM gave it queries in pieces instead of as a whole. When actual rows diverges wildly from estimated rows, the planner is guessing because it can’t see the full picture.
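That reading can be mechanized as a rough triage script. This is a sketch, not a real linter: the regex, the red-flag strings, and the 1000-row threshold are arbitrary examples.

```python
import re

# Quick triage, not a verdict: flag plan lines that hit a red flag
# AND carry a large estimated row count.
RED_FLAGS = ("Seq Scan", "Nested Loop")

def flag_plan_lines(plan, min_rows=1000):
    flagged = []
    for line in plan.splitlines():
        m = re.search(r"rows=(\d+)", line)
        rows = int(m.group(1)) if m else 0
        if rows >= min_rows and any(flag in line for flag in RED_FLAGS):
            flagged.append(line.strip())
    return flagged

plan = """Hash Join  (cost=15.00..245.00 rows=4800)
  ->  Seq Scan on orders  (cost=0.00..180.00 rows=4800)
  ->  Hash  (cost=12.50..12.50 rows=500)
        ->  Seq Scan on customers  (cost=0.00..12.50 rows=500)"""

hits = flag_plan_lines(plan)  # the big scan on orders surfaces; the small one does not
```

Note that a flagged line isn’t automatically bad: the seq scan on orders above lives inside a perfectly good hash join. The script tells you where to look, not what to conclude.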
The ORM’s SQL isn’t wrong. It’s just not what you’d write. And for your /api/users/:id endpoint, that doesn’t matter. For your reporting dashboard with six joins and a date range filter, it matters a lot.
So now you know which queries to fix. The next question is how to introduce raw SQL without scattering escape hatches across your codebase.
The Hybrid Pattern That Actually Ships
The answer isn’t “ORM vs raw SQL.” It’s a clean boundary between them.
The repository pattern. Your ORM owns models, migrations, and simple CRUD. Raw SQL lives in a dedicated queries module — named files for reports, dashboards, and bulk operations. Not raw SQL strings scattered in controllers. Structured, version-controlled query files that your team reviews like any other code.
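A minimal sketch of that boundary in Python/sqlite3. The class and query names are invented, and a real codebase would likely load the SQL from dedicated `.sql` files rather than a dict:

```python
import sqlite3

# Stands in for a queries/ directory of named, reviewable SQL files.
REPORT_QUERIES = {
    "monthly_revenue": """
        SELECT substr(created_at, 1, 7) AS month, SUM(total) AS revenue
        FROM orders
        GROUP BY substr(created_at, 1, 7)
        ORDER BY month
    """,
}

class OrderRepository:
    def __init__(self, conn):
        self.conn = conn

    def create(self, created_at, total):
        # Simple CRUD: ORM territory in a real codebase.
        cur = self.conn.execute(
            "INSERT INTO orders (created_at, total) VALUES (?, ?)",
            (created_at, total))
        return cur.lastrowid

    def report(self, name):
        # The one escape hatch into the raw-SQL layer.
        return self.conn.execute(REPORT_QUERIES[name]).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, created_at TEXT, total REAL)")
repo = OrderRepository(conn)
repo.create("2026-01-05", 100.0)
repo.create("2026-02-01", 40.0)
summary = repo.report("monthly_revenue")
```

The point is the shape, not the code: every raw query has a name, lives in one place, and goes through review like anything else.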
The decision checklist (copy-paste this into your team’s wiki):
- More than 2 JOINs? → Consider raw SQL
- Using GROUP BY, window functions, or CTEs? → Raw SQL
- Bulk update with conditions? → Raw SQL
- Simple CRUD or relation traversal? → ORM is fine
- Response time > 100ms on a query that should be fast? → Run EXPLAIN before rewriting
When you do reach for raw SQL, you don’t have to give up type safety. Prisma’s TypedSQL generates proper TypeScript types for hand-written queries. SQLx in Rust checks your SQL against the database at compile time: typos and type mismatches are caught before the code runs. The “raw SQL means stringly-typed chaos” argument expired in 2024.
There’s a less obvious reason raw SQL is gaining ground in 2026: AI coding assistants write better raw SQL than ORM-specific syntax. Vanilla SQL has orders of magnitude more training data than Prisma’s query API or Django’s ORM methods. The Raw+DC pattern — raw queries paired with typed data classes — is emerging specifically because AI tools handle it more reliably. When your copilot can generate and debug your queries without hallucinating framework-specific methods, that’s a real productivity edge.
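The Raw+DC idea in miniature, with Python’s sqlite3 and a dataclass standing in for the typed result (all names invented): the query is plain SQL any assistant can generate and debug, and the dataclass pins down the shape of each row.

```python
import sqlite3
from dataclasses import dataclass

# The "DC" half: a typed container for one result row.
@dataclass(frozen=True)
class MonthlyRevenue:
    month: str
    revenue: float

# The "Raw" half: vanilla SQL, no framework-specific query API.
def monthly_revenue(conn):
    rows = conn.execute("""
        SELECT substr(created_at, 1, 7) AS month, SUM(total)
        FROM orders
        GROUP BY substr(created_at, 1, 7)
        ORDER BY month
    """)
    return [MonthlyRevenue(month, revenue) for month, revenue in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (created_at TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("2026-01-05", 100.0), ("2026-01-20", 50.0),
                  ("2026-02-02", 75.0)])
report = monthly_revenue(conn)
```

Everything downstream of `monthly_revenue` works with named, typed fields instead of positional tuples, which is the whole value of the pattern.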
Your ORM still handles migrations, schema management, and the 80% of queries that don’t need optimization. The 20% that do get clean, reviewable raw SQL in a dedicated layer. No spaghetti. No rewrites.
The Bottom Line
Those 47 queries in your log? Most of them are fine.
The three that aren’t (your aggregation reports, your multi-join dashboards, your bulk conditional updates) are costing you more than the ORM saves on the other forty-four. That’s the real ORM-vs-raw-SQL trade-off: not a philosophical stance, but a pragmatic line you draw query by query.
Here’s your Monday morning action: run EXPLAIN ANALYZE on your three slowest endpoints. If you see sequential scans on joined tables or nested loops with row counts in the thousands, you’ve found your candidates. Pull those queries into a dedicated module. Write them by hand. Keep the ORM for everything else.
Your ORM isn’t the enemy. Your ORM on the wrong query is.