Edge Functions vs Serverless: Pick Wrong and You'll Pay for It

2026-04-19 · Nico Brandt

Every edge functions vs serverless article ends the same way: “it depends.” That’s not advice. That’s a Wikipedia entry wearing a blog post costume.

Here’s what actually depends: whether you’ll overpay on cold starts, get blindsided by CPU billing, or deploy to the edge only to discover your database is 200ms away in Virginia. Three ways to pick wrong. Three ways to pay. By the end of this, you’ll have five yes-or-no questions that make the decision for you. No “it depends.”

The Cold Start Tax Is Real — and It Just Got Worse

The numbers aren’t subtle. Cloudflare Workers cold-start in under 5ms. AWS Lambda? 100ms to over a second. OpenStatus ran benchmarks across six global regions: edge P50 was 106ms end-to-end, serverless cold P50 hit 859ms. That’s an 8x gap — not a rounding error.

Then August 2025 happened. AWS started billing for Lambda’s INIT phase — the initialization time that used to be free. Cold starts aren’t just slow anymore. They cost money. For bursty workloads with frequent cold starts, Lambda bills jumped 10–50% overnight. Meanwhile, Cloudflare maintains a 99.99% warm-start rate through traffic coalescing. Lambda’s “cold roulette” averages out to a P50 of 305ms when you blend warm and cold invocations together.

If you stopped here, you’d conclude edge functions win on every axis. Speed, cost, reliability. Ship everything to Workers.

That conclusion has bankrupted more than one startup’s compute budget.

The CPU Time Trap: When Edge Gets More Expensive

Here’s the billing difference that flips the entire calculus. Cloudflare charges CPU time only — the milliseconds your code actually uses the processor. Lambda charges wall time — total duration including I/O waits. For a function that spends 5ms computing and 200ms waiting on a network call, Workers bills 5ms. Lambda bills 205ms.

Sounds like edge wins again. Until your function actually computes something.

At 10 billion requests with 15ms average CPU time, Workers costs roughly $5,969. Lambda costs $6,557. Edge wins by 9%. Push that CPU time to 80ms per request — image processing, PDF generation, anything compute-heavy — and Lambda wins decisively. Factor in the INIT billing change making Lambda’s base cost higher, and the crossover drops even lower than it used to be.

The number you need: once your function consistently burns more than ~50ms of CPU time per invocation, serverless is cheaper. That's the line. Here's what falls on each side:

Under 50ms (edge territory): auth checks, redirects, A/B tests, personalization, geo-routing.

Over 50ms (serverless territory): image processing, PDF generation, ML inference, video transcoding.
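The crossover math can be sketched as a toy model. The rates below are placeholder assumptions picked so the break-even lands near the ~50ms figure; they are not anyone's actual price sheet, and real bills fold in free tiers, memory sizing, and INIT charges.

```typescript
// Illustrative per-million-request cost model. All rates are assumed
// placeholders, not real pricing.
function edgeCostPerM(cpuMs: number, reqRate = 0.30, cpuRate = 0.02): number {
  // Edge bills CPU time only: a flat request charge plus CPU-ms burned.
  return reqRate + cpuMs * cpuRate;
}

function serverlessCostPerM(wallMs: number, reqRate = 0.90, msRate = 0.008): number {
  // Serverless bills wall time; for compute-bound work, wall ≈ CPU.
  return reqRate + wallMs * msRate;
}

function crossoverMs(): number {
  // Solve edgeCostPerM(x) = serverlessCostPerM(x) for x with the defaults above.
  return (0.90 - 0.30) / (0.02 - 0.008);
}
```

With these assumed rates, edge wins at 15ms of CPU, serverless wins at 80ms, and the break-even sits at exactly 50ms. Swap in your own negotiated rates and the same one-line algebra gives your crossover.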

Cost isn’t the only trap, though. There’s a performance gotcha that’s even more counterintuitive.

The Database Problem Nobody Warns You About

An edge function running in São Paulo that queries a database in us-east-1 is slower than a serverless function co-located with that database. You moved compute to the edge. The data didn’t follow.

It gets worse. Cloudflare runs 300+ Points of Presence worldwide. Each PoP opening its own database connections means connection exhaustion. Traditional connection poolers like PgBouncer run in one region — they can’t help you at the edge. And if you’re trying to squeeze every millisecond out of the database side, check out PostgreSQL performance tuning for the co-location optimization playbook.

Solutions that actually work in production:

- A pooling proxy in front of the database (e.g. Cloudflare Hyperdrive, or a managed pooler such as Supavisor) so hundreds of PoPs share a small set of real connections.
- HTTP/WebSocket database drivers built for edge runtimes, such as the Neon and PlanetScale serverless drivers, which sidestep raw TCP connections entirely.
- Read replicas in the regions where your users actually are, with writes routed to the primary.
- Caching reads at the edge while sending writes to a single serverless function co-located with the database.

The honest take: if your app is write-heavy or needs strong consistency, pure edge deployment isn’t the answer. You need a hybrid. And knowing when to use each is the whole game — which brings us to the part most guides are too polite to write.
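The read/write split behind that hybrid can be sketched in a few lines. This is a minimal stand-in, with in-memory Maps playing the roles of an edge KV cache and the single-region database; a real deployment would use an actual KV store and handle TTLs and concurrent invalidation.

```typescript
// Hybrid pattern sketch: reads served from a per-PoP cache, writes
// forwarded to the one region that owns the database.
class HybridStore {
  private edgeCache = new Map<string, string>(); // stand-in for an edge KV store
  constructor(private regionalDb: Map<string, string>) {} // stand-in for the DB

  read(key: string): string | undefined {
    const hit = this.edgeCache.get(key);
    if (hit !== undefined) return hit;      // served at the edge, no DB round trip
    const value = this.regionalDb.get(key); // one trip to the database region
    if (value !== undefined) this.edgeCache.set(key, value);
    return value;
  }

  write(key: string, value: string): void {
    this.regionalDb.set(key, value); // writes always land in the single region
    this.edgeCache.delete(key);      // invalidate so the next read refetches
  }
}
```

The design choice that matters is the invalidate-on-write: the edge never serves a cached value newer writes have made stale, at the cost of one extra origin trip per write.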

Five Questions That Make the Decision for You

No flowchart needed. Answer these in order:

1. Does your function use more than 50ms CPU time? Yes → serverless. Image processing, PDF generation, ML inference, video transcoding — all serverless. The CPU billing math doesn’t lie.

2. Does it need Node.js APIs — fs, child_process, native modules like Prisma or sharp? Yes → serverless. Edge runtimes run V8 isolates, not Node.js. No filesystem, no native bindings, 1–4MB bundle cap. If it won’t run in a Service Worker, it won’t run at the edge.

3. Does it query a database in a single region? Yes → serverless co-located with your DB. Or hybrid: edge cache for reads, serverless for writes.

4. Is latency the primary concern while compute stays light? Yes → edge. Auth checks, redirects, A/B tests, personalization, geo-routing all run under 10ms of CPU. This is edge territory.

5. Is your traffic globally distributed with bursty patterns? Yes → edge. Cold starts are effectively free, the warm-start rate is 99.99%, and light compute costs fractions of a cent per million requests.
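The five checks above can be encoded as a tiny decision helper. Field names and the `Workload` shape are illustrative, not any platform's API; the thresholds come straight from the questions.

```typescript
// Illustrative encoding of the five questions, evaluated in order.
interface Workload {
  cpuMs: number;            // typical CPU time per invocation
  needsNodeApis: boolean;   // fs, child_process, native modules
  singleRegionDb: boolean;  // queries a database pinned to one region
  latencySensitive: boolean;
  globalBursty: boolean;
}

function pickRuntime(w: Workload): "serverless" | "edge" | "hybrid" {
  if (w.cpuMs > 50) return "serverless";    // Q1: past the CPU billing crossover
  if (w.needsNodeApis) return "serverless"; // Q2: V8 isolates lack Node APIs
  if (w.singleRegionDb) return "hybrid";    // Q3: co-locate writes, cache reads
  if (w.latencySensitive || w.globalBursty) return "edge"; // Q4 and Q5
  return "serverless";                      // nothing pushes to the edge
}
```

Note the ordering does real work: a compute-heavy function that also queries a single-region database still lands on serverless, because the CPU question disqualifies edge before the database question is ever asked.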

The answer most teams land on: use both. Edge for the request path — auth, routing, personalization. Serverless for the work path — database operations, heavy compute, anything requiring full Node.js. The hybrid pattern isn’t a cop-out. It’s the correct architecture for most production apps in 2026.

Worth noting: Deno Deploy sits in the middle — edge distribution with broader npm compatibility than Workers. If pure edge feels too constraining but you want global distribution, it’s a legitimate third option.

The Hybrid Pattern in Practice

Next.js makes this concrete. Same app, per-route runtime selection:

// app/api/session/route.ts: opt this route into the edge runtime (auth, geo-routing)
export const runtime = 'edge';

// app/api/orders/route.ts: opt this route into Node.js serverless (database writes)
export const runtime = 'nodejs';

The architecture: auth middleware at the edge (sub-5ms), API routes on regional serverless co-located with Postgres, static assets on a CDN. Three tiers, one deploy. If you're choosing between JWT and session authentication for that edge layer, JWT's stateless nature pairs naturally with edge distribution. And if you're already thinking about how your bundler handles this split, you're asking the right question.

The one-liner: edge for reads, serverless for writes, CDN for assets.

The Bottom Line

“Pick wrong and you’ll pay” — now you know the three ways people pay. The cold start tax on Lambda costs speed and money since August 2025. The CPU time trap on Workers makes edge more expensive the moment compute gets heavy. And deploying to the edge with a distant database makes latency worse than staying regional.

The developers who treat edge functions vs serverless as a binary choice are the ones who keep paying. The ones who ask five questions and split their workload stop overpaying for both.

Those five questions took me two production incidents and one very expensive month of Workers billing to learn. You just got them in six minutes.