Server-Sent Events vs WebSockets: Why Half of You Picked Wrong

2026-04-15 · Nico Brandt

You’re wiring up a live feature. You search “server sent events vs websockets,” open five articles, and every one gives you a comparison table that ends with “it depends.”

It doesn’t. Not really.

Most developers default to WebSocket because it sounds more capable. Half the time, SSE would’ve given them less code, automatic reconnection, and zero connection state management. Pick SSE when you actually need bidirectional communication, though, and you’ll be rewriting your transport layer in six months.

Three questions. That’s all it takes to pick right the first time.

Three Questions That Pick For You

Question 1: Does the client need to send data back over this connection?

Not “does the client ever send data” — your app has fetch for that. The question is whether the client needs to push messages over the same persistent connection. Chat messages, cursor positions, game inputs — yes. Notifications, dashboards, live feeds — no.

Yes → WebSocket. No → keep going.

Question 2: Do you need to send binary data?

Audio streams, video frames, file chunks — WebSocket handles binary natively with ArrayBuffer and Blob support. SSE is UTF-8 text only. You can Base64-encode binary over SSE, but you’re adding 33% overhead to pretend a text protocol is something it’s not.
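That 33% isn’t hand-waving: Base64 maps every 3 raw bytes to 4 text characters. A quick check in Node:

```javascript
// Base64 turns every 3 raw bytes into 4 ASCII characters: +33% on the wire.
const raw = Buffer.alloc(3000); // stand-in for a 3 KB audio chunk
const encoded = raw.toString('base64');

console.log(raw.length);     // 3000
console.log(encoded.length); // 4000
```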

Yes → WebSocket. No → keep going.

Question 3: Are you behind infrastructure you don’t control?

Corporate proxies — Sophos XG, WatchGuard, McAfee Web Gateway — silently buffer SSE streams. Your events queue up and arrive in delayed bursts, or never arrive at all. Your code throws no errors. Your users see nothing. An X-Accel-Buffering: no response header tells your own reverse proxy not to buffer, but a corporate proxy you don’t control will ignore it. For users behind enterprise firewalls, SSE becomes a silent failure mode.

Yes → WebSocket. No → SSE.

It’s simpler, it auto-reconnects with Last-Event-ID for stream resumption, and it runs over plain HTTP. One thing worth updating in your mental model: the old “6 connections per domain” SSE limit is an HTTP/1.1 artifact. HTTP/2 multiplexing eliminated it. If that limitation is still in your decision criteria, your information is from 2018.

The framework tells you which to pick. Here’s what the code difference actually looks like.

Same Feature, Both Protocols: A Notification Feed

Same feature. Server pushes notification events, client renders them. Two implementations.

SSE server (Node.js):

const express = require('express');
const { EventEmitter } = require('events');

const app = express();
const emitter = new EventEmitter(); // your app's event bus

app.get('/notifications', (req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
    'X-Accel-Buffering': 'no', // tell nginx-style proxies not to buffer the stream
  });

  const send = (data) => {
    res.write(`id: ${data.id}\ndata: ${JSON.stringify(data)}\n\n`);
  };

  emitter.on('notification', send);
  req.on('close', () => emitter.off('notification', send));
});

app.listen(3000);

SSE client:

const source = new EventSource('/notifications');
source.onmessage = (e) => renderNotification(JSON.parse(e.data));

That’s it. The browser reconnects automatically. If the connection drops, EventSource re-establishes it and sends the last event ID so the server can replay what was missed.
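Resumption needs server cooperation: on reconnect, the browser sends a Last-Event-ID request header, and the server replays anything newer. A minimal sketch, assuming a hypothetical in-memory backlog:

```javascript
// Hypothetical in-memory backlog of recent events, newest last.
const backlog = [
  { id: 1, text: 'build started' },
  { id: 2, text: 'build passed' },
  { id: 3, text: 'deploy finished' },
];

// Return every event the client missed while disconnected.
function eventsSince(lastEventId) {
  const last = Number(lastEventId) || 0;
  return backlog.filter((e) => e.id > last);
}

// In the handler, before subscribing to live events:
//   eventsSince(req.headers['last-event-id']).forEach(send);
console.log(eventsSince('1').map((e) => e.id)); // [ 2, 3 ]
```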

WebSocket server:

const WebSocket = require('ws');
const { EventEmitter } = require('events');

const emitter = new EventEmitter(); // your app's event bus
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (ws) => {
  ws.isAlive = true; // mark live so the first heartbeat sweep doesn't kill it

  const send = (data) => {
    if (ws.readyState === WebSocket.OPEN) {
      ws.send(JSON.stringify(data));
    }
  };

  emitter.on('notification', send);
  ws.on('close', () => emitter.off('notification', send));
  ws.on('pong', () => { ws.isAlive = true; });
});

// Detect dead connections
setInterval(() => {
  wss.clients.forEach((ws) => {
    if (!ws.isAlive) return ws.terminate();
    ws.isAlive = false;
    ws.ping();
  });
}, 30000);

WebSocket client:

function connect() {
  const ws = new WebSocket('ws://localhost:8080');
  ws.onmessage = (e) => renderNotification(JSON.parse(e.data));
  ws.onclose = () => setTimeout(connect, 1000 * Math.random());
}
connect();

Count the differences. The SSE server is a standard HTTP endpoint — any reverse proxy, CDN, or load balancer that handles HTTP handles SSE. The WebSocket server needs a separate upgrade handshake, connection state tracking, and manual ping/pong for dead connection detection.

The SSE client is two lines. The WebSocket client needs manual reconnection with backoff. That setTimeout with Math.random() is the bare minimum — production code needs exponential backoff and jitter, or ten thousand clients reconnect simultaneously when your server restarts.
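What “exponential backoff and jitter” looks like in practice, as a sketch rather than a library recommendation:

```javascript
// Exponential backoff with full jitter: the ceiling doubles per attempt,
// and the actual delay is a random point below it, so restarting clients
// don't reconnect in lockstep.
function reconnectDelay(attempt, baseMs = 1000, capMs = 30000) {
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * ceiling;
}

// ws.onclose = () => setTimeout(connect, reconnectDelay(attempt++));
```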

For a notification feed — server pushes, client listens — SSE is half the code and zero state management. WebSocket works, but you’re paying complexity tax on a feature that only goes one direction.

Most apps aren’t this clean, though. What happens when the same app needs notifications and chat?

The Hybrid Pattern: Use Both (Seriously)

Here’s what nobody’s teaching: most production apps shouldn’t pick one.

A typical SaaS app has three kinds of real-time needs. Notifications — server pushes, client listens. Unidirectional. Chat or collaborative editing — both sides talk. Bidirectional. Everything else — profile updates, form submissions, API calls — plain request-response.

The hybrid architecture matches transport to use case: SSE for the notifications, WebSocket for the chat, plain fetch for everything else.

This isn’t over-engineering — it’s the opposite. Using WebSocket for notifications is the over-engineering. You’re maintaining connection lifecycle code for a feature that doesn’t need it. SSE connections are cheaper at scale because they’re standard HTTP.

Concrete example: a project management tool. Task assignment notifications come through SSE — one EventSource connection, server pushes events. Team chat runs on WebSocket — bidirectional, low latency. Task creation and editing go through fetch. Three transports, each matched to what it’s actually doing.

The AI angle confirms this pattern. LLM token streaming started with SSE: OpenAI, Anthropic, everyone streaming completion tokens unidirectionally. But as AI apps got interactive, the transport shifted with them. OpenAI’s Realtime API, built for two-way voice conversation, runs over WebSocket. Streaming tokens (unidirectional) → SSE. Interactive AI features (bidirectional) → WebSocket. The framework holds.

Clean in theory. But theory doesn’t survive production without a few scars.

What Will Bite You in Production

The comparison articles cover the protocol differences. Here’s what they skip.

SSE: silent proxy buffering. This is the sneakiest failure mode in real-time web development. Enterprise proxies buffer SSE responses — your events queue up and arrive in delayed bursts, or not at all. No errors in your code. No warnings in your logs. Set X-Accel-Buffering: no and Cache-Control: no-cache. If you can’t control the proxy, SSE is a non-starter in that environment.

SSE: invisible in DevTools. Chrome’s Network tab doesn’t display SSE data during active streaming. You’re debugging blind until you discover the EventStream sub-tab or wire up custom logging. This alone costs hours on a first SSE project.

WebSocket: reconnection is your problem. SSE auto-reconnects with Last-Event-ID for free. WebSocket gives you nothing. You build exponential backoff yourself, handle state sync after reconnection, and plan for thundering herd when your server restarts and thousands of clients hit the upgrade endpoint at once. The kind of thing that works in dev and crumbles under load.

Both: mobile kills your connection. iOS terminates persistent connections within seconds of backgrounding. Android varies by manufacturer. Neither protocol survives this — you need a reconnect-and-catch-up strategy regardless of which you chose.

WebSocket: security isn’t default. WebSocket connections aren’t bound by same-origin policy. Cross-Site WebSocket Hijacking is a real attack vector — unlike standard HTTP requests, the browser won’t enforce origin restrictions on the upgrade handshake. Validate the Origin header on every connection.
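A sketch of that check with the ws library; the allowlist is illustrative:

```javascript
// Illustrative allowlist; in production this comes from config.
const ALLOWED_ORIGINS = new Set(['https://app.example.com']);

function isAllowedOrigin(origin) {
  return ALLOWED_ORIGINS.has(origin);
}

// With the ws library, reject the handshake before it upgrades:
// const wss = new WebSocket.Server({
//   port: 8080,
//   verifyClient: ({ origin }) => isAllowedOrigin(origin),
// });
console.log(isAllowedOrigin('https://evil.example')); // false
```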

Production is where the choice gets tested. But what if you already shipped and chose wrong?

The Migration Tax

Remember those five articles that told you “it depends”? Here’s what they should’ve said.

SSE → WebSocket is a transport-layer rewrite. Server goes from res.write to a WebSocket library with connection management. Client goes from two lines of EventSource to a reconnecting WebSocket wrapper with thirty-plus lines of state handling. Every endpoint. Every event type.

WebSocket → SSE means dropping client-to-server over the persistent connection. If you needed bidirectional, you can’t drop it. If you didn’t — and many apps don’t — the simplification is real, but it’s still a rewrite.

This is why the three-question framework matters. Five minutes of honest answers saves weeks of migration. The answer was never “it depends.” Start with SSE unless the framework sends you to WebSocket. It’s the smaller bet — if you outgrow it, you’ll know exactly when and why, because a specific feature will need bidirectional communication. That’s a clear signal, not a guess.

Match the transport to the feature. Ship it.