QUIC migration: what half a year in production taught us

Engineering team · Published 1/28/2026 · 9 min

MTU surprises, unexpectedly high server CPU, and how we stopped fearing the 0-RTT handshake.

Why migrate at all

Our initial transport stack was TLS + WebSocket. That works while the network is friendly, but as soon as long-lived TCP connections get throttled, both suffer the same way: latency climbs and RTT jitter grows.

QUIC gives us three things that matter:

  1. 0-RTT session resumption: reconnects after a drop take milliseconds, not seconds.
  2. Multipath readiness: Apple and Google are already running real multipath experiments.
  3. A smaller DPI fingerprinting surface: the protocol is still maturing, so signatures aren't as sharp yet.

Surprise #1 — MTU

On day one we noticed that on mobile networks (especially LTE) QUIC packets occasionally fragment at the IP layer. Some carriers' DPI drops UDP fragments, so the handshake simply dies. The fix: forced PMTUD plus a fallback datagram size of 1200 bytes, the minimum UDP payload QUIC requires every path to support.

Surprise #2 — CPU

QUIC performs far more AEAD operations than TLS over TCP: every packet is encrypted individually, whereas TLS can coalesce a large write into a single record. On the servers we saw +40% CPU compared to TLS connections. What saved us:

  • Batched packet processing (libquic allows it).
  • AES-NI on every x86 node.
  • For mobile clients on older ARM chips without crypto extensions, fall back to TLS.
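The "one AEAD call per packet" cost follows directly from how QUIC builds nonces: each packet number produces a distinct nonce (RFC 9001, section 5.3), so payloads can't share an encryption call. A sketch of just the nonce construction (the IV below is an arbitrary example value; the actual AES-GCM / ChaCha20-Poly1305 call would wrap the payload):

```python
def packet_nonce(iv: bytes, packet_number: int) -> bytes:
    """RFC 9001 §5.3: left-pad the packet number to the IV length, XOR with the IV."""
    pn = packet_number.to_bytes(len(iv), "big")
    return bytes(a ^ b for a, b in zip(iv, pn))

iv = bytes.fromhex("6b26114b9cba2b63a9e8dd4f")  # example 12-byte IV

# Every packet number yields a unique nonce, hence a separate AEAD
# invocation per packet -- unlike TLS, which can batch data into records.
assert packet_nonce(iv, 0) == iv          # packet number 0 leaves the IV unchanged
assert packet_nonce(iv, 1) != packet_nonce(iv, 2)
```

Batching at the I/O layer (sendmmsg/recvmmsg-style processing) amortizes syscall overhead, but the per-packet crypto itself is irreducible; that is why AES-NI mattered so much.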

Surprise #3 — 0-RTT paranoia

At first we were afraid to enable 0-RTT: replay attacks, session ticket leaks. For our VPN use case it turned out to be a non-issue: we authenticate the user via a separate handshake after the connection is up, so replaying a 0-RTT packet gets the attacker nothing useful. We turned it on and shaved about 180 ms off reconnects.
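The replay-safety argument can be stated as a state machine: the server accepts 0-RTT data, but nothing privileged happens until the separate in-tunnel auth handshake completes, so a replayed 0-RTT flight is inert. A minimal sketch (class and method names are hypothetical, not our production code):

```python
class Connection:
    """Server-side view of one client connection."""

    def __init__(self) -> None:
        self.authenticated = False

    def on_auth_handshake(self, credentials_valid: bool) -> None:
        # Runs inside the tunnel, after the QUIC handshake. A replayed
        # 0-RTT flight cannot produce fresh valid credentials here.
        self.authenticated = credentials_valid

    def handle_packet(self, payload: bytes) -> str:
        if not self.authenticated:
            # 0-RTT (possibly replayed) data before auth: drop, never forward.
            return "dropped"
        return "forwarded"

conn = Connection()
conn.handle_packet(b"replayed 0-RTT data")   # -> "dropped"
conn.on_auth_handshake(credentials_valid=True)
conn.handle_packet(b"user traffic")          # -> "forwarded"
```

The design choice is that 0-RTT only accelerates transport setup; authorization never rides in the 0-RTT data itself.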

Outcome

QUIC is now the default transport on mobile. TLS stays as a fallback for desktop on harsh networks where UDP gets nuked.
