BBR vs CUBIC: Practical Selection Playbook for Latency-Sensitive Services
Date: 2026-03-04
Category: knowledge
Why this deserves a slot in your runbook
Most teams treat congestion control as a kernel default and move on.
But for API backends, streaming, market-data fanout, and long-haul replication, congestion control can materially change:
- p95/p99 latency,
- goodput under loss,
- fairness against other traffic,
- retransmission cost.
The choice is rarely “which algorithm is universally best.” It is mostly which failure mode you prefer on your paths.
Core mental model (in one minute)
CUBIC (default in many stacks)
- Loss-based growth with a cubic window function.
- Standardized in RFC 9438.
- Usually robust and predictable on mixed Internet paths.
BBR (model-based family)
- Uses estimated bottleneck bandwidth + RTT model to control pacing and in-flight data.
- IETF draft (draft-ietf-ccwg-bbr, currently documenting BBRv3) describes modern behavior.
- Often aims for lower queueing delay and better utilization in cases where loss-based control misreads path conditions.
Where CUBIC is often the safer baseline
- Heterogeneous shared environments where fairness against legacy loss-based traffic matters more than raw tail-latency gains.
- Paths with sufficient buffering where CUBIC’s behavior is already acceptable.
- Operational simplicity: mature defaults, broad tooling familiarity.
RFC 9438 also reflects long deployment history across major OS stacks.
Where BBR can shine
- Shallow-buffer or random-loss paths where loss does not necessarily mean true congestion.
- Bufferbloat-prone paths where keeping queues smaller helps tail latency.
- Bandwidth-delay-product-heavy links (long RTT + high bandwidth) where linear/loss-centric behaviors can underutilize capacity.
Google’s BBR publications and IETF draft motivation highlight these exact scenarios.
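To see why high-BDP links matter, a back-of-envelope bandwidth-delay product check helps: if the BDP dwarfs typical bottleneck buffers, loss-based control is more likely to underutilize the path. A minimal sketch (the `bdp` helper is illustrative, not a standard tool):

```shell
#!/bin/sh
# Back-of-envelope bandwidth-delay product: the amount of data that must be
# in flight to fill the pipe. (Illustrative helper, not a standard tool.)
bdp() {
  bw_mbit=$1; rtt_ms=$2
  # BDP in bytes = bandwidth (bit/s) * RTT (s) / 8
  awk -v bw="$bw_mbit" -v rtt="$rtt_ms" \
      'BEGIN { printf "%.0f\n", bw * 1e6 * (rtt / 1000) / 8 }'
}
bdp 1000 150   # 1 Gbit/s at 150 ms RTT -> prints 18750000 (~18 MB in flight)
```

A single loss event that halves a multi-megabyte window on such a path takes many RTTs to recover from, which is exactly the regime where model-based control can help.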
The footguns people underestimate
1) “More goodput” can come with different loss/retransmission behavior
Empirical studies (for example, IMC’19 work summarized by APNIC) found environment-sensitive trade-offs:
- In some shallow-buffer cases, BBR improved goodput strongly.
- In the same family of tests, retransmissions could be much higher than CUBIC.
Translation: the wins are real, but there is no free lunch.
2) Fairness is path- and buffer-dependent
The same APNIC summary reports that BBR-vs-CUBIC bandwidth share can flip as bottleneck buffer conditions change.
So both “BBR is always unfair” and “BBR is always fair” are wrong; fairness depends on the path and buffer conditions.
3) Version confusion (BBRv1/v2/v3)
A lot of blog advice still targets older BBR behavior. Operationally, always pin your claims to:
- kernel version,
- BBR implementation version,
- qdisc/pacing setup.
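Collecting that context is cheap. A diagnostic sketch, assuming a typical Linux box (`eth0` is a placeholder interface name; adjust for your environment):

```shell
#!/bin/sh
# Environment fingerprint to pin any BBR/CUBIC claim to (sketch).
uname -r                                        # kernel version
sysctl -n net.ipv4.tcp_congestion_control       # active default algorithm
sysctl -n net.core.default_qdisc                # default qdisc (fq, fq_codel, ...)
tc qdisc show dev eth0 2>/dev/null || true      # per-device qdisc/pacing ("eth0" is an example)
```

Record this fingerprint alongside every benchmark result, so numbers from different fleets remain comparable.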
Practical rollout pattern (recommended)
1) Define success before touching sysctl
Track, at minimum, per service class:
- throughput/goodput,
- retransmissions,
- p95/p99 latency,
- timeout/retry rate,
- fairness impact on co-tenant traffic.
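Retransmission rate is the easiest of these to compute from kernel counters: `/proc/net/snmp` exposes cumulative `OutSegs` and `RetransSegs` on the `Tcp:` line, so a canary window reduces to two snapshots and a division. A sketch (the `retrans_rate` helper is illustrative):

```shell
#!/bin/sh
# Retransmission-rate delta between two counter snapshots (sketch).
# Args: out_segs_before retrans_before out_segs_after retrans_after,
# e.g. taken from /proc/net/snmp (Tcp: OutSegs, RetransSegs).
retrans_rate() {
  awk -v o1="$1" -v r1="$2" -v o2="$3" -v r2="$4" \
      'BEGIN { printf "%.4f\n", (r2 - r1) / (o2 - o1) }'
}
# 1,000,000 new segments with 2,500 retransmits in the window:
retrans_rate 5000000 10000 6000000 12500   # prints 0.0025 (0.25%)
```

Compare this rate between BBR and CUBIC cohorts over the same window before trusting any goodput win.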
2) Segment by traffic class, not by host fleet only
Good first candidates:
- long-lived bulk flows with latency pressure,
- inter-region replication,
- high-RTT client cohorts.
Keep short-lived control-plane traffic on existing defaults until proven safe.
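Linux lets you scope congestion control narrower than the host: per route via the `congctl` route attribute (recent kernels and iproute2), or per socket via `setsockopt(TCP_CONGESTION)` in the application. A sketch with documentation-range placeholder addresses:

```shell
#!/bin/sh
# Scope BBR to one replication destination instead of the whole host
# (sketch; 203.0.113.0/24 and 192.0.2.1 are placeholder addresses).
ip route replace 203.0.113.0/24 via 192.0.2.1 congctl bbr
# Verify the attribute took effect:
ip route show 203.0.113.0/24
```

This keeps control-plane and co-tenant traffic on the system default while the bulk-replication path runs the candidate algorithm.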
3) Run canary by path profile
Canary dimensions:
- low vs high RTT,
- shallow vs deep buffering environments,
- low-loss vs moderate-loss windows.
Do not average everything into one global KPI.
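Splitting live flows into RTT cohorts can be done straight from `ss -tin` output, which prints an `rtt:SRTT/RTTVAR` token (milliseconds) per established socket. A sketch, with a 100 ms cohort threshold chosen purely as an example:

```shell
#!/bin/sh
# Bucket flows into low-/high-RTT cohorts from "ss -tin" output (sketch;
# the 100 ms threshold is an example, tune it for your path mix).
rtt_cohorts() {
  awk '
    /rtt:/ {
      for (i = 1; i <= NF; i++) if ($i ~ /^rtt:/) {
        split(substr($i, 5), a, "/")          # "rtt:12.5/3.0" -> srtt 12.5
        if (a[1] + 0 >= 100) high++; else low++
      }
    }
    END { printf "low_rtt=%d high_rtt=%d\n", low, high }
  '
}
ss -tin 2>/dev/null | rtt_cohorts
```

Report canary KPIs per cohort; a regression confined to the high-RTT bucket is invisible in a global average.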
4) Include fairness guardrails
During canary, watch for starvation or dominance patterns versus CUBIC neighbors. If multi-tenant impact appears, gate rollout by environment class.
5) Keep rollback trivial
Congestion-control rollout should be a reversible config change, not a platform migration.
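In practice that reversibility can be two sysctl writes plus a recorded rollback line. A sketch, assuming root and a kernel with `tcp_bbr` available; note that only new connections pick up the changed default:

```shell
#!/bin/sh
# Reversible rollout sketch: record the current default, switch, and keep a
# one-line rollback. Existing sockets keep their algorithm; only new ones change.
prev=$(sysctl -n net.ipv4.tcp_congestion_control)
echo "rollback with: sysctl -w net.ipv4.tcp_congestion_control=$prev"
sysctl -w net.core.default_qdisc=fq   # pacing-friendly qdisc historically paired with BBR;
                                      # newer kernels also pace internally
sysctl -w net.ipv4.tcp_congestion_control=bbr
```

Persist the change via sysctl.d only after the canary passes, so a reboot during the trial also acts as a rollback.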
Linux operator notes
- Check available algorithms:
sysctl net.ipv4.tcp_available_congestion_control
- Check active default:
sysctl net.ipv4.tcp_congestion_control
- Inspect live socket state (including BBR fields where available):
ss -tin
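When BBR is active, `ss -tin` appends a `bbr:(bw:...,mrtt:...,pacing_gain:...,cwnd_gain:...)` block with the model state. A small filter can pull it out (sketch; no output simply means no sockets are currently running BBR):

```shell
#!/bin/sh
# Extract the bbr:(...) model-state block from "ss -tin" output (sketch).
bbr_state() {
  grep -o 'bbr:([^)]*)'
}
ss -tin 2>/dev/null | bbr_state || true
```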
For BBR-specific testing, Google’s BBR docs and FAQ emphasize realistic emulation setup (especially around netem placement and pacing realism).
Quick decision matrix
- Need conservative interoperability + predictable mixed-traffic behavior? Start with CUBIC.
- Need lower queueing delay / better performance on high-BDP or shallow-buffer links? Test BBR by cohort.
- Need universal answer across all paths? You won’t get one — split policy by traffic/path class.
References
- RFC 9438 — CUBIC for Fast and Long-Distance Networks
  https://datatracker.ietf.org/doc/html/rfc9438
- RFC 9406 — HyStart++: Modified Slow Start for TCP
  https://datatracker.ietf.org/doc/html/rfc9406
- IETF Internet-Draft — BBR Congestion Control (draft-ietf-ccwg-bbr)
  https://datatracker.ietf.org/doc/draft-ietf-ccwg-bbr/
- Google Research — BBR: Congestion-Based Congestion Control
  https://research.google/pubs/bbr-congestion-based-congestion-control/
- APNIC Blog (IMC’19 summary) — When to use and not use BBR
  https://blog.apnic.net/2020/01/10/when-to-use-and-not-use-bbr/
- Google BBR repository + FAQ
  https://github.com/google/bbr
One-sentence takeaway
Treat CUBIC vs BBR as a path-conditioned policy decision: benchmark by traffic class, promote only where tail latency and goodput gains survive fairness/retransmission checks, and keep instant rollback in place.