Cloudflare Durable Objects SQLite vs D1 Selection Playbook
Date: 2026-04-06
Category: knowledge
Domain: software / cloudflare / distributed systems
Why this matters
Cloudflare now gives you two very different ways to run SQLite-shaped workloads at the edge:
- D1 = managed serverless SQL database
- SQLite-backed Durable Objects = stateful compute objects with an embedded SQLite database per object
At first glance they can look confusingly similar because both speak SQLite-ish SQL and both live inside the Workers ecosystem. But architecturally they solve different problems.
If you pick the wrong one, you usually get one of these failure modes:
- you put coordination-heavy logic into D1 and fight contention,
- you put global relational queries into Durable Objects and reinvent a database badly,
- you create a giant singleton Durable Object and cap your scale on one hot shard,
- or you choose D1 for ultra-hot per-entity state when a colocated object would be simpler and lower-latency.
The clean mental split is:
- D1 is a managed SQL product.
- Durable Objects + SQLite is a distributed-systems building block.
That sounds abstract, but the design consequences are huge.
1) Fast mental model
D1
Think of D1 as:
- a managed serverless SQLite database,
- queried from Workers or Pages,
- designed for relational application data,
- with Time Travel backups / point-in-time recovery,
- and optional global read replication for read-heavy workloads.
Good default fit:
- normal app data,
- dashboards,
- user/account/content tables,
- read-heavy web apps,
- multi-DB per-tenant or per-user designs.
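To make the D1 side concrete, here is a minimal sketch of the prepare/bind/all flow a Worker uses against a D1 binding. The interfaces mimic the shape of the real binding; the in-memory fake stands in for env.DB so the sketch runs anywhere, and the users table is an illustrative assumption.

```typescript
// Minimal sketch of D1's Worker binding flow (prepare -> bind -> all).
// The interfaces mimic the real binding's shape; the fake database is a
// stand-in for env.DB. Table and column names are illustrative assumptions.

interface D1PreparedStatement {
  bind(...values: unknown[]): D1PreparedStatement;
  all<T = Record<string, unknown>>(): Promise<{ results: T[] }>;
}
interface D1Database {
  prepare(query: string): D1PreparedStatement;
}

// The part you would actually write in a Worker handler:
async function listTeamMembers(db: D1Database, teamId: string) {
  const { results } = await db
    .prepare("SELECT id, email FROM users WHERE team_id = ?1 ORDER BY email")
    .bind(teamId)
    .all<{ id: string; email: string }>();
  return results;
}

// In-memory fake playing the role of env.DB, so the sketch is self-contained.
const rows = [
  { id: "u2", email: "b@example.com", team_id: "t1" },
  { id: "u1", email: "a@example.com", team_id: "t1" },
  { id: "u3", email: "c@example.com", team_id: "t2" },
];
const fakeDb: D1Database = {
  prepare(_query: string) {
    let bound: unknown[] = [];
    const stmt: D1PreparedStatement = {
      bind(...values: unknown[]) {
        bound = values;
        return stmt;
      },
      async all<T>() {
        const results = rows
          .filter((r) => r.team_id === bound[0])
          .sort((a, b) => a.email.localeCompare(b.email))
          .map(({ id, email }) => ({ id, email }));
        return { results: results as unknown as T[] };
      },
    };
    return stmt;
  },
};
```

Note the await: every D1 call crosses the network from the Worker to the database, which is part of why D1 feels database-first rather than compute-first.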
SQLite-backed Durable Objects
Think of SQLite-backed Durable Objects as:
- a single-threaded stateful actor,
- with private strongly consistent storage,
- where SQLite lives inside the object’s execution context,
- and where code runs right next to the data.
Good default fit:
- coordination,
- per-entity state machines,
- chat rooms / game sessions / documents,
- turn/order/booking/inventory serialization,
- per-tenant hot state,
- WebSocket hubs,
- scheduled per-entity work.
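The shape of that colocation can be sketched as a class whose methods call a synchronous SQL API, with no await on storage access. The SqlStorage interface below mimics ctx.storage.sql from the real runtime; the fake implementation (which understands only the statements used here) and the RoomObject name are assumptions for illustration.

```typescript
// Sketch of a SQLite-backed Durable Object: code runs next to its data,
// and the SQL API is synchronous because the database is embedded in the
// object's execution context. SqlStorage mimics ctx.storage.sql; the fake
// below understands only the statements this sketch uses.

interface SqlStorage {
  exec(query: string, ...bindings: unknown[]): { one(): any; toArray(): any[] };
}

class RoomObject {
  constructor(private sql: SqlStorage) {
    // Idempotent schema setup, typically done in the object's constructor.
    this.sql.exec(
      "CREATE TABLE IF NOT EXISTS messages (id INTEGER PRIMARY KEY, body TEXT)"
    );
  }

  // No await: write and read-back happen with zero network hops.
  post(body: string): number {
    this.sql.exec("INSERT INTO messages (body) VALUES (?)", body);
    return this.sql.exec("SELECT COUNT(*) AS n FROM messages").one().n;
  }
}

// Purpose-built in-memory fake so the sketch runs outside the Workers runtime.
function makeFakeSql(): SqlStorage {
  const messages: string[] = [];
  return {
    exec(query: string, ...bindings: unknown[]) {
      if (query.startsWith("INSERT")) messages.push(String(bindings[0]));
      const n = messages.length;
      return {
        one: () => ({ n }),
        toArray: () => messages.map((body, i) => ({ id: i + 1, body })),
      };
    },
  };
}
```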
The key difference:
- D1 wants you to think in databases.
- Durable Objects want you to think in identities and coordination atoms.
2) The decisive architectural difference
The most important question is not “Do I need SQL?” It is:
Do I need a managed relational database, or do I need serialized stateful compute around each entity?
Pick D1 when the center of gravity is the database
Choose D1 when your application naturally wants:
- tables queried across many rows/entities,
- ad-hoc SQL reads,
- a normal relational app model,
- managed backups / recovery,
- read scaling through replicas,
- and less custom coordination logic.
Typical examples:
- SaaS user/account tables,
- product catalogs,
- CMS-like content,
- analytics/control-plane metadata,
- application backoffice data.
Pick Durable Objects when the center of gravity is coordination
Choose SQLite-backed Durable Objects when your real problem is:
- “all operations for this entity must be serialized,”
- “this thing needs one globally unique owner,”
- “I want compute and storage colocated for each shard,”
- or “I need per-entity state + timers + sockets + transactions together.”
Typical examples:
- one object per room,
- one object per match,
- one object per document,
- one object per user session,
- one object per trading strategy / per-symbol coordinator,
- one object per tenant for hot mutable state.
This is why Cloudflare’s own docs frame Durable Objects as a lower-level compute-with-storage building block for distributed systems, while D1 is the managed SQL database.
3) What D1 is especially good at
A) Managed SQL without inventing your own database layer
D1 is the simpler choice when you want standard relational storage and do not want to design your own sharding and routing scheme around identities.
That simplicity matters. A lot of systems do not actually need actor-style coordination; they just need a database.
B) Read-heavy global apps
D1’s read replication is a real advantage when:
- users are globally distributed,
- reads dominate writes,
- and the latency problem is mostly “users are far from the primary.”
Cloudflare’s docs make an important point here: read replication only helps if you use the D1 Sessions API. Without the Sessions API, reads keep going to the primary.
So the operational rule is: if you enable D1 read replication, treat bookmark/session propagation as part of the application contract.
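In code, that contract looks roughly like the sketch below: accept a bookmark from the client, open a session with it, query through the session, and return the new bookmark for the next request. The interfaces mirror the documented withSession/getBookmark shape; the fake database and the orders table are illustrative.

```typescript
// Sketch of the D1 Sessions API flow: start the session from the client's
// bookmark (or "first-unconstrained" when there is none), query through the
// session, and hand the new bookmark back for the next request. Interfaces
// mirror the documented shape; the fake database is a stand-in for env.DB.

interface D1SessionLike {
  prepare(query: string): { all(): Promise<{ results: unknown[] }> };
  getBookmark(): string | null;
}
interface D1WithSessions {
  withSession(constraintOrBookmark: string): D1SessionLike;
}

async function handleRead(db: D1WithSessions, clientBookmark: string | null) {
  // A bookmark guarantees this session sees data at least as fresh as the
  // point the bookmark was taken; "first-unconstrained" lets the first read
  // go to the nearest replica.
  const session = db.withSession(clientBookmark ?? "first-unconstrained");
  const { results } = await session.prepare("SELECT * FROM orders").all();
  // The caller must echo this bookmark on its next request (header, cookie,
  // etc.); that propagation is the application contract.
  return { results, bookmark: session.getBookmark() };
}

// Fake implementation so the sketch runs standalone.
const fakeDb: D1WithSessions = {
  withSession(constraintOrBookmark: string) {
    return {
      prepare: () => ({ all: async () => ({ results: [{ id: 1 }] }) }),
      getBookmark: () => `bookmark-after-${constraintOrBookmark}`,
    };
  },
};
```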
C) Cross-entity querying is natural
D1 is better when you want SQL to span lots of entities naturally:
- joins across tables,
- reporting queries,
- list/search/filter admin views,
- “show me everything for this account/team/project,”
- or any workflow where your unit of interaction is not a single coordination shard.
D) Managed recovery and familiar database ergonomics
D1 gives you:
- Time Travel recovery,
- documented limits,
- a straightforward Worker binding API,
- and a product model that feels like “database first,” not “distributed primitive first.”
That is often exactly what you want.
4) What SQLite-backed Durable Objects are especially good at
A) Zero-network database access inside the object
The big architectural trick of SQLite-backed Durable Objects is that SQLite is embedded in the same execution context as the object logic.
That means:
- no separate DB hop,
- synchronous SQL API,
- extremely tight read/compute loops,
- and fewer async interleaving hazards during object-local logic.
This is not just a performance detail. It changes how much coordination logic feels practical to write.
B) Strict serialization per entity
If all operations for entity X must happen in order, Durable Objects are a better conceptual fit than “every request hits a shared DB and we hope constraints/transactions are enough.”
Examples:
- booking the same seat,
- applying edits to the same doc,
- advancing the same game turn,
- sequencing commands for the same device,
- maintaining one authoritative room/member/session state.
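The seat-booking case shows why. Because one object owns one flight (a hypothetical entity here) and executes single-threaded, a plain check-then-write cannot interleave with a competing booking. A Map stands in for the object's private SQL state in this sketch.

```typescript
// Sketch of per-entity serialization: one Durable Object owns one flight,
// so nothing else runs between the availability check and the write. No
// SELECT ... FOR UPDATE, no unique-constraint retry loop. A Map stands in
// for the object's private SQL state; FlightObject is an illustrative name.

class FlightObject {
  private taken = new Map<string, string>(); // seat -> holder

  bookSeat(seat: string, holder: string): "booked" | "conflict" {
    if (this.taken.has(seat)) return "conflict"; // check...
    this.taken.set(seat, holder);                // ...then write, atomically
    return "booked";
  }
}
```

With a shared database, the same guarantee requires transactions or unique constraints plus conflict handling on every caller.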
C) Stateful compute + storage + alarms in one place
This combination is easy to underrate. A SQLite-backed Durable Object can combine:
- in-memory hot state,
- durable SQL state,
- key-value storage,
- alarms/timers,
- and live connections.
That makes it excellent for entity-local control loops.
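A sketch of one such control loop, assuming an idle-timeout use case: the object stores a deadline, arms an alarm for it, and the runtime calls alarm() when the time arrives. The scheduler interface stands in for ctx.storage.setAlarm; SessionObject and the TTL logic are illustrative.

```typescript
// Sketch of an entity-local control loop: durable deadline state plus an
// alarm, mirroring the setAlarm()/alarm() contract (each object has at most
// one pending alarm; setting a new one replaces it). The scheduler stands in
// for ctx.storage; SessionObject and the idle-timeout logic are illustrative.

interface AlarmScheduler {
  setAlarm(timestampMs: number): void;
}

class SessionObject {
  expiresAt = 0;
  expired = false;

  constructor(private scheduler: AlarmScheduler) {}

  // Each activity pushes the deadline out and re-arms the single alarm.
  touch(nowMs: number, ttlMs: number) {
    this.expiresAt = nowMs + ttlMs;
    this.scheduler.setAlarm(this.expiresAt);
  }

  // The runtime invokes alarm() when the scheduled time arrives.
  alarm(nowMs: number) {
    if (nowMs >= this.expiresAt) this.expired = true;
  }
}
```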
D) Per-entity sharding as a first-class design
Durable Objects scale out by having many objects, not one giant one. The docs explicitly warn against creating a single global object.
This is the right pattern:
- one object per logical unit of coordination.
This is the wrong pattern:
- one object for the whole product.
If your design can be cleanly decomposed into many independent identities, Durable Objects are powerful. If not, they can become an awkward box.
5) The easiest decision rule
Use this quick chooser:
Start with D1 if:
- your app mostly wants a normal SQL database,
- you need cross-entity queries often,
- reads dominate writes,
- you want managed DB ergonomics,
- you do not need per-entity serialized compute,
- and your consistency needs fit D1’s model.
Start with SQLite-backed Durable Objects if:
- the core problem is coordination,
- your natural shard key is obvious,
- each entity wants its own tiny private database,
- you need state + logic + timers + sockets together,
- or hot paths benefit from data/logic colocation.
Use both when:
- Durable Objects own hot, serialized, entity-local state,
- while D1 owns broader relational/queryable product data.
In practice, “use both” is often the winning architecture.
6) The deepest mistake to avoid: confusing per-entity SQL with shared relational SQL
Because Durable Objects now have SQLite, it is tempting to think:
“Great, I can just use Durable Objects as my database.”
Sometimes yes. Often no.
The question is whether your SQL is:
Entity-local SQL
Good fit for Durable Objects.
Examples:
- one room’s messages,
- one match’s event log,
- one user’s local working set,
- one tenant’s hot coordination state,
- one document’s edit history.
Global / cross-entity SQL
Usually better fit for D1.
Examples:
- global reporting,
- admin search across all customers,
- joins across many unrelated entities,
- broad filtering/ranking/listing,
- “top 100 across the whole product.”
Durable Objects make local state elegant. They do not magically make global relational querying free.
If you routinely need cross-object joins, you probably want D1 for that layer.
7) Performance reality, without fairy dust
D1 performance shape
Cloudflare documents that each individual D1 database is inherently single-threaded and processes queries one at a time. Throughput depends heavily on query duration. They even give rough guidance:
- ~1 ms average query → around 1,000 queries/sec
- ~100 ms average query → around 10 queries/sec
That is a useful reminder: D1 is not a magical infinitely parallel database. Each DB has a serial bottleneck, though read replication changes the read path shape.
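The numbers above are just the serial-bottleneck reciprocal, which you can sanity-check against your own query profile:

```typescript
// The arithmetic behind the guidance above: if one D1 database executes
// queries one at a time, sustainable throughput is roughly the reciprocal
// of average query duration.

function approxQueriesPerSecond(avgQueryMs: number): number {
  return 1000 / avgQueryMs;
}
```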
Durable Object performance shape
Cloudflare’s Durable Object guidance says a single object can handle roughly 500–1,000 requests/sec for simple operations, depending on work per request.
That means the design question is not “Can one object scale forever?” It is:
- can I shard naturally into many objects?
- can hot keys be spread?
- am I accidentally making one object the universal bottleneck?
The real trade-off
- D1 centralizes a relational DB interface.
- Durable Objects decentralize into many identity shards.
So performance success is usually about choosing the right unit of scale:
- database-level read scaling / managed SQL → D1
- entity-level coordination sharding → Durable Objects
8) Consistency model differences that matter in app design
Durable Objects
Durable Objects are the cleaner choice when you need strict serialized handling per entity. That is their superpower.
You can reason about one object as one authoritative owner of one stream of state transitions. That drastically simplifies a whole class of race conditions.
D1
D1 is relational and single-primary for writes, but when using read replication there is a very important nuance:
- replicas are asynchronously updated,
- therefore replica lag exists,
- and Cloudflare’s answer is the Sessions API which gives sequential consistency for a logical session via bookmarks.
This is good, but it is a different mental model from “one authoritative object serializes everything for entity X.”
Practical translation:
- if your problem is “avoid races for one entity,” prefer Durable Objects.
- if your problem is “serve global reads fast while keeping a managed DB model,” D1 is usually better.
9) Operational details people miss
A) New Durable Object classes should use SQLite storage
Cloudflare now explicitly recommends SQLite-backed Durable Objects for new namespaces. That is the modern default.
B) Durable Object migrations are about class/runtime mapping, not SQL schema migrations
This trips people up. Durable Object migrations in Wrangler are required when you:
- create a new class,
- rename a class,
- delete a class,
- or transfer a class.
That is different from your own internal SQL schema evolution inside the object.
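For example, declaring a new SQLite-backed class looks like this in wrangler.toml (the ChatRoom class and binding names are illustrative). Note that this maps a class to the runtime and says nothing about the tables inside each object:

```toml
# Declaring a new SQLite-backed Durable Object class (illustrative names).
[[durable_objects.bindings]]
name = "CHAT_ROOM"
class_name = "ChatRoom"

[[migrations]]
tag = "v1"
new_sqlite_classes = ["ChatRoom"]
```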
C) Deleting Durable Object storage is not just “drop a table”
Cloudflare’s docs are explicit: if you want a Durable Object to fully cease to exist and stop being billed for storage, you must use:
- deleteAlarm() if alarms were used,
- then deleteAll() for storage.
Just deleting rows or dropping tables is not enough because metadata can remain.
That is a subtle but very real cost/cleanup footgun.
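A teardown helper makes the rule concrete: alarm first, then storage. The StorageLike interface mimics the relevant slice of ctx.storage (deleteAll() does not clear a pending alarm, hence the ordering); the fake records call order so the sketch runs standalone.

```typescript
// Sketch of full Durable Object teardown: remove any pending alarm, then
// delete all storage so no billable metadata remains. StorageLike mimics
// the relevant slice of ctx.storage; deleteAll() does not clear a pending
// alarm, which is why deleteAlarm() comes first.

interface StorageLike {
  deleteAlarm(): Promise<void>;
  deleteAll(): Promise<void>;
}

async function destroyObjectState(storage: StorageLike) {
  await storage.deleteAlarm(); // needed if the object ever set an alarm
  await storage.deleteAll();   // removes SQL tables, KV data, and metadata
}

// Fake that records call order, so the sketch runs standalone.
const calls: string[] = [];
const fakeStorage: StorageLike = {
  deleteAlarm: async () => { calls.push("deleteAlarm"); },
  deleteAll: async () => { calls.push("deleteAll"); },
};
```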
D) D1 read replication is not automatic magic
If you do not adopt the Sessions API flow, you do not really get the intended read replication behavior.
This is the kind of thing that leads to false expectations like:
“we enabled replication but latency barely changed.”
E) D1 database size limits shape architecture
Cloudflare documents D1 as a scale-out-across-many-smaller-databases product, with per-database limits such as:
- 10 GB max per DB on Workers Paid
- 500 MB max per DB on Free
So if your mental model is “one endlessly growing monolithic database,” D1 is the wrong fit. It wants deliberate partitioning as data grows.
10) Decision matrix
| Situation | Better default |
|---|---|
| Standard app data with normal SQL tables | D1 |
| One authoritative coordinator per entity | Durable Objects |
| Cross-entity admin queries | D1 |
| WebSocket room/session state | Durable Objects |
| Read-heavy global users close to replicas | D1 |
| Per-document / per-room / per-user hot mutable state | Durable Objects |
| Need alarms/timers tightly coupled to entity state | Durable Objects |
| Need a managed DB more than a distributed primitive | D1 |
| Natural shard key is obvious and important | Durable Objects |
| Broad filtering / ranking / listing across all data | D1 |
11) Patterns that work well
Pattern A — Durable Objects for hot coordination, D1 for product views
Use Durable Objects for:
- room state,
- presence,
- in-flight game/session state,
- conflict-sensitive edits,
- ordered command application.
Use D1 for:
- account metadata,
- searchable/global lists,
- reporting,
- admin UI,
- durable product tables that need broad query access.
This is often the best “real app” split.
Pattern B — One D1 DB per tenant, plus Durable Objects for the hottest paths
If tenants are natural isolation boundaries:
- D1 per tenant for relational product data,
- Durable Objects for the subset of flows requiring strict per-entity serialization.
This avoids forcing every request through actor-style routing while still giving you coordination where it matters.
Pattern C — Durable Objects only, but only when the world is truly entity-centric
This can work for systems like:
- multiplayer sessions,
- collaborative artifacts,
- device coordinators,
- queue/lease ownership,
- strongly ordered per-key workflows.
But be honest: if you later need broad reporting/search across all objects, you will likely add D1 or another index/query layer anyway.
12) Anti-patterns
Anti-pattern 1: single global Durable Object
This is the classic self-own. You recreated a giant bottleneck and threw away the scale-out model.
Anti-pattern 2: using Durable Objects when you mostly want ad-hoc SQL
If your main need is “let me query across everything cleanly,” D1 is the saner tool.
Anti-pattern 3: using D1 to model highly contentious per-entity command serialization
You can force it, but you are making the coordination problem harder than it needs to be.
Anti-pattern 4: enabling D1 read replication but not propagating session/bookmark context
Then you do not actually get the consistency/latency model you think you bought.
Anti-pattern 5: treating Durable Object lifecycle cleanup casually
If you create lots of ephemeral objects, understand storage cleanup rules or you will accumulate billable leftovers.
13) If I were choosing for common workloads
Realtime multiplayer / collaboration
Prefer Durable Objects first. The coordination model matches the problem. Use D1 later for analytics, search, account metadata, or cross-room views.
SaaS app with dashboards, users, teams, content
Prefer D1 first. Use Durable Objects only for the small set of contention-heavy realtime flows.
Per-user notebook / agent / workspace state
Usually:
- D1 if the product is mostly CRUD/search/list/report,
- Durable Objects if each workspace has live coordination, timers, or strong ordering semantics.
Event-sourced or command-driven workflows
If commands must be serialized per entity, Durable Objects are very attractive. D1 can still be the broader read/query layer.
14) A brutally practical selection heuristic
Ask these in order:
What is my atom of coordination?
- If obvious and important → Durable Objects gets stronger.
Do I need frequent cross-entity SQL?
- If yes → D1 gets stronger.
Is the hot path contention-heavy on a single entity/key?
- If yes → Durable Objects gets much stronger.
Is the workload globally read-heavy with ordinary relational access patterns?
- If yes → D1 gets much stronger.
Do I need timers/alarms/live connections coupled to state?
- If yes → Durable Objects.
Am I secretly designing a global singleton?
- If yes → stop and redesign.
If you cannot clearly answer what your coordination atom is, that is usually a sign you should start with D1, not Durable Objects.
15) Evidence anchors / further reading
Official docs and Cloudflare materials:
- Durable Objects overview: https://developers.cloudflare.com/durable-objects/
- Rules of Durable Objects: https://developers.cloudflare.com/durable-objects/best-practices/rules-of-durable-objects/
- Access Durable Objects storage: https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/
- SQLite-backed Durable Object Storage API: https://developers.cloudflare.com/durable-objects/api/sqlite-storage-api/
- Durable Objects migrations: https://developers.cloudflare.com/durable-objects/reference/durable-objects-migrations/
- Cloudflare Workers storage options: https://developers.cloudflare.com/workers/platform/storage-options/
- Cloudflare D1 overview: https://developers.cloudflare.com/d1/
- D1 global read replication: https://developers.cloudflare.com/d1/best-practices/read-replication/
- D1 limits: https://developers.cloudflare.com/d1/platform/limits/
- Cloudflare blog — Zero-latency SQLite storage in every Durable Object: https://blog.cloudflare.com/sqlite-in-durable-objects/
Final take
If you reduce the choice to “which Cloudflare SQL thing should I use?”, you will make bad architecture decisions.
The real choice is:
- managed relational database → D1
- serialized stateful compute per identity → SQLite-backed Durable Objects
Use D1 when your app wants a database. Use Durable Objects when your app wants an owner for each piece of state. Use both when your hot path and your query path are different systems pretending to be one product.
That last case is common. And usually correct.