Order-Book Convexity + Liquidity Elasticity Slippage Playbook
Date: 2026-02-28
Category: research (execution / slippage modeling)
Why this exists
Most slippage models treat visible depth as a single scalar (top-of-book depth or spread).
In production, that misses two structural realities:
- Depth shape matters: same total depth can have very different near-touch curvature.
- Refill behavior matters: some books refill quickly after a sweep, others stay hollow.
This playbook models slippage with two coupled state variables:
- Order-book convexity (shape of available depth by price distance)
- Liquidity elasticity (speed/strength of post-trade refill)
The goal is not prettier TCA charts. The goal is better online action selection (post, improve, take, split, or pause).
1) Core modeling idea
Represent expected child-order slippage as:
[ S_t(q) = S^{\text{instant}}_t(q; C_t) + S^{\text{persistence}}_t(q; E_t, H_t) ]
- (q): candidate child size
- (C_t): convexity state (book shape)
- (E_t): elasticity state (refill responsiveness)
- (H_t): short-horizon pressure state (flow imbalance / cancel shock)
Interpretation:
- Instant term: cost to walk the current book.
- Persistence term: residual cost if impact does not mean-revert quickly.
2) Convexity state ((C_t))
2.1 Depth-shape fit
For each side of the book, fit cumulative depth curve:
[ D_t(\Delta p) = a_t (\Delta p)^{\gamma_t} ]
- (\Delta p): distance from touch (ticks or bps)
- (D_t): cumulative executable volume up to (\Delta p)
- (\gamma_t): convexity exponent
Practical reading:
- (\gamma_t < 1): concave (depth concentrated near the touch) → lower marginal impact
- (\gamma_t > 1): convex (thin near the touch, with depth sitting deeper away) → higher marginal impact
2.2 Size-to-price mapping
Given target size (q), implied walk distance from inverse curve:
[ \Delta p_t(q) \approx \left(\frac{q}{a_t}\right)^{1/\gamma_t} ]
Instant expected impact scales with this walk distance, adjusted for queue priority and spread regime.
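Under the power-law assumption above, the fit is linear in log-log space. A minimal sketch (function names are illustrative, not from any specific library):

```python
import numpy as np

def fit_convexity(dp_bps, cum_depth):
    """Fit the cumulative depth curve D(dp) = a * dp**gamma.

    Linear least squares in log-log space: log D = log a + gamma * log dp.
    dp_bps: price distances from touch (> 0); cum_depth: cumulative volume.
    Returns (a_t, gamma_t).
    """
    x = np.log(np.asarray(dp_bps, dtype=float))
    y = np.log(np.asarray(cum_depth, dtype=float))
    gamma, log_a = np.polyfit(x, y, 1)  # slope = gamma_t, intercept = log a_t
    return np.exp(log_a), gamma

def implied_walk_distance(q, a, gamma):
    """Invert the depth curve: dp(q) = (q / a) ** (1 / gamma)."""
    return (q / a) ** (1.0 / gamma)
```

Note that `np.polyfit` returns the highest-degree coefficient first, so the slope (gamma) comes before the intercept.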
3) Elasticity state ((E_t))
3.1 Refill ratio
After consuming book volume, track refill over horizon (\tau):
[ R_{t,\tau} = \frac{\text{added}_{[t,\,t+\tau]}}{\text{consumed}_{t} + \epsilon} ]
Define elasticity proxy (bounded):
[ E_t = \text{clip}(\log(1+R_{t,\tau}), 0, E_{\max}) ]
Higher (E_t) means faster replenishment and lower persistent impact.
3.2 Cancel-adjusted elasticity
Raw refill can be misleading when cancel bursts dominate. Use net refill:
[ R^{net}_{t,\tau} = \frac{\text{adds} - \text{cancels}}{\text{consumed} + \epsilon} ]
In fragile regimes, always prefer net elasticity over gross.
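The two definitions above combine in a few lines. A sketch with illustrative defaults (`e_max` and `eps` are assumptions, not calibrated values):

```python
import math

def net_elasticity(adds, cancels, consumed, e_max=3.0, eps=1e-9):
    """Cancel-adjusted elasticity: E = clip(log(1 + R_net), 0, e_max),
    with R_net = (adds - cancels) / (consumed + eps).

    A cancel storm can push R_net below zero; the lower clip at 0 means
    any non-positive net refill reads as fully inelastic.
    """
    r_net = (adds - cancels) / (consumed + eps)
    return min(max(math.log1p(max(r_net, 0.0)), 0.0), e_max)
```

The `log1p` keeps the proxy tame when refill briefly spikes, and the upper clip at `e_max` stops a single print from dominating the state.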
4) Persistence / markout term
Model short-horizon markout penalty with (E_t) and pressure (H_t):
[ S^{\text{persistence}}_t(q) = \beta_0 + \beta_1 q + \beta_2 (1-E_t) + \beta_3 H_t + \beta_4 q(1-E_t) ]
Where (H_t) may include:
- microprice drift
- aggressive trade imbalance
- cancel-to-trade ratio burst
- spread-widening transition flag
This captures the operational truth:
Same sweep size hurts more when refill is weak and tape pressure is one-sided.
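The linear form above translates directly into code; the beta coefficients below are placeholders chosen to show the interaction term, not calibrated values:

```python
def persistence_cost(q, e_t, h_t, betas=(0.1, 0.02, 0.8, 0.5, 0.03)):
    """S_persistence = b0 + b1*q + b2*(1-E) + b3*H + b4*q*(1-E).

    The q*(1-E) interaction is the point: size is punished harder
    exactly when elasticity is weak. Betas are illustrative only.
    """
    b0, b1, b2, b3, b4 = betas
    return b0 + b1 * q + b2 * (1.0 - e_t) + b3 * h_t + b4 * q * (1.0 - e_t)
```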
5) Data contract (minimum viable)
Unit: child-order decision + realized outcome.
Required fields:
- decision identity: ts, symbol, side, venue, session_bucket
- prices and fills: arrival_mid, decision_price, fill_px, fill_qty, slippage_bps
- L2/L3 snapshot features near decision time: spread_bps, tick_size, top_depth, depth_l1..lN
- order-flow: aggr_buy_vol, aggr_sell_vol, cancel_vol, add_vol
- markouts: markout_1s, markout_5s, markout_30s
- execution controls: participation, urgency, time_left_sec
Guardrails:
- point-in-time snapshot only (no leakage)
- keep no-fill / partial-fill records (avoid fill-only bias)
- deterministic IDs for replay
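A minimal point-in-time record satisfying this contract could look like the sketch below. Field names follow the list above; the `record_id` scheme is one possible deterministic choice, not a prescribed one:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ChildDecisionRecord:
    """One row per child-order decision, snapshot taken at decision time only.

    fill_px / slippage_bps stay None for no-fill records so the
    fill-only bias guardrail can be enforced downstream.
    """
    ts: int                 # decision timestamp (e.g. ns since epoch)
    symbol: str
    side: str               # "B" or "S"
    venue: str
    session_bucket: str
    arrival_mid: float
    decision_price: float
    fill_px: Optional[float]
    fill_qty: float         # 0.0 for no-fill records (kept, not dropped)
    slippage_bps: Optional[float]
    participation: float
    urgency: float
    time_left_sec: float

    @property
    def record_id(self) -> str:
        # deterministic ID for replay: pure function of decision keys
        return f"{self.symbol}:{self.venue}:{self.ts}"
```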
6) Regime map for control
Build a simple 2D regime map:
- X-axis: convexity (\gamma_t)
- Y-axis: net elasticity (E_t^{net})
Four practical regimes:
Flat+Elastic (low (\gamma), high (E))
- book tolerant, refill strong
- allow larger passive clips, occasional controlled take
Flat+Inelastic (low (\gamma), low (E))
- initial fill easy, residual impact sticky
- reduce repeat aggression; stagger slices
Convex+Elastic (high (\gamma), high (E))
- near-touch thin but recovery decent
- smaller clips, higher cadence acceptable
Convex+Inelastic (high (\gamma), low (E))
- worst tail-risk state
- hard POV cap, wider cooldown, SAFE mode candidate
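The 2D map reduces to two thresholded comparisons. The thresholds here (gamma = 1, E = 0.5) are illustrative starting points, not calibrated cutoffs:

```python
def classify_regime(gamma, e_net, gamma_thresh=1.0, e_thresh=0.5):
    """Map (convexity, net elasticity) to one of the four control regimes.

    Thresholds are illustrative; calibrate per symbol / tick bucket.
    """
    convex = gamma > gamma_thresh
    elastic = e_net > e_thresh
    if not convex and elastic:
        return "FLAT_ELASTIC"       # larger passive clips, occasional take
    if not convex and not elastic:
        return "FLAT_INELASTIC"     # stagger slices, reduce repeat aggression
    if convex and elastic:
        return "CONVEX_ELASTIC"     # smaller clips, higher cadence
    return "CONVEX_INELASTIC"       # hard POV cap, cooldown, SAFE candidate
```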
7) Controller coupling (action layer)
For each action candidate (a):
[ J_t(a) = \mathbb{E}[S_t \mid a] + \lambda_{\text{opp}}\,\text{OppCost}(a) + \lambda_{\text{tail}}\,\text{CVaR}_{95}(S_t \mid a) ]
Pick action with minimal (J_t(a)) under constraints:
- participation cap
- remaining time budget
- venue/session restrictions
- kill-switch state
Action set example:
PASSIVE_JOIN, PASSIVE_IMPROVE, MID_PEG, LIMITED_TAKE, PAUSE_REPRICE
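Given per-action cost estimates, the selection step is a constrained argmin. A sketch where each candidate carries its own estimates and feasibility flag (the dict layout and lambda defaults are assumptions):

```python
def select_action(candidates, lam_opp=0.5, lam_tail=0.25):
    """Pick argmin J(a) = E[S|a] + lam_opp*OppCost(a) + lam_tail*CVaR95(S|a)
    over feasible candidates.

    Each candidate is a dict with keys:
    'name', 'exp_slip', 'opp_cost', 'cvar95', 'feasible'.
    """
    feasible = [c for c in candidates if c["feasible"]]
    if not feasible:
        # constraints (caps, kill-switch) ruled everything out: stand down
        return "PAUSE_REPRICE"

    def j(c):
        return c["exp_slip"] + lam_opp * c["opp_cost"] + lam_tail * c["cvar95"]

    return min(feasible, key=j)["name"]
```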
8) Calibration protocol
Offline
- rolling walk-forward by day/session
- calibration of q50/q90/q95 slippage
- CVaR95 delta vs baseline model
- completion-rate impact (do not optimize cost by underfilling)
Online
- monitor convexity-fit residuals (shape drift)
- monitor elasticity forecast error
- trigger AMBER state when q95 miss-rate exceeds threshold
Suggested defaults:
- convexity fit window: last 30-120 seconds of snapshots
- elasticity horizon (\tau): 3s / 10s dual horizon
- robust loss: Huber or quantile for tail stability
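For the quantile option, the pinball loss is the standard target; a minimal version for q95 calibration:

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau=0.95):
    """Quantile (pinball) loss averaged over samples.

    With tau = 0.95, under-predicting the tail costs tau per unit of error
    while over-predicting costs only (1 - tau), which is what keeps the
    q95 slippage forecast honest.
    """
    e = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.mean(np.maximum(tau * e, (tau - 1.0) * e)))
```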
9) Failure modes
Using gross refill only
- hides cancel storms; overestimates resilience.
Single-horizon elasticity
- 1s refill may look healthy while 10s persistence remains toxic.
Ignoring tick-size regime
- convexity interpretation differs across tick/price buckets.
Fill-only training set
- systematically underprices real implementation shortfall.
No action integration
- accurate forecasts without policy coupling yield no PnL benefit.
10) Minimal implementation sketch
for each decision time t:
    snapshot = get_lob_snapshot(t)
    flow     = get_recent_flow_features(t)
    C_t = fit_convexity(snapshot)              # gamma_t, a_t
    E_t = estimate_net_elasticity(t, tau)      # refill-adjusted
    H_t = pressure_state(flow, microprice, spreads)

    for action a in ACTIONS:
        q_a        = proposed_child_size(a)
        S_inst[a]  = instant_cost_from_inverse_depth(q_a, C_t)
        S_pers[a]  = persistence_model(q_a, E_t, H_t)
        S_total[a] = S_inst[a] + S_pers[a]
        J[a]       = E[S_total[a]] + opp_penalty(a) + tail_penalty(a)

    action = argmin_feasible(J)
    execute(action)
    log(t, C_t, E_t, H_t, action, realized_slippage, markouts)
11) What “good” looks like
- lower q95/q99 slippage overshoot in fragile books
- fewer false-aggressive decisions during refill collapses
- stable completion rates under fixed time budget
- interpretable regime transitions for human review
If those hold, convexity+elasticity is doing what static depth metrics cannot: it explains not only where liquidity is, but how it behaves after you touch it.