Queue-Survival + State-Dependent Impact-Kernel Slippage Playbook

2026-03-01 · finance

Category: research (execution / slippage modeling)

Why this exists

Many production slippage stacks split into two disconnected worlds:

  1. Passive world: queue position, fill probability, cancel risk
  2. Aggressive world: impact curve, urgency, participation caps

That split causes fragile behavior exactly when it matters (open, close, news bursts, thin books): the passive model says “wait,” the impact model says “take,” and the router oscillates between them.

This playbook proposes a single decision layer that combines:

  1. A queue-survival model for the passive side (fill probability, cancel risk)
  2. A state-dependent impact kernel for the aggressive side
  3. A single risk-adjusted objective that scores both under the same hard constraints


1) System blueprint

At each decision tick (e.g., 200 ms to 1 s), evaluate candidate actions (wait, place or keep a passive order, or take liquidity aggressively).

For each action, estimate:

  1. Expected slippage (mean)
  2. Tail slippage (q90/q95)
  3. Completion risk (deadline miss / residual inventory)

Then choose the minimum risk-adjusted cost under hard constraints.
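The tick-level loop above can be sketched as follows; the action names, the λ weights, and the hard SLA cap are illustrative assumptions, not values from this playbook.

```python
from dataclasses import dataclass

@dataclass
class ActionScore:
    name: str         # e.g. "rest_passive", "cross_spread" (hypothetical labels)
    mean_cost: float  # expected slippage (bps)
    q95_cost: float   # tail slippage (bps)
    sla_risk: float   # deadline-miss risk in [0, 1]

def choose_action(scores, lam_tail=0.5, lam_sla=2.0, hard_sla_cap=0.8):
    """Pick the feasible action with minimum risk-adjusted cost.

    Raises ValueError if no action satisfies the hard SLA constraint.
    """
    feasible = [s for s in scores if s.sla_risk <= hard_sla_cap]
    if not feasible:
        raise ValueError("no feasible action under hard constraints")
    return min(feasible,
               key=lambda s: s.mean_cost + lam_tail * s.q95_cost + lam_sla * s.sla_risk)

candidates = [
    ActionScore("rest_passive", mean_cost=0.5, q95_cost=4.0, sla_risk=0.30),
    ActionScore("cross_spread", mean_cost=2.0, q95_cost=3.0, sla_risk=0.02),
]
best = choose_action(candidates)
```

Raising the SLA weight flips the decision toward the aggressive action, which is exactly the continuous tradeoff section 4 describes.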


2) Passive side: queue survival model

Treat a placed passive order as a survival process with horizon (H):

[ P(\text{fill by }H)=1-S(H), \quad S(H)=\exp\left(-\int_0^H h(u\mid x)du\right) ]

Passive expected cost decomposition

[ C_{passive}=P_{fill}\cdot C_{markout}+\left(1-P_{fill}\right)\cdot C_{chase} ]

This is key: passive is not “free.” Non-fill has a real optionality cost: the residual must later be chased at (C_{chase}).
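A minimal sketch of the survival integral and the passive cost decomposition above, assuming a user-supplied hazard function and costs in bps (all numbers illustrative):

```python
import math

def survival(hazard, H, dt=0.01):
    """S(H) = exp(-integral_0^H h(u) du), via left-Riemann integration."""
    steps = round(H / dt)
    integral = sum(hazard(i * dt) * dt for i in range(steps))
    return math.exp(-integral)

def passive_expected_cost(hazard, H, c_markout, c_chase):
    """C_passive = P_fill * C_markout + (1 - P_fill) * C_chase."""
    p_fill = 1.0 - survival(hazard, H)
    return p_fill * c_markout + (1.0 - p_fill) * c_chase

# Constant hazard of 0.5 fills/sec over a 2-second horizon:
p_fill = 1.0 - survival(lambda u: 0.5, 2.0)   # 1 - e^{-1}, about 0.632
cost = passive_expected_cost(lambda u: 0.5, 2.0, c_markout=0.3, c_chase=3.0)
```

With a cheap markout but an expensive chase, the blended cost sits well above the naive “passive is free” estimate, which is the point of the decomposition.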


3) Aggressive side: state-dependent impact kernel

For aggressive child order flow (q_t), model transient impact with a regime-conditioned kernel, where (s_t) is the prevailing market regime:

[ I_t = \sum_{\tau \le t} G_{s_t}(t-\tau)\,q_\tau ]

Practical parameterization

Use a 2-component kernel for stability:

[ G_s(\Delta)=a_s e^{-\Delta/\tau_{fast,s}} + b_s e^{-\Delta/\tau_{slow,s}} ]

Estimate per symbol bucket, then shrink toward sector/global priors for sparse names.
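The two-component kernel and the impact sum can be sketched directly; the parameter values below are placeholders, not fitted estimates:

```python
import math

def kernel(delta, a, tau_fast, b, tau_slow):
    """Two-component transient impact kernel G_s(delta)."""
    return a * math.exp(-delta / tau_fast) + b * math.exp(-delta / tau_slow)

def transient_impact(t, child_orders, params):
    """I_t = sum over tau <= t of G(t - tau) * q_tau, for one regime's params.

    child_orders: list of (tau, q_tau) pairs, time in seconds, signed size.
    """
    return sum(kernel(t - tau, **params) * q
               for tau, q in child_orders if tau <= t)

# Placeholder regime parameters: fast decay over seconds, slow over a minute.
params = {"a": 0.8, "tau_fast": 5.0, "b": 0.2, "tau_slow": 60.0}
orders = [(0.0, 100), (10.0, 100)]
impact_now = transient_impact(20.0, orders, params)
impact_later = transient_impact(120.0, orders, params)
```

The fast component handles the immediate push; the slow component carries the residual that matters for scheduling the next child order.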


4) Joint action objective (single score)

For candidate action (a):

[ J(a)=\underbrace{\mathbb{E}[C\mid a]}_{\text{mean}} + \lambda_{tail}\,\underbrace{Q_{0.95}(C\mid a)}_{\text{tail}} + \lambda_{sla}\,\underbrace{R_{deadline}(a)}_{\text{completion risk}} ]

Subject to hard constraints such as participation caps and the completion deadline.

This makes passive-vs-aggressive a continuous tradeoff, not a hard rule switch.
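A sketch of the score (J(a)) computed from Monte Carlo cost samples, using a nearest-rank empirical quantile; the λ values are illustrative:

```python
def empirical_quantile(samples, q):
    """Nearest-rank empirical quantile of a list of samples."""
    s = sorted(samples)
    idx = min(len(s) - 1, int(q * len(s)))
    return s[idx]

def action_score(cost_samples, deadline_miss_risk, lam_tail=0.5, lam_sla=2.0):
    """J(a) = E[C|a] + lam_tail * Q_0.95(C|a) + lam_sla * R_deadline(a)."""
    mean_c = sum(cost_samples) / len(cost_samples)
    q95 = empirical_quantile(cost_samples, 0.95)
    return mean_c + lam_tail * q95 + lam_sla * deadline_miss_risk

# 95 benign outcomes at 1 bps, 5 bad ones at 10 bps, 10% deadline-miss risk:
j = action_score([1.0] * 95 + [10.0] * 5, deadline_miss_risk=0.1)
```

Because the tail term enters the same scalar as the mean, an action with a slightly worse average but a much thinner tail can win, with no rule switch involved.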


5) Training pipeline

Step A: Label construction

Per child decision event, store: queue position percentile at placement, regime state (s_t), fill/cancel outcome with timestamps, and post-event markouts at several horizons.

Step B: Fit queue survival

Fit the hazard (h(u\mid x)) on resting-order episodes, treating cancels and expiries as right-censored observations.

Step C: Fit impact kernels by regime

Fit ((a_s, b_s, \tau_{fast,s}, \tau_{slow,s})) per regime and symbol bucket, then shrink sparse names toward sector/global priors.

Step D: Tail model

Fit a conditional quantile model for q90/q95 slippage given action, state, and regime.
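As one deliberately simple version of the survival fit, assuming a constant hazard, the censored maximum-likelihood estimate has a closed form: fills divided by total time at risk. The data layout here is an assumption for illustration.

```python
def fit_constant_hazard(episodes):
    """Censored MLE for a constant fill hazard.

    episodes: list of (duration_sec, filled) where filled=False means the
    order was cancelled or expired (right-censored observation).
    """
    n_fills = sum(1 for _, filled in episodes if filled)
    exposure = sum(d for d, _ in episodes)
    return n_fills / exposure

h = fit_constant_hazard([(2.0, True), (1.0, False), (3.0, True), (4.0, False)])
# 2 fills over 10 s of total exposure gives h = 0.2 fills/sec
```

A production fit would condition the hazard on state (x) (queue percentile, spread, regime), but the censoring treatment is the same.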


6) Online adaptation

Use lightweight daily + intraday updates: refresh survival and kernel baselines daily, and apply intraday multiplicative adjustments from realized fill rates and impact forecast errors.

Trigger AMBER mode if any of these drift too far from baseline: realized fill rate vs predicted, impact forecast error, or q95 slippage exceedance rate.
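One way to implement the drift gate, assuming relative-deviation thresholds against a stored baseline (the metric names and tolerance are illustrative):

```python
def amber_trigger(baseline, live, rel_tol=0.25):
    """Return True if any monitored metric drifts more than rel_tol
    (relative deviation) from its baseline value."""
    for key, base in baseline.items():
        if base == 0:
            continue  # skip degenerate baselines rather than divide by zero
        if abs(live[key] - base) / abs(base) > rel_tol:
            return True
    return False

baseline = {"fill_rate": 0.60, "impact_error_bps": 1.0}
ok = amber_trigger(baseline, {"fill_rate": 0.58, "impact_error_bps": 1.1})
drifted = amber_trigger(baseline, {"fill_rate": 0.40, "impact_error_bps": 1.1})
```

Keeping the gate this simple is intentional: the trigger itself is auditable from logged metrics, with no model inference in the loop.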


7) Controller state ladder

Transition gates should be deterministic and auditable.
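A deterministic gate might look like the following sketch; note that only AMBER is named in this playbook, so the GREEN/RED rungs and the thresholds are assumptions:

```python
# Assumed ladder: GREEN (normal) -> AMBER (degraded) -> RED (restricted).
LADDER = ["GREEN", "AMBER", "RED"]

def next_state(state, drift_score, hard_breach):
    """Deterministic one-step transition on the ladder.

    drift_score: scalar drift metric; hard_breach: immediate-escalation flag.
    Fixed thresholds make every transition reproducible from logged metrics.
    """
    if hard_breach:
        return "RED"                        # escalate immediately
    idx = LADDER.index(state)
    if drift_score > 0.25:
        return LADDER[min(idx + 1, 2)]      # step down one rung
    if drift_score < 0.10:
        return LADDER[max(idx - 1, 0)]      # recover one rung
    return state                            # hold in the hysteresis band

s = next_state("GREEN", drift_score=0.30, hard_breach=False)
```

The hysteresis band (between 0.10 and 0.25 here) prevents the controller from flapping between rungs on noisy drift estimates.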


8) Validation protocol (what to prove before promotion)

Offline (walk-forward)

Walk-forward evaluation of predicted vs realized slippage (mean and q95), survival-model calibration, and impact-forecast error by regime.

Shadow live

Run the scorer in shadow beside the incumbent router; log decision tuples and compare chosen actions and predicted costs without affecting flow.

Canary

Promote to one symbol cluster and one venue with tight rollback criteria, then expand.


9) Minimal implementation spec (v1)

  1. Add event schema with queue percentile + markout labels
  2. Build survival model endpoint (/fill-prob?h=...)
  3. Build regime-conditioned impact endpoint (/impact-forecast)
  4. Add action scorer (/execution-score) returning mean/q95/SLA components
  5. Wire router to consume top-1 action + reason codes
  6. Log every decision tuple for replay and attribution

If scope is tight, start with one symbol cluster and one venue, then expand.
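The decision tuple in step 6 could be logged as a flat record; the field names here are assumptions sketching one possible schema:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DecisionTuple:
    """One logged router decision, for replay and attribution (step 6)."""
    ts_ns: int           # decision timestamp (ns since epoch)
    symbol: str
    action: str          # chosen action name
    mean_bps: float      # expected-slippage component of the score
    q95_bps: float       # tail component
    sla_risk: float      # completion-risk component
    reason_code: str     # why this action won (for audit)

rec = DecisionTuple(1_700_000_000_000, "XYZ", "rest_passive",
                    mean_bps=0.5, q95_bps=4.0, sla_risk=0.3,
                    reason_code="tail_ok")
line = json.dumps(asdict(rec))   # one JSON line per decision
```

Logging the score components, not just the winner, is what makes post-trade attribution of mean vs tail vs SLA effects possible on replay.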


10) Failure modes and mitigations

  1. Oscillation at regime boundaries: add hysteresis to the action score so small state changes do not flip the decision
  2. Unstable kernel fits on sparse names: shrink toward sector/global priors
  3. Stale estimates in fast markets: intraday adaptation plus AMBER fallback


11) What this gives in practice

  1. One continuous risk-adjusted tradeoff instead of a passive/aggressive rule switch
  2. Explicit pricing of non-fill optionality cost on the passive side
  3. Auditable, replayable decisions via logged tuples and reason codes

