Goodhart’s Law: When Proxies Become Targets
TL;DR
A metric is useful as a proxy until optimization pressure turns it into a target. Then teams start optimizing the number instead of the underlying reality.
That is Goodhart’s Law in practice.
If you run trading, ML, product, or ops systems, assume this failure mode is always nearby. Design metrics as instruments with guardrails, not as single-score truth.
1) What Goodhart’s Law means operationally
Classic intuition:
“Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.”
Modern paraphrase:
“When a measure becomes a target, it ceases to be a good measure.”
Practical interpretation:
- A metric starts correlated with what you care about.
- You attach incentives, thresholds, or hard optimization to that metric.
- People/algorithms discover shortcut paths that increase the metric while degrading the true objective.
So the issue is not “metrics are bad.” The issue is unbounded optimization of imperfect proxies.
2) Why this happens (mechanism, not slogan)
Every metric has noise, blind spots, and modeling assumptions. Under weak pressure, those flaws are tolerable. Under strong pressure, systems exploit them.
Common mechanisms:
Selection on noise
- Extreme observed values are disproportionately inflated by noise.
- Optimizing only top-ranked items therefore over-selects lucky noise.
Distribution shift from optimization
- Policy changes behavior, which changes data distribution.
- The proxy→goal relationship learned in the old regime no longer holds.
Causal breakage
- Correlates are mistaken for causes.
- Interventions on the proxy do not improve the real outcome.
Strategic/adversarial gaming
- Humans and agents adapt to pass the metric check, not the mission.
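The selection-on-noise mechanism is easy to demonstrate with a small simulation. This is a sketch with arbitrary scales: each item has a true quality and a noisy measured score, and we check how much of the top-ranked item's score is pure noise.

```python
import random

random.seed(0)

def selection_gap(n_items: int, noise_sd: float = 1.0, trials: int = 100) -> float:
    """Average noise contribution in the winner's measured score.

    Each item has true quality ~ N(0, 1) and measured score = quality + noise,
    noise ~ N(0, noise_sd). Picking the top measured score systematically
    picks items whose noise happened to be large and positive.
    """
    total_noise = 0.0
    for _ in range(trials):
        items = [(random.gauss(0, 1), random.gauss(0, noise_sd))
                 for _ in range(n_items)]
        quality, noise = max(items, key=lambda it: it[0] + it[1])
        total_noise += noise  # how much of the "win" was luck
    return total_noise / trials

# Harder selection pressure (more candidates) means the winner's score
# is increasingly made of noise, not quality:
print(round(selection_gap(n_items=10), 2))
print(round(selection_gap(n_items=2000), 2))
```

Both averages come out positive, and the gap grows with the candidate pool: stronger selection pressure on the same noisy proxy buys proportionally more noise.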
3) The four useful Goodhart variants (for diagnosis)
A practical taxonomy (after Manheim & Garrabrant):
A) Regressional Goodhart
You optimize an imperfect proxy so hard that you mostly select noise at the tails.
- Symptom: pilot wins vanish in production.
- Example: choosing strategies by best backtest Sharpe without penalizing variance/overfit.
B) Extremal Goodhart
Optimization pushes into regions where the historical proxy→goal relationship no longer applies.
- Symptom: “worked in normal range, broke at scale.”
- Example: growth tactics that improve click rate until spam/fatigue destroys retention.
C) Causal Goodhart
You intervene on a variable that predicted outcome but does not causally produce it.
- Symptom: KPI improves, business outcome flat/down.
- Example: maximizing app opens as a proxy for value delivered.
D) Adversarial Goodhart
Agents strategically manipulate measurement once incentives are known.
- Symptom: sudden KPI jump with suspiciously weak downstream impact.
- Example: vendor/fund/team adjusts behavior right around scoring boundaries.
4) Where this burns real systems
Quant / execution
- Optimizing fill rate or participation alone can worsen implementation shortfall.
- Maker-heavy tactics can inflate passive-fill KPI while increasing adverse selection.
- Backtest slippage improvements disappear when venue behavior adapts.
Product
- Session length rises while user satisfaction/retention falls (doom-scroll equilibrium).
- Ticket resolution time falls because agents close tickets prematurely.
ML / ranking
- Offline metric (AUC/NDCG) improves; online utility drops due to feedback loops and selection effects.
- Label proxies drift after policy changes.
Organizations
- Teams learn to pass scorecards (OKR theater) while shipping less durable outcomes.
- Public-sector testing/accountability systems produce teaching-to-the-test dynamics.
5) A fast Goodhart-risk audit (use before trusting a KPI)
For each high-stakes metric, ask:
1. Proxy gap: what exactly is unmeasured vs the true objective?
2. Pressure level: how much bonus/punishment is attached?
3. Gaming surface: what is the easiest way to improve the number without mission progress?
4. Regime dependence: does the proxy-goal link hold outside the historical range?
5. Counter-metrics: what can catch "fake wins"?
6. Latency: when do true outcomes appear (days/weeks later)?
7. Owner incentives: who benefits from metric movement regardless of truth?
If you cannot answer #3 and #5 clearly, you are likely under-defended.
6) Design patterns that actually reduce Goodhart failures
1) Metric portfolios, not single-score control
Use a basket:
- North-star outcome (hard to game, slower)
- Leading indicators (faster but noisy)
- Integrity/cost constraints (abuse detectors)
Example (execution):
- Outcome: arrival-vs-decision IS (risk-adjusted)
- Leading: spread capture, queue retention, fill quality by regime
- Constraints: toxicity/adverse-selection, cancel-to-trade anomalies, tail-loss budget
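One way to encode such a basket is a small structure that refuses to count a leading-indicator "win" unless the integrity constraints pass. A sketch: the metric names, limits, and readings below are hypothetical, not a real execution stack.

```python
from dataclasses import dataclass

@dataclass
class MetricPortfolio:
    """A metric basket: slow outcome + fast leading indicators, gated by constraints."""
    outcome: str                   # north-star outcome metric (hard to game, slower)
    leading: list[str]             # leading indicators (faster but noisy)
    constraints: dict[str, float]  # integrity metric -> maximum allowed value

    def evaluate(self, readings: dict[str, float]) -> dict:
        # Fail closed: a missing integrity reading counts as a violation.
        violations = [name for name, limit in self.constraints.items()
                      if readings.get(name, float("inf")) > limit]
        return {
            "outcome": readings.get(self.outcome),
            "leading": {m: readings.get(m) for m in self.leading},
            "valid": not violations,  # leading "wins" only count when True
            "violations": violations,
        }

# Illustrative basket loosely following the execution example above:
portfolio = MetricPortfolio(
    outcome="risk_adjusted_is",
    leading=["spread_capture", "queue_retention"],
    constraints={"adverse_selection_bps": 2.0, "cancel_to_trade": 20.0},
)
report = portfolio.evaluate({
    "risk_adjusted_is": -1.2, "spread_capture": 0.4,
    "queue_retention": 0.7, "adverse_selection_bps": 3.5,
    "cancel_to_trade": 12.0,
})
print(report["valid"], report["violations"])  # False ['adverse_selection_bps']
```

The design choice that matters is the gate: a great spread-capture number with a toxicity violation reports as invalid rather than as a win.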
2) Thresholds + random audits
When a metric becomes a target gate, add randomized manual or statistical audits. Audits increase the cost of gaming and reveal policy blind spots.
3) Regime-stratified reporting
Always slice by context (volatility, liquidity, cohort, difficulty). Pooled wins are untrustworthy when composition shifts.
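Composition shift is why pooled numbers mislead. A toy illustration with invented numbers: the pooled average "improves" even though the metric got worse within every regime, purely because volume moved toward the easier regime.

```python
def pooled_and_sliced(samples):
    """samples: list of (regime, value) pairs -> pooled mean and per-regime means."""
    pooled = sum(v for _, v in samples) / len(samples)
    by_regime: dict[str, list[float]] = {}
    for regime, v in samples:
        by_regime.setdefault(regime, []).append(v)
    sliced = {r: sum(vs) / len(vs) for r, vs in by_regime.items()}
    return pooled, sliced

# After the change, volume shifts toward the easy ("calm") regime while
# the score WITHIN each regime actually drops.
before = [("calm", 0.9)] * 20 + [("volatile", 0.5)] * 80
after = [("calm", 0.8)] * 80 + [("volatile", 0.4)] * 20

p_before, s_before = pooled_and_sliced(before)
p_after, s_after = pooled_and_sliced(after)
print(round(p_before, 2), round(p_after, 2))  # pooled "improves": 0.58 -> 0.72
print(s_before, s_after)                      # but every regime got worse
```

This is Simpson's paradox in KPI form; the regime slices catch what the pooled number hides.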
4) Holdout reality checks
Maintain policy-invariant holdouts where optimization pressure is lower. If KPI rises only in optimized segment, suspect gaming/overfit.
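A minimal version of this check can be expressed as a rule. This is a sketch: "uplift" here is just a period-over-period mean difference, and the 50% replication floor is an arbitrary illustrative threshold.

```python
def holdout_check(optimized_uplift: float, holdout_uplift: float,
                  ratio_floor: float = 0.5) -> str:
    """Compare KPI uplift in the optimized segment vs a policy-invariant holdout.

    If the holdout shows far less uplift than the optimized segment,
    suspect gaming/overfit rather than a real improvement.
    """
    if optimized_uplift <= 0:
        return "no-win"
    if holdout_uplift >= ratio_floor * optimized_uplift:
        return "win-replicates"
    return "suspect-gaming"

print(holdout_check(optimized_uplift=0.10, holdout_uplift=0.08))  # win-replicates
print(holdout_check(optimized_uplift=0.10, holdout_uplift=0.01))  # suspect-gaming
```

In practice the comparison should account for noise (confidence intervals on both uplifts), but the shape of the rule is the point.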
5) Optimize with explicit penalties
Don’t maximize raw KPI; optimize utility:
- utility = proxy_gain − risk_penalty − integrity_penalty
Add penalties for instability, fragility, and suspicious distribution drift.
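The penalized objective above can be written directly. A sketch: the penalty terms and weights below are illustrative and would need tuning to the domain.

```python
def utility(proxy_gain: float,
            instability: float,  # e.g. KPI variance across reruns/windows
            drift_score: float,  # e.g. distance between old and new distributions
            w_risk: float = 1.0,
            w_integrity: float = 2.0) -> float:
    """utility = proxy_gain - risk_penalty - integrity_penalty."""
    risk_penalty = w_risk * instability
    integrity_penalty = w_integrity * max(0.0, drift_score)
    return proxy_gain - risk_penalty - integrity_penalty

# A big raw gain with suspicious drift can score WORSE than a modest clean gain:
print(round(utility(proxy_gain=1.0, instability=0.1, drift_score=0.0), 2))  # 0.9
print(round(utility(proxy_gain=2.0, instability=0.3, drift_score=0.6), 2))  # 0.5
```

Whatever optimizer sits on top now has to pay for fragility and drift, which removes the cheapest gaming paths from the search space.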
6) Rotating targets / moving score functions
If agents can perfectly reverse-engineer scoring, periodically rotate feature weights or switch scoring windows (with governance), while preserving objective intent.
7) Concrete warning signals (“smells”)
- KPI inflects sharply right after incentive or policy announcement.
- Improvements cluster right above thresholds/cutoffs.
- Variance rises while mean improves.
- Downstream outcome lags/diverges from lead metric.
- Wins disappear when independently re-measured.
- Teams resist adding counter-metrics (“this one number is enough”).
Treat these as incident triggers, not dashboard curiosities.
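One of these smells, improvements clustering just above a cutoff, can be screened for mechanically. A sketch: the window width and the 2x trigger are arbitrary heuristics, not a formal bunching test.

```python
def bunching_ratio(values: list[float], cutoff: float, window: float) -> float:
    """Ratio of observations just above the cutoff to just below it.

    Under an un-gamed, smooth distribution this should be near 1.
    A large ratio suggests results are being nudged over the line.
    """
    just_above = sum(1 for v in values if cutoff <= v < cutoff + window)
    just_below = sum(1 for v in values if cutoff - window <= v < cutoff)
    return just_above / max(1, just_below)

# Scores that pile up at 0.70-0.72 when the bonus cutoff is 0.70:
scores = [0.55, 0.61, 0.66, 0.68, 0.70, 0.70, 0.71, 0.71, 0.72, 0.80]
ratio = bunching_ratio(scores, cutoff=0.70, window=0.03)
print(ratio, ratio >= 2.0)  # large ratio -> trigger an audit
```

A real deployment would compare against a fitted baseline density, but even this crude counter catches blatant threshold gaming.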
8) Minimal implementation checklist
For each mission-critical KPI:
- Write a one-paragraph proxy contract: what it measures, what it misses.
- Define at least two counter-metrics (one for quality, one for integrity).
- Add regime slices and monitor composition drift.
- Add an anti-gaming audit (random sample or shadow measurement).
- Track lead metric vs delayed true outcome.
- Trigger review if lead improves while outcome/counter-metrics worsen.
- Revisit metric design quarterly after incentive changes.
This is lightweight and catches most early failures.
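The review trigger in the checklist is itself a few lines of monitoring logic. A sketch: "improves"/"worsens" here are just sign checks on period-over-period deltas, oriented so positive means better.

```python
def needs_review(lead_delta: float, outcome_delta: float,
                 counter_deltas: list[float]) -> bool:
    """Trigger a metric review when the lead indicator improves while the
    delayed true outcome, or any counter-metric, moves the wrong way.

    All deltas are period-over-period changes, oriented so positive = better.
    """
    lead_improved = lead_delta > 0
    something_worsened = outcome_delta < 0 or any(d < 0 for d in counter_deltas)
    return lead_improved and something_worsened

# Healthy: lead and outcome move together.
print(needs_review(lead_delta=0.2, outcome_delta=0.1, counter_deltas=[0.0]))   # False
# Goodhart pattern: lead metric up, true outcome down.
print(needs_review(lead_delta=0.2, outcome_delta=-0.05, counter_deltas=[0.1])) # True
```

Production versions would add noise thresholds and a lag alignment between lead and outcome, but the divergence condition is the core of the trigger.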
Closing
Goodhart’s Law is not anti-measurement. It is anti-naive optimization.
Metrics are maps, not territory. As optimization pressure rises, map distortions become strategic terrain. The winning move is not “use fewer metrics,” but use metrics with adversarial humility.
References
- Charles A. E. Goodhart (1975), Problems of Monetary Management: The U.K. Experience.
- Donald T. Campbell (1976), Assessing the Impact of Planned Social Change.
- Marilyn Strathern (1997), “‘Improving ratings’: audit in the British University system.”
- David Manheim & Scott Garrabrant (2019), Categorizing Variants of Goodhart’s Law (arXiv:1803.04585).