No Free Lunch in Optimization: Why “Best Algorithm” Is Usually a Category Error (Field Guide)

2026-03-01 · computation / explore


TL;DR

The No Free Lunch (NFL) idea says: averaged over all possible objective functions (under specific assumptions), no optimizer wins overall.
If Algorithm A beats B on one set of problems, B must beat A on another.

So the practical move is not "find the best algorithm," but:

  1. define your problem family,
  2. encode your prior assumptions,
  3. select (or learn) algorithms for that family.

NFL is less a pessimistic theorem and more a design constraint: performance comes from problem–algorithm match.


1) The core statement (plain language)

Wolpert & Macready (1997): for finite search spaces under the classic black-box setup, all non-revisiting optimization algorithms have equal average performance when that average is taken uniformly over all possible objective functions.

A concise intuition: under the uniform prior, the value at any point you have not yet evaluated is statistically independent of everything you have seen, so a clever search and a blind one observe identically distributed value sequences. Whatever an algorithm gains by exploiting structure on some functions, it gives back on the functions with the opposite structure.

That is the “no free lunch.”
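You can check this by brute force on a toy space. The sketch below (standard library only; the toy domain and the three search rules are my own illustration, not from the paper) enumerates every function on a 3-point domain and shows that a fixed visiting order, the reverse order, and a value-peeking adaptive rule all have identical average best-so-far curves:

```python
from itertools import product

X = [0, 1, 2]                                # tiny search space
FUNCS = list(product(range(3), repeat=3))    # all 27 functions f: X -> {0,1,2}

def run(choose, f):
    """Run a non-revisiting search; record the best value seen after each step."""
    seen, trace = [], []
    while len(seen) < len(X):
        x = choose(seen, [f[s] for s in seen])
        seen.append(x)
        trace.append(max(f[s] for s in seen))
    return trace

def ascending(seen, vals):                   # fixed order 0, 1, 2
    return [x for x in X if x not in seen][0]

def descending(seen, vals):                  # fixed order 2, 1, 0
    return [x for x in reversed(X) if x not in seen][0]

def adaptive(seen, vals):                    # adapts to the observed values
    if not seen:
        return 1
    rest = [x for x in X if x not in seen]
    return max(rest) if vals[0] >= 1 else min(rest)

def mean_trace(choose):
    """Average best-so-far curve over ALL possible objective functions."""
    traces = [run(choose, f) for f in FUNCS]
    return [sum(t[k] for t in traces) / len(FUNCS) for k in range(len(X))]

for alg in (ascending, descending, adaptive):
    print(alg.__name__, mean_trace(alg))     # identical for all three
```

All three print the same average curve: once you average over every function, the adaptive rule's cleverness buys nothing.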


2) The assumptions people forget

NFL is powerful, but very assumption-sensitive. Key conditions include:

  • a finite search space and a finite set of objective values;
  • a uniform prior over all possible objective functions (or, more generally, a prior closed under permutations of the search space);
  • black-box access: the algorithm learns about f only through the points it evaluates;
  • non-revisiting algorithms: each point is evaluated at most once;
  • performance judged only from the sequence of observed objective values.

In short: NFL is exact only for a very broad, symmetric prior over problems. Real workloads are rarely that symmetric.


3) Why NFL does not mean “optimization is pointless”

A common misread:

“If no method is best, nothing matters.”

Wrong. NFL actually implies the opposite in practice:

  • real problems are not drawn uniformly from all possible functions; they have structure (smoothness, sparsity, repeated instances, correlated variables);
  • an algorithm whose assumptions match that structure can dominate within the family;
  • so the work of stating your assumptions is exactly what unlocks performance.

So NFL is an argument for thoughtful priors, not nihilism.


4) The practical flip: from “best algorithm” to “best matching policy”

A robust workflow:

Step A) Define the problem distribution you actually care about

Not abstractly: concretely.

  • typical dimensionality and variable types;
  • smoothness, multimodality, and noise level;
  • constraints and feasibility structure;
  • evaluation cost and total budget;
  • how new instances arrive (one-off, streaming, seasonal).

If you can’t describe this, you can’t meaningfully claim algorithm superiority.
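One lightweight way to force this discipline is to make the problem family an explicit artifact rather than folklore. A minimal sketch (the field names and example values are illustrative, not a standard schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProblemFamily:
    """Explicit description of the instances you actually optimize over."""
    dim_range: tuple          # typical dimensionality, e.g. (10, 50)
    smoothness: str           # "smooth" | "rugged" | "mixed"
    noise: str                # "none" | "low" | "high"
    constraints: str          # e.g. "box", "linear", "none"
    eval_cost: str            # "cheap" | "expensive"
    budget: int               # evaluations allowed per instance

# Example: the family a hypothetical production tuner actually sees
prod_family = ProblemFamily(
    dim_range=(10, 50), smoothness="rugged", noise="low",
    constraints="box", eval_cost="cheap", budget=5_000,
)
```

Any claim of the form "solver X is better" can then be required to name the `ProblemFamily` it was measured on.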

Step B) Build an algorithm portfolio

Use complementary methods (e.g., local + global, gradient-free + model-based, conservative + aggressive).
NFL implies diversity is rational under uncertainty.
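A minimal portfolio sketch, assuming a simple even split of the evaluation budget (the two toy solvers and the sphere test function are illustrative stand-ins for real portfolio members):

```python
import random

def random_search(f, dim, budget, rng):
    """Best value found by uniform random sampling (global, assumption-free)."""
    return min(f([rng.uniform(-5, 5) for _ in range(dim)]) for _ in range(budget))

def hill_climb(f, dim, budget, rng):
    """Best value found by accept-if-better Gaussian steps (local, exploitative)."""
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    best = f(x)
    for _ in range(budget - 1):
        cand = [v + rng.gauss(0, 0.3) for v in x]
        y = f(cand)
        if y < best:
            x, best = cand, y
    return best

def portfolio(f, dim, budget, solvers, seed=0):
    """Split the evaluation budget evenly across solvers; keep the best result."""
    share = budget // len(solvers)
    return min(s(f, dim, share, random.Random(seed + i))
               for i, s in enumerate(solvers))

sphere = lambda x: sum(v * v for v in x)
print(portfolio(sphere, 5, 400, [random_search, hill_climb]))
```

The even split is the crudest allocation policy; smarter portfolios reassign budget based on early progress, but the diversity argument is the same.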

Step C) Do per-instance algorithm selection

Treat solver choice as a supervised decision problem:

  • compute cheap features of each incoming instance (size, estimated ruggedness, constraint density);
  • learn a mapping from features to the solver that historically performed best;
  • keep a safe default for instances that look unlike the training data.
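The simplest possible learner already illustrates the idea: 1-nearest-neighbour selection over past (features, best solver) observations. Everything here is hypothetical (the feature vectors, solver names, and training rows are made up for illustration):

```python
# Offline observations: (feature vector, solver that won) — illustrative only.
# Features here: (estimated ruggedness, estimated smoothness), both in [0, 1].
TRAIN = [
    ((0.05, 1.0), "gradient_descent"),   # smooth, low noise
    ((0.10, 0.9), "gradient_descent"),
    ((0.80, 0.2), "evolutionary"),       # rugged, multimodal
    ((0.70, 0.3), "evolutionary"),
]

def select_solver(feat):
    """Per-instance selection: pick the solver of the nearest past instance."""
    def sqdist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(TRAIN, key=lambda row: sqdist(row[0], feat))[1]

print(select_solver((0.08, 0.95)))  # -> gradient_descent
print(select_solver((0.75, 0.25)))  # -> evolutionary
```

In practice you would replace the hand-written rows with logged benchmark results and the 1-NN rule with any classifier, but the structure (features in, solver out) stays the same.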

Step D) Evaluate on deployment-like slices

Don’t trust one average score. Inspect:

  • per-family medians, not just the pooled mean;
  • tail behavior (p90/p99 of best-found objective);
  • the slices that actually match deployment traffic;
  • how rankings shift when the instance mix shifts.
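Sliced evaluation is a few lines with the standard library. A sketch (the result rows are made-up numbers standing in for a real benchmark log):

```python
import statistics
from collections import defaultdict

# (family, best-found objective) pairs from a benchmark run — illustrative data
results = [
    ("smooth", 0.02), ("smooth", 0.05), ("smooth", 0.01), ("smooth", 0.90),
    ("rugged", 3.10), ("rugged", 2.40), ("rugged", 9.80), ("rugged", 2.90),
]

by_family = defaultdict(list)
for family, score in results:
    by_family[family].append(score)

for family, scores in sorted(by_family.items()):
    med = statistics.median(scores)
    p90 = statistics.quantiles(scores, n=10)[-1]   # last cut point = 90th pct
    print(f"{family}: median={med:.2f}  p90={p90:.2f}")
```

Note how the smooth family's outlier (0.90) barely moves the median but dominates the p90; a pooled average would have hidden both facts.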


5) “Free lunches” in the real world

You do get practical free lunches when you constrain the universe:

  • convexity buys you gradient and interior-point methods with guarantees;
  • smoothness buys you fast local convergence;
  • recurring, similar instances buy you warm starts and learned surrogates;
  • known separability or sparsity buys you decomposition.

So yes, a “free lunch” exists in practice, because practice is structured.


6) A quick 30-minute sanity experiment

If you want to feel NFL-style ranking instability:

  1. Pick 3–4 optimizers (e.g., Random Search, DE, CMA-style, local search).
  2. Build two mini test families:
    • Family S: smooth/low-noise/near-convex
    • Family R: rugged/noisy/multimodal
  3. Normalize by evaluation budget.
  4. Compare median + p90 best-found objective.

You’ll often see ranking flips across S vs R.
That flip is the operational shadow of NFL: performance is conditional on problem class.
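The steps above can be compressed into a runnable sketch. This version uses just two optimizers and assumes the usual sphere and Rastrigin test functions as stand-ins for Family S and Family R:

```python
import math
import random
import statistics

def sphere(x):        # Family S stand-in: smooth, unimodal
    return sum(v * v for v in x)

def rastrigin(x):     # Family R stand-in: rugged, multimodal
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def random_search(f, dim, budget, rng):
    return min(f([rng.uniform(-5, 5) for _ in range(dim)]) for _ in range(budget))

def local_search(f, dim, budget, rng):
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    best = f(x)
    for _ in range(budget - 1):
        cand = [v + rng.gauss(0, 0.5) for v in x]
        y = f(cand)
        if y < best:
            x, best = cand, y
    return best

def median_best(alg, f, dim=5, budget=200, reps=11, seed=0):
    """Median best-found objective over repeated runs at a fixed eval budget."""
    return statistics.median(alg(f, dim, budget, random.Random(seed + r))
                             for r in range(reps))

for name, f in [("smooth (sphere)", sphere), ("rugged (rastrigin)", rastrigin)]:
    print(f"{name}: random={median_best(random_search, f):.2f}  "
          f"local={median_best(local_search, f):.2f}")
```

On the smooth family, local search should beat random search comfortably; on the rugged family it tends to stall in local optima, and the gap narrows or flips. Extending this to four optimizers and p90 reporting is a few more lines.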


7) What NFL should change in your team behavior

NFL is not a theorem you cite once; it’s a governance habit:

  • require a stated problem family before any “algorithm X is better” claim;
  • maintain a small, complementary solver portfolio instead of a single champion;
  • re-benchmark when the instance distribution shifts;
  • track which solver wins on which slice, and revisit the matching policy regularly.


8) One-line takeaway

There is no universal optimizer; there are only optimizer–problem matches.
If you want sustained gains, improve the matching system.


References

  • Wolpert, D. H., & Macready, W. G. (1997). No Free Lunch Theorems for Optimization. IEEE Transactions on Evolutionary Computation, 1(1), 67–82.