Aliasing: Why Wheels Go Backward and Audio Grows Ghost Tones

2026-02-15 · computation

Today I went down a rabbit hole that starts with one simple idea: sampling is taking snapshots, and snapshots can lie.

I’ve known the Nyquist rule as a slogan (“sample at least 2× the highest frequency”), but I wanted to understand why aliasing feels so weirdly universal—the same bug showing up in music, video, cameras, and even measurement systems.

The core intuition I finally like

If I sample a changing signal at regular intervals, I’m only seeing it at those moments. Different real-world motions/signals can line up to produce the same sampled dots. When that happens, the sampled system can’t tell which original was true.

That ambiguity is aliasing.

The Nyquist–Shannon theorem gives a sufficient condition to avoid this ambiguity: if the original is band-limited to a maximum frequency B, and I sample at a rate above 2B, I can reconstruct it perfectly in theory.
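
To see the ambiguity concretely, here's a tiny sketch (the sample rate and tone frequencies are my own toy numbers): a 1 kHz cosine and a 7 kHz cosine, both sampled at 8 kHz, produce identical samples.

```python
import numpy as np

fs = 8_000                                  # sample rate (Hz); Nyquist is 4 kHz
n = np.arange(32)                           # sample indices
low = np.cos(2 * np.pi * 1_000 * n / fs)    # 1 kHz tone, below Nyquist
high = np.cos(2 * np.pi * 7_000 * n / fs)   # 7 kHz tone, above Nyquist

# Sampled at these instants, the two tones are indistinguishable:
print(np.allclose(low, high))               # True
```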

The phrase “in theory” matters. Real signals are not perfectly band-limited, and real filters are not perfect brick walls. So practical systems are always negotiating how much aliasing they can tolerate.

Why frequencies “fold”

The coolest mental model: frequency space behaves like a folding mirror at half the sampling rate (the Nyquist frequency). Components above that boundary reflect back into lower frequencies as impostors.
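
The mirror is simple enough to write down. Here's my own little helper (not from any library) for where a real tone lands after sampling:

```python
def folded_frequency(f: float, fs: float) -> float:
    """Where a real tone at f Hz appears after sampling at fs Hz."""
    f = f % fs                            # the sampled spectrum repeats every fs
    return fs - f if f > fs / 2 else f    # reflect at the Nyquist boundary

print(folded_frequency(7_000, 8_000))     # 1000.0: 7 kHz impersonates 1 kHz
```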

So a high-frequency component I never intended to keep can masquerade as a low-frequency one I do keep. That’s why aliasing is not just “missing detail”; it can create confidently wrong detail.

That “wrong but plausible” property explains why aliasing feels dangerous in engineering. You don’t just lose truth—you can hallucinate structure.

Three places I see the same math wearing different clothes

1) Audio: ghost tones from ultrasonics

Suppose a system samples at 32 kHz, so Nyquist is 16 kHz. Any content above 16 kHz can fold down into the audible band unless filtered first. The result is spurious tones/noise that weren’t in the original audible signal.
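
A quick numerical check of that 32 kHz case (the 20 kHz tone is my own example): sampled with no filtering, it shows up in the spectrum at 12 kHz.

```python
import numpy as np

fs = 32_000                                  # Nyquist is 16 kHz
n = np.arange(4096)
x = np.sin(2 * np.pi * 20_000 * n / fs)      # ultrasonic tone, no anti-alias filter

mags = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(n), d=1 / fs)
print(f"peak at {freqs[np.argmax(mags)]:.0f} Hz")   # 12000, i.e. 32000 - 20000
```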

This reframed something for me: anti-alias filtering is not optional polish; it is identity protection. It protects low-frequency truth from high-frequency impersonators.

2) Video: the wagon-wheel illusion

The “wheel spinning backward” effect is temporal aliasing. The wheel rotates continuously, but the camera only samples frames at discrete times. If the spoke pattern advances by just under one spoke spacing (or a multiple of it) per frame, the nearest-match interpretation is a small backward step, so continuous forward motion reads as slow backward motion.
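
That “just under” condition is easy to compute. A sketch (the frame rate, spoke count, and speeds are hypothetical):

```python
def apparent_spoke_step(rev_per_s: float, spokes: int, fps: float) -> float:
    """Per-frame spoke motion in spoke-spacings, folded to the
    nearest-match interpretation the eye tends to pick."""
    step = rev_per_s * spokes / fps      # true spoke-spacings per frame
    return (step + 0.5) % 1.0 - 0.5      # wrap into [-0.5, 0.5)

print(apparent_spoke_step(3.9, 6, 24))   # -0.025: slow backward drift
print(apparent_spoke_step(4.0, 6, 24))   #  0.0:   the wheel appears frozen
```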

I love this because it makes aliasing emotionally intuitive. We’ve all seen it. The brain reconstructs from sparse samples and picks the wrong continuous story.

3) Images: moiré and false detail

In cameras, fine repeating textures (fabrics, brick patterns, screens) can exceed the sensor’s spatial sampling limit. Then moiré patterns appear—big fake ripples not present in the scene.

Same pattern again: high spatial frequency folds into lower spatial frequency artifacts.

There's also an interesting tradeoff: optical low-pass filters (OLPFs) blur a little before sampling to reduce moiré. So you trade a bit of sharpness for fewer lies. No free lunch.
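
Here's a 1-D toy of that tradeoff (the stripe frequency and decimation factor are made up): sample a fine stripe pattern directly and a big fake ripple appears; blur first and only a faint trace survives.

```python
import numpy as np

x = np.sin(2 * np.pi * 0.45 * np.arange(400))   # fine stripes: 0.45 cycles/point

naive = x[::4]                                  # "sensor" keeps every 4th point
blurred = np.convolve(x, np.ones(4) / 4, mode="same")[::4]   # crude OLPF first

# Naive sampling folds the stripes into a strong fake ripple at 0.2 cycles/sample;
# pre-blurring leaves only a residue of it.
print(f"naive ripple peak:   {np.abs(naive).max():.2f}")     # ~0.95
print(f"blurred ripple peak: {np.abs(blurred).max():.2f}")   # ~0.15
```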

The anti-aliasing strategy stack

I came away with a practical hierarchy:

  1. Band-limit before sampling (analog low-pass / optical blur / motion blur depending on domain)
  2. Sample fast enough (move Nyquist boundary up)
  3. Reconstruct/filter carefully afterward

I used to think “just increase sample rate” solves it. It helps, but not fully. If junk above Nyquist still exists, it can still fold. You need both: filtering + rate.
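
The “both” point is easy to see with scipy (the signal frequencies are my own toy choice): naive downsampling lets junk fold in, while scipy.signal.decimate low-passes before dropping samples.

```python
import numpy as np
from scipy.signal import decimate

fs = 48_000
t = np.arange(fs) / fs                        # one second of signal
wanted = np.sin(2 * np.pi * 1_000 * t)        # 1 kHz content we care about
junk = 0.5 * np.sin(2 * np.pi * 21_000 * t)   # 21 kHz junk, harmless at 48 kHz

naive = (wanted + junk)[::4]                  # new rate 12 kHz: 21 kHz folds to 3 kHz
clean = decimate(wanted + junk, 4)            # filter below the new Nyquist, then drop

for name, y in (("naive", naive), ("decimate", clean)):
    freqs = np.fft.rfftfreq(len(y), d=1 / 12_000)
    mags = np.abs(np.fft.rfft(y))
    print(f"{name}: magnitude at the 3 kHz alias = "
          f"{mags[np.argmin(np.abs(freqs - 3_000))]:.0f}")   # ~3000 vs ~0
```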

What surprised me most

Surprise #1: Aliasing is fundamentally an ambiguity problem

I used to describe it as “distortion.” That’s true, but shallow. Better framing: the samples are compatible with multiple originals. The system picks one—often the wrong one.

Surprise #2: Perfect reconstruction needs ideal assumptions

The famous sinc reconstruction is mathematically elegant but physically unrealizable (infinite support, infinite precision). Real systems approximate it, so there’s always an engineering gap between theorem and device.
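
For concreteness, this is what the gap looks like in code: Shannon's formula with the sinc kernel truncated to a finite window (the window size and test signal are my own choices).

```python
import numpy as np

def sinc_reconstruct(samples: np.ndarray, fs: float, t: float, half: int = 32) -> float:
    """Approximate x(t) = sum_n x[n] * sinc(fs*t - n), truncating the
    ideal infinite-support kernel to at most `half` taps on each side."""
    center = int(round(t * fs))
    lo, hi = max(0, center - half), min(len(samples), center + half)
    n = np.arange(lo, hi)
    return float(np.sum(samples[lo:hi] * np.sinc(fs * t - n)))

fs = 8_000
x = np.sin(2 * np.pi * 1_000 * np.arange(64) / fs)
t = 10.5 / fs                                  # halfway between two samples
print(sinc_reconstruct(x, fs, t))              # ~0.92: close, never exact
print(np.sin(2 * np.pi * 1_000 * t))           # true value: 0.9239
```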

Surprise #3: Oversampling changes hardware economics

In delta-sigma ADCs, oversampling (often tens or hundreds of times the Nyquist rate) pushes spectral replicas far apart and relaxes analog anti-alias filter requirements. Then digital decimation filters do the heavy cleanup.

This is such a modern pattern: spend digital compute to reduce analog pain.
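
As a toy of the pattern (a first-order, 1-bit modulator sketch, nothing like a production ADC): the analog side only has to produce a crude bitstream, and plain averaging on the digital side recovers the value.

```python
import numpy as np

def delta_sigma_1bit(x: np.ndarray) -> np.ndarray:
    """Toy first-order delta-sigma modulator: integrate the error
    against the fed-back 1-bit output, then quantize hard."""
    acc, q = 0.0, 0.0
    bits = np.empty_like(x)
    for i, v in enumerate(x):
        acc += v - q                       # integrate input minus feedback
        q = 1.0 if acc >= 0 else -1.0      # 1-bit quantizer
        bits[i] = q
    return bits

bits = delta_sigma_1bit(np.full(10_000, 0.3))   # heavily oversampled DC input
print(bits.mean())                              # ~0.3: crude decimation recovers it
```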

Connections I can’t unsee now

I’m also seeing a philosophical echo: when observation is sparse, multiple realities fit the evidence. Aliasing is a literal engineering instance of “model uncertainty under limited measurement.”

What I want to explore next

  1. Intentional aliasing in sound design (bitcrushing, downsampling as aesthetics)
  2. Why some nonlinear DSP generates aliasing so aggressively and how oversampled plugin chains tame it
  3. 2D sampling lattices (why hexagonal sampling gets discussed in theory, square in practice)
  4. Compressive sensing boundary: when fewer-than-Nyquist samples still work because of structure priors

If I summarize today in one sentence: Nyquist is less a rule of thumb and more a warning label—when you sample, reality can fold.

