Visual Servoing Playbook: IBVS vs PBVS vs 2.5D (Practical Robotics Guide)

2026-03-21 · robotics


Why this matters

If a robot must align to a target in the real world (grasping, docking, insertion, inspection), open-loop pose plans usually fail in the last centimeters.

Visual servoing closes that gap by using camera feedback continuously.

The hard part is not "use camera feedback." The hard part is choosing the right servo formulation.

Pick the wrong one and you get camera retreat, field-of-view (FOV) loss, jitter, singularities, or unstable endgame behavior.


1) Quick mental model

IBVS (image-based): the error is defined directly on image features, so it is robust to calibration and depth error, but the induced 3D camera path is implicit and can be surprising.

PBVS (position-based): the error is defined on an estimated 3D pose, so it yields clean Cartesian paths, but it inherits every weakness of the pose estimator and can drive the target out of view.

2.5D / Hybrid: combines image features with partial pose information (e.g., rotation recovered from a homography) to decouple rotation and translation and avoid the worst failure modes of each.


2) Core equations (operator level)

Let s be the measured image features, s* the desired features, and v the camera twist (linear and angular velocity).

IBVS relation

dot(s) = L_s * v

v = -lambda * L_s^+ * (s - s*)

where L_s^+ is the Moore-Penrose pseudo-inverse of the interaction matrix L_s and lambda > 0 is a scalar gain.

For a normalized point feature (x, y) with depth Z, a common block in L_s is:

[ -1/Z,   0,   x/Z,   x*y,   -(1+x^2),   y ]
[   0,  -1/Z,  y/Z,  1+y^2,   -x*y,     -x ]

(Exact sign/frame convention depends on implementation and camera/robot frames.)
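The law above can be sketched in NumPy for a set of point features. This is a minimal illustration under the sign convention shown, not a production controller; `interaction_matrix` and `ibvs_twist` are hypothetical helper names, and the depths Z are assumed known or approximated.

```python
import numpy as np

def interaction_matrix(x: float, y: float, Z: float) -> np.ndarray:
    """2x6 interaction-matrix block for one normalized point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,      -(1.0 + x**2), y],
        [0.0,      -1.0 / Z, y / Z, 1.0 + y**2, -x * y,        -x],
    ])

def ibvs_twist(points, desired, depths, lam=0.5):
    """Camera twist v = -lambda * pinv(L) * (s - s*), L stacked over all points."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    err = (np.asarray(points) - np.asarray(desired)).reshape(-1)
    return -lam * np.linalg.pinv(L) @ err

# Four points already at their goal: the commanded twist is zero.
pts = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
v = ibvs_twist(pts, pts, depths=[1.0] * 4)
```

Note that a single point constrains only 2 of the 6 twist degrees of freedom, which is why at least three (in practice four) non-degenerate points are stacked.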

PBVS relation

Estimate the target pose in the camera frame, compute the relative transform error T_err to the goal, then command a twist from the translational and rotational error:

v = -lambda * [ t_err ; theta*u ]

where t_err is the translation error and theta*u is the axis-angle form of the rotation error.

Practical note: PBVS quality is dominated by pose-estimation quality (PnP, calibration, latency, outlier rejection).
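A minimal sketch of this law in NumPy, assuming the pose error is already available as a translation vector t_err and a rotation matrix R_err in the camera frame; `axis_angle` and `pbvs_twist` are illustrative names, not a library API.

```python
import numpy as np

def axis_angle(R: np.ndarray) -> np.ndarray:
    """Recover theta*u from a rotation matrix (log map, small-angle safe)."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-8:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def pbvs_twist(t_err: np.ndarray, R_err: np.ndarray, lam=0.5) -> np.ndarray:
    """v = -lambda * [ t_err ; theta*u ]."""
    return -lam * np.concatenate([t_err, axis_angle(R_err)])

# Pure translation error, no rotation error:
v = pbvs_twist(np.array([0.1, 0.0, 0.2]), np.eye(3), lam=1.0)
# translation part is -[0.1, 0, 0.2]; rotation part is zero
```

Everything upstream of this function (PnP, calibration, outlier rejection) determines whether t_err and R_err are trustworthy, which is the practical point above.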


3) Decision matrix (what to deploy first)

Use IBVS first when:

  • Calibration and depth estimates are rough, but features track reliably.
  • Keeping the target in the field of view matters more than the shape of the Cartesian path.

Use PBVS first when:

  • You have good calibration and a robust pose estimator (e.g., fiducials + PnP).
  • You want predictable straight-line Cartesian motion and visibility is easy to maintain.

Use 2.5D / hybrid when:

  • Large rotations are involved (classic IBVS camera-retreat territory).
  • You need image-space visibility guarantees and a reasonable 3D path at the same time.


4) System architecture that actually works

  1. Perception front-end

    • Track features (corners/lines/markers) with quality score.
    • Reject outliers (RANSAC, temporal consistency, innovation gating).
  2. State filtering

    • Smooth feature/pose estimates (EKF/UKF or simple low-lag filters).
    • Keep filter lag bounded; excess smoothing adds phase lag and destabilizes the loop.
  3. Servo controller

    • Compute L_s (or pose error map) each cycle.
    • Use damped pseudo-inverse near singularities.
    • Apply gain scheduling by depth/error magnitude.
  4. Robot interface

    • Velocity limits, acceleration/jerk limits, watchdog.
    • Hard safety envelope for workspace and joint constraints.
  5. Supervisor

    • Mode switching (SEARCH -> ACQUIRE -> SERVO -> INSERT/GRASP -> HOLD/RECOVER).
    • Visibility guardrails and fallback actions.
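Two of the servo-controller items above (damped pseudo-inverse, gain scheduling) can be sketched as follows. The damping factor mu and the schedule thresholds are illustrative values to tune per system; `scheduled_gain` is a hypothetical helper.

```python
import numpy as np

def damped_pinv(L: np.ndarray, mu: float = 1e-3) -> np.ndarray:
    """Damped least-squares inverse L^T (L L^T + mu^2 I)^-1.

    Stays bounded near singular feature geometries where a plain
    pseudo-inverse would produce huge velocity commands.
    """
    n = L.shape[0]
    return L.T @ np.linalg.inv(L @ L.T + mu**2 * np.eye(n))

def scheduled_gain(err_norm: float, lam_far=0.8, lam_near=0.2, near=0.02):
    """Drop the gain once the error is small to tame terminal jitter."""
    return lam_near if err_norm < near else lam_far

def servo_step(L: np.ndarray, err: np.ndarray, mu: float = 1e-3) -> np.ndarray:
    lam = scheduled_gain(np.linalg.norm(err))
    return -lam * damped_pinv(L, mu) @ err
```

A smooth schedule (e.g., interpolating between lam_far and lam_near) avoids the small command discontinuity this step function introduces at the threshold.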

5) Tuning workflow (fastest path to stable behavior)

Phase A: static bench

  • Robot disabled. Verify tracking quality, pose-estimate noise, and measured end-to-end latency.

Phase B: low-gain closed loop

  • Close the loop with a deliberately small lambda; confirm monotone error decay with no oscillation.

Phase C: raise performance carefully

  • Increase gain and add scheduling by depth/error magnitude; watch for overshoot and terminal jitter.

Phase D: stress tests

  • Lighting changes, partial occlusion, target motion, forced feature loss; confirm the supervisor recovers.


6) Common failure modes (and fixes)

  1. Camera retreat / weird long path

    • Typical in naive IBVS setups.
    • Fix: feature selection redesign, hybrid/2.5D strategy, trajectory constraints.
  2. Target leaves field of view (PBVS)

    • 3D-optimal path is not visibility-optimal.
    • Fix: add visibility constraints or image-space secondary task.
  3. Jitter near goal

    • Pose noise + high gain + latency.
    • Fix: lower terminal gain, filtered target update, deadband/hysteresis.
  4. Singularity/ill-conditioning

    • Feature geometry degenerates.
    • Fix: damped inverse, feature set diversification, re-acquire strategy.
  5. False stability in sim, failure on robot

    • Unmodeled delay, rolling shutter, actuation saturation.
    • Fix: measure end-to-end delay and include it in control tuning.
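The deadband/hysteresis fix for jitter near the goal (failure mode 3) can be sketched as a small wrapper around the commanded twist. The enter/exit thresholds below are made-up example values; the class name is hypothetical.

```python
import numpy as np

class DeadbandHysteresis:
    """Zero the command once the error is small; re-engage only after
    the error grows past a larger threshold, so noise cannot toggle
    the servo on and off at the boundary."""

    def __init__(self, enter: float = 0.005, exit: float = 0.015):
        assert exit > enter, "hysteresis requires exit > enter"
        self.enter, self.exit = enter, exit
        self.holding = False

    def apply(self, v: np.ndarray, err_norm: float) -> np.ndarray:
        if self.holding:
            if err_norm > self.exit:   # error grew: re-engage the servo
                self.holding = False
        elif err_norm < self.enter:    # converged: stop commanding
            self.holding = True
        return np.zeros_like(v) if self.holding else v

db = DeadbandHysteresis()
v1 = db.apply(np.array([0.1]), err_norm=0.05)   # far from goal: passes through
v2 = db.apply(np.array([0.1]), err_norm=0.003)  # inside deadband: zeroed
v3 = db.apply(np.array([0.1]), err_norm=0.01)   # still below exit: stays zeroed
```

Combine this with the lowered terminal gain; the deadband handles the last residual chatter that gain scheduling alone cannot remove.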

7) Metrics to run in production/field tests

Treat these as deployment gates, not "nice to have" charts. At minimum:

  • Convergence time and per-trial success rate.
  • Terminal error (image-space, plus Cartesian if ground truth is available).
  • Visibility-loss rate (fraction of runs where the target leaves the FOV).
  • Command-saturation rate and measured end-to-end latency (capture to actuation).


8) Minimal launch checklist

  • Intrinsic and hand-eye calibration verified; end-to-end delay measured.
  • Outlier rejection and visibility guardrails enabled.
  • Velocity/acceleration limits, watchdog, and safety envelope active.
  • Damped inverse and gain scheduling configured; terminal behavior tested at low speed.
  • Fallback/recovery modes (SEARCH, RECOVER) exercised at least once on hardware.
