Phase 3 — Fine-tuning with human perception

Icons deserve optical centering, not geometric.

A biologically-inspired algorithm that simulates retinal edge detection and cortical compression to find where icons look centered. Backed by human perception data from 1,000+ icons.

Geometric center
Optical center

Community

Add your voice.

Think optical centering should be in CSS? Leave a comment and your email to get notified when the open dataset is published.


Geometric centering looks off.

When you place an asymmetric icon inside a button using align-items: center, it looks wrong. The visual weight of the shape doesn't match its bounding box. Designers nudge pixels manually. Every time.

Manual pixel nudging

Designers adjust centering by eye on every icon, every button, every component. There is no standard.

Bounding box lies

CSS centers the bounding box, not the visual mass. For a play icon, the geometric center is left of where your eyes expect it.

No CSS primitive

Fonts carry metrics: ascender, descender, baseline. Icons carry nothing. There is no built-in way to describe where an icon "looks" centered.

The Science

How we model the human eye.

Our algorithm doesn't guess where things look centered. It approximates how your visual system processes shapes — from retinal edge detection to cortical compression. Each stage is grounded in published neuroscience.

01
👁

Retinal Edge Detection

Marr & Hildreth, 1980

Your retina has ON-center/OFF-surround receptive fields that enhance edges and suppress uniform regions. We simulate this with a Difference of Gaussians (DoG) filter.

DoG = G(σ=1.0) − G(σ=1.6)
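As a sketch, the 1-D version of that filter can be written directly. This is a toy illustration using the σ values above; the actual pipeline applies it in 2-D over the weight map, and the function names here are my own:

```python
import math

def gaussian(x, sigma):
    """Value of a normalized 1-D Gaussian at x."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def dog(x, sigma_center=1.0, sigma_surround=1.6):
    """Difference of Gaussians: narrow excitatory center minus
    wide inhibitory surround, mimicking an ON-center receptive field."""
    return gaussian(x, sigma_center) - gaussian(x, sigma_surround)

center_response = dog(0.0)    # positive: ON-center excitation
surround_response = dog(3.0)  # negative: OFF-surround inhibition
```

The response is positive at the center and negative in the surround, so a uniform region largely cancels out while an edge leaves a strong signed response — exactly the "enhance edges, suppress flat fills" behavior described above.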
02
🧠

Cortical Compression

Naka & Rushton, 1966

V1 cortex neurons respond with a compressive power law. Dense, high-contrast regions get compressed while edges gain relative emphasis. This is why you perceive contours more than fill.

w' = w^0.7
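A toy sketch of the compressive step. The exponent 0.7 is from the pipeline; the example weights are my own:

```python
def compress(w, gamma=0.7):
    """Power-law compression approximating the Naka-Rushton
    response: large weights are squashed, small ones preserved."""
    return w ** gamma

# A dense fill pixel (weight 1.0) vs. a faint edge pixel (weight 0.1):
dense = compress(1.0)
faint = compress(0.1)
# Before compression the fill outweighs the edge 10:1;
# after, the ratio drops to roughly 5:1, so contours gain relative emphasis.
ratio = dense / faint
```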
03

Contour Dominance

Proffitt, Cutting & Stier, 1983

Your perceived center of a shape is dominated by its boundary, not its interior mass. We compute an edge-weighted centroid using a Sobel operator on the preprocessed weight map.

Edge centroid: 40% weight
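A minimal pure-Python sketch of that edge-weighted centroid. The grid representation and function names are assumptions; a real implementation would vectorize this:

```python
def sobel_magnitude(img):
    """Gradient magnitude via the 3x3 Sobel kernels (zero-padded borders)."""
    h, w = len(img), len(img[0])
    def px(y, x):
        return img[y][x] if 0 <= y < h and 0 <= x < w else 0.0
    mag = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = (px(y-1, x+1) + 2*px(y, x+1) + px(y+1, x+1)
                  - px(y-1, x-1) - 2*px(y, x-1) - px(y+1, x-1))
            gy = (px(y+1, x-1) + 2*px(y+1, x) + px(y+1, x+1)
                  - px(y-1, x-1) - 2*px(y-1, x) - px(y-1, x+1))
            mag[y][x] = (gx * gx + gy * gy) ** 0.5
    return mag

def edge_centroid(img):
    """Centroid weighted by edge strength rather than fill mass."""
    mag = sobel_magnitude(img)
    total = sum(map(sum, mag))
    cx = sum(x * mag[y][x] for y in range(len(img)) for x in range(len(img[0]))) / total
    cy = sum(y * mag[y][x] for y in range(len(img)) for x in range(len(img[0]))) / total
    return cx, cy
```

Because the weights come from gradient magnitude, interior fill contributes nothing: only the boundary pulls the centroid, matching the contour-dominance finding.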
04

Bounding Shape

Andrew, 1979 (monotone chain)

The convex hull of visible pixels defines the shape's outer envelope. Its centroid captures the structural balance that quick glances perceive — the "at a glance" impression.

Hull centroid: 30% weight
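A compact sketch of both pieces — Andrew's monotone chain for the hull, and the shoelace formula for its centroid (pure Python, with my own function names):

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_centroid(hull):
    """Area-weighted centroid of a simple polygon (shoelace formula)."""
    a = cx = cy = 0.0
    n = len(hull)
    for i in range(n):
        x0, y0 = hull[i]
        x1, y1 = hull[(i + 1) % n]
        w = x0 * y1 - x1 * y0
        a += w
        cx += (x0 + x1) * w
        cy += (y0 + y1) * w
    return cx / (3 * a), cy / (3 * a)
```

Note the centroid is area-weighted, not a plain average of hull vertices — a long flat edge with few vertices still pulls its share of the envelope.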
05

Symmetry Detection

Sasaki et al., 2005

The Lateral Occipital Complex detects symmetry at ~220ms. For symmetric shapes, the perceived center snaps to the symmetry axis intersection. We scan 36 angles to find the dominant axis.

Symmetry center: 30% weight
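A toy point-set version of that scan (36 angles in 5° steps covering 180°). The tolerance-based matching is my simplification of what a pixel-level implementation would do:

```python
import math

def symmetry_score(points, angle, center):
    """Fraction of points whose mirror image across the axis through
    `center` at `angle` is also in the set (3-decimal tolerance)."""
    cx, cy = center
    c, s = math.cos(angle), math.sin(angle)
    cos2, sin2 = c*c - s*s, 2*c*s  # cos(2a), sin(2a): reflection matrix terms
    pset = {(round(x, 3), round(y, 3)) for x, y in points}
    hits = 0
    for x, y in points:
        dx, dy = x - cx, y - cy
        rx = dx * cos2 + dy * sin2   # reflect relative coordinates
        ry = dx * sin2 - dy * cos2
        if (round(cx + rx, 3), round(cy + ry, 3)) in pset:
            hits += 1
    return hits / len(points)

def dominant_axis(points):
    """Scan 36 candidate angles and return the best-scoring axis."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    angles = [i * math.pi / 36 for i in range(36)]
    return max(angles, key=lambda a: symmetry_score(points, a, (cx, cy)))
```

For a shape mirror-symmetric about a vertical axis, the scan returns π/2, and the pipeline then snaps the perceived center onto that axis.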
06

Vertical Bias

Drain & Reuter-Lorenz, 1996

Humans consistently perceive the "center" of a shape as slightly above its geometric midpoint. We apply a 3.5% upward shift — a well-documented perceptual bias.

cy −= height × 0.035

Full pipeline

Every icon passes through this pipeline at build time. The computed offset is a single (dx, dy) pair in pixels.

Pixels → Weight map → DoG filter → w^0.7 compress → 3-way blend → Vertical bias → (dx, dy)

3-way blend: 40% Edge centroid (Sobel) · 30% Hull centroid · 30% Symmetry axis center

References

  1. Marr, D. & Hildreth, E. (1980). Theory of edge detection. Proceedings of the Royal Society B, 207(1167), 187-217.
  2. Naka, K.I. & Rushton, W.A.H. (1966). S-potentials from luminosity units in the retina of fish. Journal of Physiology, 185, 587-599.
  3. Proffitt, D.R., Cutting, J.E. & Stier, D.M. (1983). Perception of the centroid of irregular shapes. Perception & Psychophysics, 33(4), 383-388.
  4. Sasaki, Y. et al. (2005). Symmetry activates extrastriate visual cortex in human and nonhuman primates. PNAS, 102(8), 3159-3163.
  5. Drain, M. & Reuter-Lorenz, P.A. (1996). Vertical orienting control: Evidence for attentional bias. Canadian Journal of Experimental Psychology, 50(2), 181-190.
  6. Heeger, D.J. (1992). Normalization of cell responses in cat striate cortex. Visual Neuroscience, 9(2), 181-197.

The Research

Measuring where people actually look.

We are running a perceptual centering study with real participants. Each person adjusts the position of a shape until it "looks centered" inside a container.

Completed Phase 1 — Perceptual Data

Method of Adjustment study. Participants positioned icons until they "looked centered." Data used to validate the biologically-inspired v2 pipeline (RMSE: 2.99px, r=0.585).

36
Participants
2,160
Total trials
100%
Quality pass rate
20
Test icons
Completed Phase 2 — Icon Validation (2AFC)

Two-alternative forced choice tests on real icons from Lucide, Feather, Heroicons, Bootstrap Icons, and Phosphor. The biologically-inspired v2 model was validated against human perceptual judgments. PSE = 0.745 (humans prefer 74.5% of model correction).

41
Participants
5,234
Total trials
20
Test icons
999
Icons processed
5
Icon libraries
Live Phase 3 — Fine Adjustment

Method of Adjustment study: icons start near the model's predicted center. Participants nudge each icon until it looks perfectly centered. Measures per-icon model accuracy at pixel precision.

Participants
Total trials
20
Test icons
63
Trials per session
~8 min
Duration

The model

A biologically-inspired engineering approximation — not a neural simulation, but a pipeline that captures the key perceptual mechanisms that determine where a shape "looks" centered:

/* V2 Pipeline: biologically-inspired optical centering */

retina         = DoG(weight_map, σ_c=1.0, σ_s=1.6)  // lateral inhibition
cortex         = retina ^ 0.7                       // V1 compression

center         = 0.4 × edge_centroid(cortex)        // contour dominance
               + 0.3 × hull_centroid(cortex)        // bounding shape
               + 0.3 × symmetry_center(cortex)      // axis snap

optical_center = center + vertical_bias(3.5%)
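The final blend step can be sketched as a runnable Python function. The function name and signature are my own; the 40/30/30 weights and the 3.5% bias are the pipeline's:

```python
def blend_centers(edge_c, hull_c, sym_c, height,
                  w_edge=0.4, w_hull=0.3, w_sym=0.3, bias=0.035):
    """Blend the three center estimates (40/30/30), then apply the
    upward vertical-bias shift of 3.5% of the shape's height."""
    cx = w_edge * edge_c[0] + w_hull * hull_c[0] + w_sym * sym_c[0]
    cy = w_edge * edge_c[1] + w_hull * hull_c[1] + w_sym * sym_c[1]
    return cx, cy - height * bias
```

For a 100px-tall icon whose three estimates agree at (10, 10), the blend is a no-op and only the bias moves the point, giving (10.0, 6.5).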

The study data and raw trial recordings will be published as an open dataset once collection is complete. This will be the first public dataset specifically focused on perceptual centering judgments for icons.

A CSS property for optical centering.

Like font metrics (ascender, descender, baseline), icons should carry intrinsic centering metadata. Computed at build time, applied at runtime.

/* Future CSS */
.icon-button {
  optical-center: auto;
}

/* What the browser would do */
.icon-button {
  optical-center: 2.4px 0.1px; /* x y offset from geometric center */
}

Build-time pipeline

SVG source → Rasterize → DoG + compress → Edge/Hull/Symmetry → optical-center: dx dy
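Until a native `optical-center` property exists, a build step could bake the computed offset into today's CSS as a transform. A hypothetical emitter (this helper is my illustration, not part of the project's tooling):

```python
def css_offset_rule(selector, dx, dy):
    """Emit a per-icon offset as a transform fallback, since no
    `optical-center` property exists in CSS today."""
    return f"{selector} {{ transform: translate({dx:.2f}px, {dy:.2f}px); }}"

# One rule per icon, generated at build time from the (dx, dy) pair:
rule = css_offset_rule(".icon-play", 2.4, 0.1)
```

A transform keeps layout untouched (the bounding box still participates in flexbox centering), which is exactly the behavior a native property would want to preserve.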

Get notified when the dataset drops.

We will email you once when the open dataset and Phase 1 results are published. Nothing else.

No spam. One email when results are out. Unsubscribe anytime.
