A biologically-inspired algorithm that simulates retinal edge detection and cortical compression to find where icons look centered. Backed by human perception data from 1,000+ icons.
Community
Think optical centering should be in CSS? Leave a comment and your email to get notified when the open dataset is published.
The Problem
When you place an asymmetric icon inside a button using align-items: center, it looks wrong.
The visual weight of the shape doesn't match its bounding box. Designers nudge pixels manually. Every time.
Designers adjust centering by eye on every icon, every button, every component. There is no standard.
CSS centers the bounding box, not the visual mass. For a play icon, the geometric center is left of where your eyes expect it.
Font metrics have ascender, descender, baseline. Icons have nothing. There is no built-in way to describe where an icon "looks" centered.
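To see the mismatch numerically, here is a small sketch (illustrative only, not the project's code) that rasterizes a right-pointing play triangle and compares its bounding-box center, which is what `align-items: center` effectively uses, with its pixel centroid. The grid size and triangle vertices are arbitrary choices for the demonstration.

```python
import numpy as np

# Rasterize a right-pointing "play" triangle on a 100x100 grid.
# Vertices roughly at (20, 10), (20, 90), and the apex at (85, 50).
N = 100
y, x = np.mgrid[0:N, 0:N]
inside = (x >= 20) & (x <= 85) & (np.abs(y - 50) <= (85 - x) * (40 / 65))

ys, xs = np.nonzero(inside)
bbox_cx = (xs.min() + xs.max()) / 2   # what bounding-box centering uses
mass_cx = xs.mean()                   # where the visual mass actually sits

print(f"bounding-box center x: {bbox_cx:.1f}")
print(f"pixel centroid x:      {mass_cx:.1f}")
# The centroid lands left of the bbox center, so a box-centered play
# icon reads as shifted: the eye wants it nudged toward the apex.
```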
The Science
Our algorithm doesn't guess where things look centered. It simulates how your visual cortex actually processes shapes — from retinal edge detection to cortical compression. Each stage is grounded in neuroscience.
Your retina has ON-center/OFF-surround receptive fields that enhance edges and suppress uniform regions. We simulate this with a Difference of Gaussians (DoG) filter.
DoG = G(σ=1.0) − G(σ=1.6)

V1 cortex neurons respond with a compressive power law. Dense, high-contrast regions get compressed while edges gain relative emphasis. This is why you perceive contours more than fill.

w' = w^0.7

Your perceived center of a shape is dominated by its boundary, not its interior mass. We compute an edge-weighted centroid using a Sobel operator on the preprocessed weight map.

Edge centroid: 40% weight

The convex hull of visible pixels defines the shape's outer envelope. Its centroid captures the structural balance that quick glances perceive — the "at a glance" impression.

Hull centroid: 30% weight

The Lateral Occipital Complex detects symmetry at ~220ms. For symmetric shapes, the perceived center snaps to the symmetry axis intersection. We scan 36 angles to find the dominant axis.

Symmetry center: 30% weight

Humans consistently perceive the "center" of a shape as slightly above its geometric midpoint. We apply a 3.5% upward shift — a well-documented perceptual bias.

cy −= height × 0.035
Every icon passes through this pipeline at build time. The computed offset is a single (dx, dy) pair in pixels.
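As a concrete, heavily simplified sketch of such a pipeline: the version below applies DoG edge enhancement, the power-law compression, a Sobel edge-weighted centroid, and a convex-hull centroid to a grayscale alpha mask, then blends them and applies the upward bias. The function name, the hull-vertex-mean shortcut, and the omission of the 36-angle symmetry scan (its weight is folded back into the other two centroids) are simplifications of ours, not the project's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel
from scipy.spatial import ConvexHull

def perceptual_center_offset(mask, w_edge=0.4, w_hull=0.3):
    """Simplified perceptual-center estimate for a mask in [0, 1].

    Stages: DoG edge enhancement, power-law compression (w' = w**0.7),
    Sobel edge-weighted centroid, convex-hull centroid, 3.5% upward
    bias.  The symmetry-scan stage is omitted in this sketch.
    """
    # 1. Retinal DoG: difference of two Gaussian blurs (σ = 1.0, 1.6).
    dog = gaussian_filter(mask, 1.0) - gaussian_filter(mask, 1.6)
    w = np.clip(mask + np.abs(dog), 0, None)

    # 2. V1 compressive power law.
    w = w ** 0.7

    # 3. Edge-weighted centroid from Sobel gradient magnitude.
    grad = np.hypot(sobel(w, axis=0), sobel(w, axis=1))
    ys, xs = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
    edge_c = (np.average(xs, weights=grad), np.average(ys, weights=grad))

    # 4. Convex-hull centroid of visible pixels (vertex mean as a cheap
    #    stand-in for the true polygon centroid).
    pts = np.column_stack(np.nonzero(mask > 0.5)[::-1])  # (x, y) pairs
    hull_c = pts[ConvexHull(pts).vertices].mean(axis=0)

    # 5. Weighted blend (weights renormalized, symmetry omitted).
    total = w_edge + w_hull
    cx = (w_edge * edge_c[0] + w_hull * hull_c[0]) / total
    cy = (w_edge * edge_c[1] + w_hull * hull_c[1]) / total

    # 6. Upward perceptual bias: 3.5% of the shape's height.
    height = pts[:, 1].max() - pts[:, 1].min()
    cy -= height * 0.035

    # Nudge to apply after naive bbox centering so the perceived
    # center lands on the container center.
    bbox_cx = (pts[:, 0].min() + pts[:, 0].max()) / 2
    bbox_cy = (pts[:, 1].min() + pts[:, 1].max()) / 2
    return bbox_cx - cx, bbox_cy - cy  # (dx, dy) in pixels
```

Run on a right-pointing play triangle, the sketch returns a positive dx, i.e. a rightward nudge toward the apex, matching the intuition from the Problem section.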
The Research
We are running a perceptual centering study with real participants. Each person adjusts the position of a shape until it "looks centered" inside a container.
Method of Adjustment study. Participants positioned icons until they "looked centered." Data used to validate the biologically-inspired v2 pipeline (RMSE: 2.99px, r=0.585).
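For concreteness, the two fit statistics quoted above, RMSE and Pearson's r between model-predicted and human-adjusted offsets, can be computed like this. The arrays here are made-up stand-ins, not the study data.

```python
import numpy as np

# Hypothetical per-icon offsets in pixels (NOT the study data):
# what the model predicted vs. where participants placed each icon.
model = np.array([1.8, -0.5, 3.2, 0.0, 2.1, -1.4])
human = np.array([2.5, -1.1, 2.0, 0.8, 3.0, -0.2])

rmse = np.sqrt(np.mean((model - human) ** 2))  # typical error in px
r = np.corrcoef(model, human)[0, 1]            # linear agreement

print(f"RMSE: {rmse:.2f}px, r = {r:.3f}")
```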
Two-alternative forced-choice (2AFC) tests on real icons from Lucide, Feather, Heroicons, Bootstrap Icons, and Phosphor. The biologically-inspired v2 model was validated against human perceptual judgments. PSE = 0.745 (humans prefer 74.5% of model correction).
Method of Adjustment study: icons start near the model's predicted center. Participants nudge each icon until it looks perfectly centered. Measures per-icon model accuracy at pixel precision.
The model itself is a biologically-inspired engineering approximation — not a neural simulation, but a pipeline that captures the key perceptual mechanisms determining where a shape "looks" centered.
The study data and raw trial recordings will be published as an open dataset once collection is complete. This will be the first public dataset specifically focused on perceptual centering judgments for icons.
The Proposal
Like font metrics (ascender, descender, baseline), icons should carry intrinsic centering metadata. Computed at build time, applied at runtime.
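One possible shape for that metadata, sketched in Python: a build step writes each icon's (dx, dy) to JSON, and a runtime turns it into a CSS transform. The schema and offset values below are hypothetical illustrations, not a published format.

```python
import json

# Hypothetical build-time output: per-icon optical-centering metadata.
# The schema (icon name -> dx/dy in pixels) is illustrative only.
icons = {
    "play": {"dx": 1.5, "dy": -0.5},    # nudge right, slightly up
    "search": {"dx": 0.0, "dy": 0.3},
}

def to_css_transform(name):
    """Turn an icon's stored offset into the transform a runtime applies."""
    m = icons[name]
    return f"transform: translate({m['dx']}px, {m['dy']}px);"

metadata = json.dumps(icons, indent=2)   # what the build step would emit
print(metadata)
print(to_css_transform("play"))
```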
Stay Updated
We will email you once when the open dataset and Phase 1 results are published. Nothing else.
No spam. One email when results are out. Unsubscribe anytime.