You Can Only See One Thing At A Time

Artificial Noodles ·

Inspired by Attention on Wikipedia — specifically Eriksen & St. James’ zoom lens model (1986)

Built with Canvas 2D · Outfit Variable Font (100-900) · Spring Physics

Techniques Gaussian influence gradients · Underdamped second-order springs · Per-frame font measurement · Adaptive layout

Direction Typography as attention field — the cursor is the lens

Result Words compete for your gaze; you can only truly see one at a time

The Story

In 1986, psychologists Charles Eriksen and James St. James published a model of visual attention that retired the old “spotlight” metaphor. The spotlight said attention was binary — a region is either lit or dark. Eriksen’s zoom lens model said something more unsettling.

Attention, they argued, operates like a variable-power lens. You have a fixed pool of processing resources. Focus them narrowly and the thing you’re looking at gets dramatically enhanced — but everything outside the focus doesn’t just fade to background. It gets actively suppressed. Later brain imaging confirmed this: BOLD responses in visual cortex areas V1 through V4 measurably drop when the attention focus widens. Your brain dims the world to brighten the thing you’re looking at.

The uncomfortable truth: attention isn’t something you give. It’s something you take from everything else.


The Take

Words stacked vertically on a dark field. FOCUS, NOISE, BLINK on a widescreen monitor. Rotate your phone and GLAZE, FAINT, GHOST, DRIFT join them — the word count adapts to your screen, keeping each band in roughly 1:3 proportions. Every word fills the full viewport width. In neutral state they all look the same: warm dark gray, medium weight, equal height. Balanced. Undifferentiated.

Move the cursor over one and the zoom lens fires. The attended word inhales — heavier, taller, brighter, the letterform nearest your cursor stretching wider than the rest. The unattended words exhale — compressing, thinning, darkening toward the background. The transition isn’t a switch. It’s a gradient: strongest at the word’s center, falling off smoothly through the Gaussian curve, the springs overshooting and settling with a gooey, elastic feel.

Every word is self-referential. Attend to FOCUS and the others lose theirs. Attend to NOISE and it becomes your signal — the very thing you were filtering out now fills your vision. Attend to BLINK and you notice how quickly attention moves. Attend to GLAZE and you feel your eyes glazing past the rest. Each word names what happens to the other words when you look at it.

On a phone in portrait, six or seven words compete. On a widescreen monitor, three. More words means more suppression — more of the field dimmed to feed the one word you chose. The experience is harder on a tall screen. That’s the point.

You can never see them all at full intensity. The act of attending is the act of suppressing. This isn’t an illustration of the zoom lens model — it is the zoom lens model.


The Tech

Variable Font Weight as Visual Metaphor

Outfit spans weight 100 (hairline) to 900 (ultra black). The focused word renders at 900 — massive, commanding. Suppressed words drop to 150 — barely-there whispers of letterform. That 750-unit range is the entire visual vocabulary of the piece.

Font size recalculates every frame. Heavier glyphs are wider at the same point size, so the font size has to shrink to keep the word filling the viewport width. This produces a breathing effect: the text inhales (taller, heavier) when focused, exhales (shorter, thinner) when released. A measureText() call at a reference size gives the cap-height ratio; the target band height divided by that ratio gives the frame’s font size. Float precision throughout — no integer rounding — keeps the breathing continuous rather than stepping.
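
The sizing step above can be sketched as a pure function. Names are illustrative; in the real render loop, `capHeightAtRef` would come from something like `ctx.measureText(word).actualBoundingBoxAscent` at a fixed reference size:

```javascript
// Per-frame font sizing: cap height scales linearly with font size, so one
// measurement at a reference size is enough to solve for the frame's size.
function fontSizeForBand(bandHeightPx, capHeightAtRef, refSize = 100) {
  const capRatio = capHeightAtRef / refSize; // cap height per unit of font size
  return bandHeightPx / capRatio;            // floats throughout: no rounding
}
```

Because the result is never rounded to an integer, consecutive frames produce smoothly varying sizes and the breathing stays continuous.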

Per-Letter Horizontal Stretch

The letter nearest the cursor swells wider than its neighbors. A Gaussian centered on mouseX, with sigma at 12% of viewport width, computes a stretch factor per letter: up to 2.2x at the cursor, down to 0.7x at the edges. The factors are normalized to average 1.0, so the total word width stays constant — the stretch redistributes space within the word rather than changing its footprint. This makes the cursor feel like a loupe sliding across the letterforms.
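
A minimal sketch of that stretch computation, with illustrative names and signature:

```javascript
// Raw factors come from a Gaussian on the cursor's x position; dividing by
// the mean pins the average at exactly 1.0, preserving total word width.
function letterStretch(letterCenters, mouseX, viewportWidth, min = 0.7, max = 2.2) {
  const sigma = 0.12 * viewportWidth;
  const raw = letterCenters.map(x => {
    const d = x - mouseX;
    const g = Math.exp(-(d * d) / (2 * sigma * sigma)); // 1 at cursor, falls toward 0
    return min + (max - min) * g;                       // map into [0.7, 2.2]
  });
  const mean = raw.reduce((a, b) => a + b, 0) / raw.length;
  return raw.map(f => f / mean); // normalized: factors average 1.0
}
```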

Gaussian Influence Gradients

Each word’s influence is a Gaussian centered on its current band center: exp(−d² / (2σ²)) where d is the vertical distance from cursor to band center and σ is 18% of viewport height. The raw influences normalize to sum to 1.0 — a zero-sum system where one word’s gain is every other word’s loss.

Sigma controls the lens sharpness. Too wide and you can never fully focus. Too narrow and the transition is binary. 18% puts the crossover at the band boundaries: smooth but decisive.
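
The normalization step is what makes the system zero-sum. A sketch, with illustrative names:

```javascript
// Each band's raw Gaussian weight is divided by the total of all weights,
// so the resulting influences always sum to 1.0 across the words.
function influences(bandCenters, cursorY, viewportHeight) {
  const sigma = 0.18 * viewportHeight;
  const raw = bandCenters.map(c => {
    const d = cursorY - c;
    return Math.exp(-(d * d) / (2 * sigma * sigma));
  });
  const total = raw.reduce((a, b) => a + b, 0);
  return raw.map(r => r / total); // one word's gain is the others' loss
}
```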

Underdamped Second-Order Springs

Every animated property — band height, font weight, color, stretch intensity — runs on an independent spring (ω = 28 rad/s, ζ = 0.55). Underdamped means overshoot: move the cursor quickly and the departing word compresses past its target before bouncing back, the arriving word swells past its peak before settling. That overshoot is the gooey, viscous feel. The springs have inertia. The system has weight.

The update is the standard second-order ODE: acceleration = ω² × (target - position) - 2ωζ × velocity. Delta time capped at 1/30s prevents spring explosion on tab-switch resume.
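
One way to integrate that ODE per frame is a semi-implicit Euler step; the piece only specifies the equation and constants, so the scheme and names here are assumptions:

```javascript
// Spring step with the constants from the piece: omega = 28 rad/s, zeta = 0.55.
function springStep(pos, vel, target, dt, omega = 28, zeta = 0.55) {
  dt = Math.min(dt, 1 / 30); // cap dt so a backgrounded tab can't explode the spring
  const accel = omega * omega * (target - pos) - 2 * omega * zeta * vel;
  vel += accel * dt;         // integrate velocity first (semi-implicit)
  pos += vel * dt;
  return [pos, vel];
}
```

With ζ = 0.55 < 1 the spring is underdamped: stepping from 0 toward a target of 1 overshoots past 1 before settling, which is the bounce described above.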

Adaptive Word Count

Word count adapts to viewport aspect ratio: clamp(round(3 × H / W), 3, 7). A 16:9 desktop gets 3. A phone in portrait gets 6. The layout parameters scale: focused height = 0.35 + 0.6/N, suppressed height fills the remainder equally across N−1 words. At 7 words, each suppressed band gets ~9% of the viewport — thin, but visible enough to read when attention arrives.
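
The count and layout formulas are small enough to sketch directly (illustrative names):

```javascript
// Word count from aspect ratio: clamp(round(3 * H / W), 3, 7).
function wordCount(viewportWidth, viewportHeight) {
  return Math.min(7, Math.max(3, Math.round(3 * viewportHeight / viewportWidth)));
}

// Band heights as fractions of viewport height: the focused band gets
// 0.35 + 0.6/N; the remainder splits equally across the other N-1 words.
function bandHeights(n) {
  const focused = 0.35 + 0.6 / n;
  const suppressed = (1 - focused) / (n - 1);
  return { focused, suppressed };
}
```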

On resize (or phone rotation), the word pool rebuilds and the entry animation replays. The pool is ordered by conceptual weight: FOCUS → NOISE → BLINK → GLAZE → FAINT → GHOST → DRIFT. Taller screens don’t just get more words — they get deeper into the vocabulary of inattention.


Experience: The Lens


This blog post was AI-generated with Claude Code. Authored by Artificial Noodles.