The Matryoshka

Artificial Noodles

Inspired by Mandelbox on Wikipedia

Built with Pure WebGL2 · GLSL Fragment Shader · Distance Estimation · Raymarching

Techniques Mandelbox Iteration · Box Fold + Sphere Fold · Fold-Count Coloring · Cosine Palettes · Spring-Damped Camera

Direction Fractal as museum — zoom reveals rooms containing different mathematical discoveries

Result An interactive fractal you navigate by clicking, discovering classical fractals hiding inside one equation

The Story

In 2010, Tom Lowe asked a simple question: what if you applied the Mandelbrot escape-time algorithm not to complex numbers but to 3D space, replacing the squaring operation with two geometric folds? The result was the Mandelbox — a fractal defined by just a box fold and a sphere fold, iterated with a single scale parameter.

At scale -1.5, something extraordinary happens. The negative scale introduces a 180-degree rotation on every iteration, and the interplay between the two folds produces an exceptionally rich structure. Explorers zooming into the fractal at this parameter discovered visual approximations of fractals that had been discovered independently over a century of mathematics: Menger sponges from 1926, Sierpinski carpets from 1916, Koch snowflakes from 1904, structures resembling Poincaré’s hyperbolic geometry from the 1880s.

One equation. Two folding operations. Every classical fractal hiding inside.


The Take

The experience treats the Mandelbox as a museum. You start at the exterior — a luminous triangular form floating in deep indigo, its surface alive with purple, cyan, and magenta from the fold-count coloring. Click anywhere to zoom in. Each zoom level reveals a different kind of structure, identified with a museum-style label: “MENGER SPONGE — 1926, Karl Menger’s cube of infinite holes.” Click deeper. “SIERPINSKI CARPET — 1916.” Deeper still. The geometry transforms at every scale.

The palette shifts as you descend. The exterior is indigo and violet. Mid-range reveals teal and amber. Deep zoom blooms into coral and magenta. At extreme magnification, numerical precision itself starts to dissolve — grain intensifies, colors fragment, the machine reaches its limit. The final label reads: “BEYOND RESOLUTION — The formula is infinite. The machine is not.”


The Tech

Mandelbox Distance Estimation

The core is a distance estimator that evaluates the Mandelbox at any point in 3D space. For each point, we iterate 12 times (10 on mobile), applying the box fold (z = clamp(z, -1, 1) * 2 - z) and the sphere fold (a conditional inversion based on the squared radius relative to two thresholds, 0.25 and 1.0). Each iteration then multiplies by the scale parameter of -1.5 and adds the original position as a constant offset.

The distance estimate uses the logarithmic formula 0.5 * log(r) * r / dr, where r is the final orbit radius and dr is the accumulated derivative — tracked by multiplying by abs(scale) and adding 1.0 each iteration. Both r and dr are clamped to avoid log(0) and division by zero, which was critical for preventing NaN propagation at extreme zoom levels.
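The two folds and the clamped logarithmic estimate can be sketched as follows. This is a JavaScript translation of the GLSL logic described above, for illustration only; the function and constant names are assumptions, not the project's actual identifiers.

```javascript
const SCALE = -1.5;   // Mandelbox scale parameter
const MIN_R2 = 0.25;  // inner sphere-fold threshold (squared radius)
const FIX_R2 = 1.0;   // outer sphere-fold threshold (squared radius)
const ITERATIONS = 12;

function mandelboxDE(px, py, pz) {
  let x = px, y = py, z = pz; // orbit point
  let dr = 1.0;               // accumulated derivative
  for (let i = 0; i < ITERATIONS; i++) {
    // Box fold: reflect each component about +/-1.
    x = Math.min(Math.max(x, -1), 1) * 2 - x;
    y = Math.min(Math.max(y, -1), 1) * 2 - y;
    z = Math.min(Math.max(z, -1), 1) * 2 - z;
    // Sphere fold: invert inside MIN_R2, scale up below FIX_R2.
    const r2 = x * x + y * y + z * z;
    let f = 1.0;
    if (r2 < MIN_R2)      f = FIX_R2 / MIN_R2; // inner fold
    else if (r2 < FIX_R2) f = FIX_R2 / r2;     // outer fold
    x *= f; y *= f; z *= f;
    dr *= f;
    // Scale and add the original point as a constant offset.
    x = x * SCALE + px;
    y = y * SCALE + py;
    z = z * SCALE + pz;
    dr = dr * Math.abs(SCALE) + 1.0;
  }
  // Logarithmic distance estimate, clamped to stay NaN-safe.
  const r = Math.max(Math.sqrt(x * x + y * y + z * z), 1e-8);
  return 0.5 * Math.log(r) * r / Math.max(Math.abs(dr), 1e-8);
}
```

Note how the two `Math.max(..., 1e-8)` clamps in the last two lines keep `log` and the division well-defined even when the orbit collapses to the origin.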

Fold-Count Coloring

Rather than simple orbit traps, the coloring tracks how many folds activate per iteration. Each box fold component that exceeds the threshold adds 1.0 to the count. Inner sphere folds (below the minimum radius) add 5.0 — these are rare, high-energy events that mark the deepest structural features. Outer sphere folds add 2.0. This accumulated fold count, normalized to the maximum possible, drives a cosine palette with phase offsets tuned to suppress green and emphasize cyan, magenta, and violet.
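A per-iteration tally with those weights might look like the sketch below. This only shows the counting; in the real shader the folds themselves happen in the distance estimator, and `foldWeight` is a hypothetical name.

```javascript
// Tally fold activations for one iteration, using the weights above:
// +1 per box-fold component, +5 for an inner sphere fold, +2 for an
// outer one. Thresholds mirror the DE (box limit 1.0; squared-radius
// sphere thresholds 0.25 and 1.0).
function foldWeight(x, y, z) {
  let count = 0;
  // Box fold: each component outside [-1, 1] actually folds.
  if (Math.abs(x) > 1) count += 1;
  if (Math.abs(y) > 1) count += 1;
  if (Math.abs(z) > 1) count += 1;
  // Sphere fold: the rare inner fold is weighted heavily.
  const r2 = x * x + y * y + z * z;
  if (r2 < 0.25)     count += 5;
  else if (r2 < 1.0) count += 2;
  return count;
}
```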

A secondary min-distance orbit trap provides detail variation: min(dot(z, z)) across all iterations captures how close the orbit passes to the origin. The two signals — fold count and min trap — blend through separate cosine palettes and combine for the final surface color. A zoom-dependent phase shift ensures the palette evolves as you descend, so each depth level has a distinct color character.
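The cosine palettes follow the familiar a + b·cos(2π(c·t + d)) form. A minimal sketch, with placeholder coefficients rather than the project's tuned phase offsets:

```javascript
// Cosine palette: each RGB channel is a phase-shifted cosine of the
// driving signal t (here, the normalized fold count).
function cosinePalette(t, a, b, c, d) {
  return a.map((ai, i) =>
    ai + b[i] * Math.cos(2 * Math.PI * (c[i] * t + d[i])));
}

// Example: a violet/cyan-leaning palette (illustrative coefficients).
const color = cosinePalette(
  0.3,                 // normalized fold count in [0, 1]
  [0.5, 0.5, 0.5],     // base
  [0.5, 0.5, 0.5],     // amplitude
  [1.0, 1.0, 1.0],     // frequency
  [0.70, 0.15, 0.20]); // phase offsets -- shifting these moves the hues
```

Shifting the phase vector `d` with zoom depth is one way to get the depth-dependent color character described above.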

Raymarching at 80 Steps

The fragment shader casts one ray per pixel. From the camera origin, it marches along the ray direction in steps determined by the distance estimator — each step advances by d * 0.7 (the 0.7 safety factor prevents overshooting thin features). The hit threshold scales inversely with zoom: max(0.00005, 0.0003 / zoom), so precision increases as you magnify.
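The marching loop, with the 0.7 safety factor and the zoom-scaled hit threshold, can be sketched like this. A unit-sphere DE stands in for the Mandelbox estimator to keep the example self-contained; names and the far-plane bailout value are assumptions.

```javascript
// Stand-in distance estimator: a unit sphere at the origin.
function sphereDE(x, y, z) {
  return Math.sqrt(x * x + y * y + z * z) - 1.0;
}

// March from origin (ox,oy,oz) along unit direction (dx,dy,dz).
function raymarch(ox, oy, oz, dx, dy, dz, zoom, maxSteps = 80) {
  const eps = Math.max(0.00005, 0.0003 / zoom); // zoom-scaled hit threshold
  let t = 0;
  for (let i = 0; i < maxSteps; i++) {
    const d = sphereDE(ox + dx * t, oy + dy * t, oz + dz * t);
    if (d < eps) return { hit: true, t, steps: i };
    t += d * 0.7;       // under-step to avoid skipping thin features
    if (t > 100) break; // far-plane bailout
  }
  return { hit: false, t, steps: maxSteps };
}
```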

Surface normals use central finite differences with six DE evaluations per hit point — an expensive but necessary computation for accurate lighting. Three directional lights (key, fill, camera-aligned) provide dimensionality, with ambient occlusion derived from the step count — surfaces that required many steps to reach sit in geometric crevices.
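The six-evaluation central-difference normal looks roughly like this (again with the unit-sphere DE as a stand-in, and `estimateNormal` as an illustrative name):

```javascript
// Stand-in distance estimator: a unit sphere at the origin.
function sphereDE(x, y, z) {
  return Math.sqrt(x * x + y * y + z * z) - 1.0;
}

// Gradient of the distance field via central differences:
// two DE evaluations per axis, six in total.
function estimateNormal(x, y, z, h = 1e-4) {
  const nx = sphereDE(x + h, y, z) - sphereDE(x - h, y, z);
  const ny = sphereDE(x, y + h, z) - sphereDE(x, y - h, z);
  const nz = sphereDE(x, y, z + h) - sphereDE(x, y, z - h);
  const len = Math.hypot(nx, ny, nz) || 1; // guard zero-length gradient
  return [nx / len, ny / len, nz / len];
}
```

The step-count ambient occlusion mentioned above can then be as simple as darkening by `steps / maxSteps` from the march result.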

NaN-Safe Deep Zoom

The critical technical challenge was preventing white-screen blowout at high zoom. When the camera enters the fractal interior, the distance estimator can return zero or negative values, and log(0) produces NaN. In GLSL, clamp(NaN, 0, 1) is undefined — it can produce any value including 1.0, turning the entire screen white.

The fix has three layers: (1) NaN-safe DE with max(length(z), 1e-8) and max(abs(dr), 1e-8) preventing the pathological inputs; (2) a start-offset that detects when the camera is inside the fractal (DE(eye) < 0.002) and skips the ray forward past the immediate surface; (3) a GLSL NaN check (x != x) at the end of the shader that replaces any NaN pixels with the background color.
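Layer (3) relies on the one reliable property of NaN: it is not equal to itself. Expressed in JavaScript (the GLSL version uses the same `x != x` trick; `guardColor` is a hypothetical helper name):

```javascript
// Replace any NaN channel with the background color. NaN !== NaN is
// the only comparison guaranteed to detect it.
function guardColor(rgb, background) {
  return rgb.map((c, i) => (c !== c ? background[i] : c));
}
```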

Spring-Damped Camera

The camera uses an orbit model (azimuth/elevation/distance) with exponential spring smoothing: blend = 1 - exp(-4.0 * dt). Click-to-zoom reduces the orbit distance by a factor of 0.62 and shifts the target toward the click direction. At high zoom levels, the target offset diminishes (0.15 / (1 + zoom * 0.06)) to prevent the camera from diving into featureless surface patches — keeping it in the fractal’s corridors where structural detail is richest.
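The smoothing formula is frame-rate independent: deriving the blend factor from dt gives a constant half-life regardless of how often frames arrive. A minimal sketch (`springStep` is an illustrative name):

```javascript
// Exponential spring smoothing: each frame moves the current value a
// dt-dependent fraction of the remaining distance to the target.
function springStep(current, target, dt, stiffness = 4.0) {
  const blend = 1 - Math.exp(-stiffness * dt);
  return current + (target - current) * blend;
}

// Example: repeated 16 ms steps converge smoothly toward the target.
let azimuth = 0;
for (let i = 0; i < 120; i++) azimuth = springStep(azimuth, 1.0, 0.016);
```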

Auto-rotation at 0.08 rad/s activates after 4 seconds of inactivity, and the scale parameter breathes slowly around -1.5 (amplitude ±0.003) to keep the surface subtly alive. Double-click resets the camera to the initial view.

Mobile Adaptation

On screens under 768px: max steps reduce to 64 (from 80), iterations to 10 (from 12), and the device pixel ratio caps at 1.5. Touch events support single-finger drag for orbit, pinch for zoom, and tap for click-to-zoom. The experience remains fully interactive on mobile with these reduced parameters.
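The quality selection maps directly to those numbers; a hypothetical sketch of that configuration logic:

```javascript
// Pick render-quality parameters from screen width and device pixel
// ratio, matching the mobile thresholds described above.
function qualityFor(width, dpr) {
  const mobile = width < 768;
  return {
    maxSteps:   mobile ? 64 : 80, // raymarch step budget
    iterations: mobile ? 10 : 12, // Mandelbox fold iterations
    dpr:        mobile ? Math.min(dpr, 1.5) : dpr, // cap resolution on mobile
  };
}
```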


Experience: The Matryoshka


This blog post was AI-generated with Claude Code. Authored by Artificial Noodles.