The Crosstalk
The story behind The Crosstalk
Inspired by Crosstalk on Wikipedia
Built with Three.js WebGPURenderer · SSGINode · TRAANode · TSL (Three Shading Language)
Techniques Screen-Space Global Illumination · Temporal Reprojection Anti-Aliasing · MRT Pass · Diffuse Color Bleeding
Direction Nothing stays in its channel
Result A room where white was never white
The Story
Crosstalk is the fundamental limit of isolation. In telecommunications, it means a signal transmitted on one wire bleeds into the adjacent wire — not through any defect, but through the physics of electromagnetic coupling. The closer the wires, the stronger the leakage. Engineers measure crosstalk in decibels of isolation, striving for perfect separation. They never achieve it. At some level, every channel contaminates every other channel.
The principle extends beyond electronics. In optics, light from one surface bounces to another, carrying color with it. A red wall tints the white floor beside it pink. A blue ceiling washes everything below in cool light. This is indirect illumination — the second, third, fourth bounce of photons through a scene. Direct lighting is the signal. Indirect lighting is the crosstalk. And in any enclosed space, the crosstalk is everywhere.
Computer graphics spent decades chasing this effect. Global illumination algorithms — radiosity, path tracing, photon mapping — simulate light transport to produce the subtle color bleeding that makes rendered scenes feel real. Until recently, this was exclusively an offline process: minutes or hours per frame. Screen-Space Global Illumination brings it to real time.
The Take
A white room. Three monoliths — red, blue, yellow — standing on a white floor between white walls. At first glance: a minimal gallery, clean and colorless except for the objects.
Then you notice. The floor near the red monolith isn’t white. It’s faintly pink. The wall behind the blue one carries a cool wash. Between the red and blue, the floor is neither — it’s violet, a color that exists on no surface in the room. It exists only in the light that bounced between them.
Drag a monolith. The entire room’s color signature shifts in real time. Push the red and blue together and the violet intensifies — the floor between them saturates with mixed light. Push all three into a cluster and the room drowns: every white surface becomes a canvas for secondary color, and the original white is gone entirely.
Pull them apart. The room recovers — slowly, as the light paths lengthen and weaken. But even at maximum separation, traces remain in the corners. Perfect isolation is physically impossible.
The Tech
WebGPU + Three.js TSL Pipeline
This is the first experience built on WebGPU, using Three.js’s WebGPURenderer with the node-based TSL (Three Shading Language) material system. The renderer initializes asynchronously (await renderer.init()) and provides the compute infrastructure that SSGI and TRAA require. The entire post-processing pipeline is expressed as a node graph rather than a chain of shader passes.
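The wiring can be sketched roughly as below. This is an illustrative sketch, not the project’s source: the `ssgi`/`traa` import paths and argument orders are best-effort assumptions based on the Three.js addons layout and may differ between releases, and `scene` and `camera` are assumed to already exist.

```javascript
// Hedged sketch of the node-graph pipeline — names and signatures are
// assumptions, not verified against the r182 source.
import * as THREE from 'three/webgpu';
import { pass, mrt, output, diffuseColor, normalView, velocity, directionToColor } from 'three/tsl';
import { ssgi } from 'three/addons/tsl/display/SSGINode.js';
import { traa } from 'three/addons/tsl/display/TRAANode.js';

const renderer = new THREE.WebGPURenderer({ antialias: false });
await renderer.init(); // WebGPU device setup is asynchronous

// One scene render, four outputs via MRT (detailed in the next section)
const scenePass = pass(scene, camera);
scenePass.setMRT(mrt({
  output,                               // beauty
  diffuseColor,                         // albedo
  normal: directionToColor(normalView), // view-space normals, [-1,1] → [0,1]
  velocity                              // per-pixel motion vectors
}));

const beauty = scenePass.getTextureNode('output');
const depth = scenePass.getTextureNode('depth');
// argument order here is a guess from the node's described inputs
const giNode = ssgi(beauty, depth, scenePass.getTextureNode('normal'), camera);

// composite: beauty * AO + diffuse * GI, then temporal accumulation
const composite = beauty.mul(giNode.a)
  .add(scenePass.getTextureNode('diffuseColor').mul(giNode.rgb));
const postProcessing = new THREE.PostProcessing(renderer);
postProcessing.outputNode = traa(composite, depth, scenePass.getTextureNode('velocity'), camera);
```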
Multi-Render Target (MRT) Pass
A single scene render outputs four channels simultaneously via MRT: the beauty pass (output), diffuse albedo (diffuseColor), view-space normals encoded as color (directionToColor(normalView)), and per-pixel velocity vectors (velocity). This is the foundation — every downstream effect reads from these textures rather than re-rendering the scene.
The normal and diffuse textures are stored as UnsignedByteType (8-bit per channel) for bandwidth optimization. View-space normals are encoded by mapping the [-1, 1] direction to [0, 1] color space, then decoded back to directions via colorToDirection() when SSGI samples them.
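The packing is simple enough to show in plain JavaScript. The function names mirror `directionToColor`/`colorToDirection` from the post, but this is an illustrative re-implementation, not the TSL source:

```javascript
// Map a unit direction's [-1, 1] components into [0, 1] so they survive
// storage in an 8-bit color channel, and back again.
function directionToColor(d) {
  return d.map(c => c * 0.5 + 0.5);
}
function colorToDirection(c) {
  // undo the mapping, then renormalize to counter 8-bit quantization
  const d = c.map(v => v * 2 - 1);
  const len = Math.hypot(...d);
  return d.map(v => v / len);
}

const n = [0, 0, 1];                 // a view-space normal facing the camera
const packed = directionToColor(n);  // → [0.5, 0.5, 1]
const restored = colorToDirection(packed);
```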
SSGI (Screen-Space Global Illumination)
The SSGINode from Three.js r182 approximates one bounce of indirect lighting using only the screen-space buffers — no scene geometry queries, no ray tracing against a BVH. For each pixel, it marches along hemisphere slices in screen space, sampling the beauty and depth buffers to find nearby surfaces. When it finds a visible surface, it uses that surface’s diffuse color as indirect illumination for the current pixel, weighted by the cosine of the angle and distance attenuation.
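The march can be caricatured in one dimension: step outward from a pixel, and wherever a neighboring sample sits closer to the camera (above the horizon), take its color as indirect light, weighted by a cosine term and distance falloff. The buffer layout and weighting below are illustrative, not the SSGINode internals:

```javascript
// 1-D toy of the screen-space march: `colors` and `depths` stand in for
// the beauty and depth buffers; smaller depth = closer to the camera.
function marchIndirect(px, colors, depths, stepCount, radius) {
  const gi = [0, 0, 0];
  let weightSum = 0;
  for (const dir of [-1, 1]) {               // two directions per slice
    for (let s = 1; s <= stepCount; s++) {
      const x = px + dir * Math.round((s / stepCount) * radius);
      if (x < 0 || x >= colors.length) break;
      const heightDiff = depths[px] - depths[x]; // positive → sample is closer
      if (heightDiff <= 0) continue;             // below the horizon, no bounce
      const dist = Math.abs(x - px);
      const cosine = heightDiff / Math.hypot(heightDiff, dist); // elevation angle
      const atten = 1 / (1 + dist);                             // distance falloff
      const w = cosine * atten;
      for (let c = 0; c < 3; c++) gi[c] += colors[x][c] * w;
      weightSum += w;
    }
  }
  return weightSum > 0 ? gi.map(v => v / weightSum) : gi;
}

// toy buffers: a red surface to the right of a white pixel, nearer the camera
const colors = [[1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 0, 0], [1, 0, 0]];
const depths = [1.0, 1.0, 1.0, 0.5, 0.5];
const bleed = marchIndirect(1, colors, depths, 4, 3);
// the white pixel's indirect term picks up red from the nearer monolith samples
```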
The output is a 4-channel node: RGB contains the indirect illumination color (the “global illumination”), alpha contains ambient occlusion. The final composite multiplies the beauty by AO (darkening crevices) and adds the diffuse color multiplied by GI (adding indirect color bleeding).
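That composite is a one-liner per pixel. A plain-JS sketch, with the SSGI node’s RGB as `gi` and its alpha as `ao` (channel layout as described above; the values are made up for illustration):

```javascript
// beauty * AO darkens crevices; diffuse * GI adds bounced color
function composite(beauty, diffuse, gi, ao) {
  return beauty.map((b, i) => b * ao + diffuse[i] * gi[i]);
}

// a white floor pixel beside the red monolith: red indirect light tints it
const lit = composite([0.8, 0.8, 0.8], [1, 1, 1], [0.3, 0.0, 0.0], 0.9);
// red channel now exceeds green and blue — the pixel reads as pink
```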
Key parameters: giIntensity: 30 (3x the default — cranked for dramatic color bleeding), radius: 20 (wide sampling for long-range bleeding), sliceCount: 2 with stepCount: 16 (medium-high quality, 64 samples per pixel per frame).
TRAA (Temporal Reprojection Anti-Aliasing)
TRAANode uses the velocity buffer to reproject the previous frame’s result onto the current frame. Each frame is rendered with a sub-pixel jitter offset on the camera projection matrix; over time, these jittered samples accumulate into a super-sampled result. The velocity buffer tells TRAA where each pixel moved since the last frame, so it can correctly align the history.
This serves two purposes: it eliminates aliasing artifacts on the monolith edges and room geometry, and it temporally denoises the SSGI output (which is inherently noisy at 64 samples per pixel). The combination of SSGI’s spatial sampling with TRAA’s temporal accumulation produces clean, stable indirect illumination at real-time frame rates.
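The two ingredients — sub-pixel jitter and velocity-based history blending — can be sketched in plain JavaScript. The Halton sequence is an illustrative choice of low-discrepancy jitter, and real TRAA additionally clamps or clips the history against the current neighborhood to reject stale samples:

```javascript
// Low-discrepancy Halton sequence: well-distributed sub-pixel offsets
function halton(index, base) {
  let f = 1, r = 0;
  while (index > 0) {
    f /= base;
    r += f * (index % base);
    index = Math.floor(index / base);
  }
  return r;
}
// jitter for frame n, centered on zero, applied to the projection matrix
const jitter = n => [halton(n, 2) - 0.5, halton(n, 3) - 0.5];

// Reproject: look up the history where this pixel was last frame
// (uv minus velocity), then exponentially blend in the current sample.
function resolve(currentColor, historyAt, uv, velocity, alpha = 0.1) {
  const prevUV = [uv[0] - velocity[0], uv[1] - velocity[1]];
  const history = historyAt(prevUV);
  return currentColor.map((c, i) => history[i] * (1 - alpha) + c * alpha);
}
```

With `alpha = 0.1`, each frame contributes a tenth of the result, so noise in the SSGI term averages out over roughly the last ten frames.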
Drag Interaction
Raycasting against the monolith meshes detects click/tap targets. On drag, a secondary raycast against the floor plane (y=0) determines the cursor’s 3D position, with an offset to prevent the monolith from snapping to the cursor center. OrbitControls are disabled during drag and re-enabled on release. Auto-rotation provides gentle camera movement when idle, showcasing the SSGI from different angles.
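The floor hit test reduces to a ray–plane intersection against y = 0. In the app this is handled by the three.js Raycaster; the pure math looks like this, with the grab-offset idea noted in a comment:

```javascript
// Intersect a pointer ray (origin + t * dir) with the plane y = 0.
function intersectFloor(origin, dir) {
  if (Math.abs(dir[1]) < 1e-8) return null;   // ray parallel to the floor
  const t = -origin[1] / dir[1];              // solve origin.y + t * dir.y = 0
  if (t < 0) return null;                     // plane is behind the ray origin
  return [origin[0] + t * dir[0], 0, origin[2] + t * dir[2]];
}

// camera above the room looking straight down
const hit = intersectFloor([2, 5, 3], [0, -1, 0]); // → [2, 0, 3]
// on pointer-down, store grabOffset = monolith.position - hit; while
// dragging, target = hit + grabOffset, so the monolith never snaps to
// the cursor center
```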
This blog post was AI-generated with Claude Code. Authored by Artificial Noodles.