I’m Cullen Webber, a creative full-stack developer based in Perth, Australia, with a passion for graphics programming and crafting immersive experiences on the web.
This tutorial walks you through creating a fluid X-ray effect in Three.js, leveraging a render pipeline powered by TSL (Three.js Shading Language) and WebGPU.
A WebGL version is also available in the WebGL branch of the GitHub repository (the bloom is quite different there).
Breaking Down the Render Pipeline
This effect breaks down into five parts. It starts with a canvas-drawn mouse trail, which feeds into a ping-pong fluid simulation that diffuses it. Alongside this, two instanced Three.js scenes, one solid and one X-ray, are rendered to separate textures before a final post-processing pass composites and stylizes the result.

Creating the Mouse Trail Canvas
The pipeline begins with a 2D canvas producing a simple black-on-white circular mask. This is then wrapped in a Three.js CanvasTexture so the fluid simulation in the next step can sample it as a texture every frame.
export default class MouseTrail { ...
  #createCanvas(width, height) {
    this.canvas = document.createElement("canvas");
    this.canvas.width = width;
    this.canvas.height = height;
    this.ctx = this.canvas.getContext("2d");
    this.lineWidth = Math.max(width * 0.2, 100);
    this.ctx.fillStyle = "white";
    this.ctx.fillRect(0, 0, width, height);
  }
  #createTexture() {
    this.texture = new THREE.CanvasTexture(this.canvas);
    this.texture.minFilter = THREE.LinearFilter;
    this.texture.magFilter = THREE.LinearFilter;
    this.texture.generateMipmaps = false;
  }
  // ...
}
Updating the Trail
Every frame, the trail smoothly follows the cursor (using linear interpolation), preventing jagged lines in the fluid simulation. When the cursor stops, the trail fades out, letting the solid scene restore itself. The draw method simply clears the canvas and strokes a single thick line that appears with movement and fades when idle.
export default class MouseTrail { ...
  update(mouseX, mouseY) {
    const targetX = mouseX * this.canvas.width;
    const targetY = mouseY * this.canvas.height;
    if (this.currentX === null) {
      this.currentX = targetX;
      this.currentY = targetY;
      this.lastX = targetX;
      this.lastY = targetY;
      return;
    }
    this.#lerp(targetX, targetY);
    this.#updateOpacity();
    this.#draw();
    this.lastX = this.currentX;
    this.lastY = this.currentY;
    this.texture.needsUpdate = true;
  }
  #draw() {
    const { canvas, ctx, lineWidth } = this;
    ctx.fillStyle = "white";
    ctx.fillRect(0, 0, canvas.width, canvas.height);
    if (this.opacity > 0.01) {
      ctx.beginPath();
      ctx.moveTo(this.lastX, this.lastY);
      ctx.lineTo(this.currentX, this.currentY);
      ctx.lineCap = "round";
      ctx.lineWidth = lineWidth;
      ctx.strokeStyle = `rgba(0, 0, 0, ${this.opacity})`;
      ctx.stroke();
    }
  }
  // ...
}
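The #lerp and #updateOpacity helpers are elided above. Here’s a minimal sketch of what they could look like; the smoothing factor and fade speeds are assumptions, not the project’s actual values.
export default class MouseTrail { ...
  #lerp(targetX, targetY) {
    // Ease the current point toward the cursor; 0.1 is an assumed smoothing factor
    this.currentX += (targetX - this.currentX) * 0.1;
    this.currentY += (targetY - this.currentY) * 0.1;
  }
  #updateOpacity() {
    // Fade the stroke in while the cursor is moving, and out once it stops
    const moving = Math.hypot(this.currentX - this.lastX, this.currentY - this.lastY) > 0.5;
    const target = moving ? 1 : 0;
    this.opacity += (target - this.opacity) * (moving ? 0.3 : 0.05);
  }
  // ...
}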
Transforming the Mouse Trail into a Fluid
The fluid simulation takes the mouse trail canvas as input and transforms it into a dynamic fluid effect. Each frame, the trail is diffused outward, modulated with FBM (fractional Brownian motion) noise, and gradually faded back to white.
Implementing a Feedback Loop with Ping-Pong Rendering
This uses a technique called ping-pong rendering. Two render targets are maintained, and each frame one is read from while the other is written to, then they are swapped. The pair is necessary because the GPU can’t read and write the same texture in a single pass. Target A holds the previous frame’s result, the shader samples it and writes to Target B, then they trade places and the cycle continues.
export default class FluidSim { ...
  #createRenderTargets() {
    const opts = {
      minFilter: THREE.LinearFilter,
      magFilter: THREE.LinearFilter,
      depthBuffer: false,
      stencilBuffer: false,
    };
    this.targetA = new THREE.RenderTarget(this.width, this.height, opts);
    this.targetB = new THREE.RenderTarget(this.width, this.height, opts);
    this.prevNode = texture(this.targetA.texture);
    this.maskNode = texture(this.targetA.texture);
  }
  #createFBOScene() {
    this.fboScene = new THREE.Scene();
    this.fboCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, -1, 1);
    this.inputNode = texture(new THREE.Texture());
    const material = new MeshBasicNodeMaterial();
    material.colorNode = this.#createFluidShader();
    const geo = new THREE.PlaneGeometry(2, 2);
    // Flip the geometry UVs on Y so render target read-back is self-consistent in WebGPU
    const uvAttr = geo.attributes.uv;
    for (let i = 0; i < uvAttr.count; i++) {
      uvAttr.setY(i, 1.0 - uvAttr.getY(i));
    }
    this.fboQuad = new THREE.Mesh(geo, material);
    this.fboScene.add(this.fboQuad);
  }
  update(renderer, trailTexture) {
    this.prevNode.value = this.targetA.texture;
    this.inputNode.value = trailTexture;
    renderer.setRenderTarget(this.targetB);
    renderer.render(this.fboScene, this.fboCamera);
    renderer.setRenderTarget(null);
    // Update the mask to read from the just-rendered target
    this.maskNode.value = this.targetB.texture;
    // Swap
    const temp = this.targetA;
    this.targetA = this.targetB;
    this.targetB = temp;
  }
  // ...
}
The prevNode and maskNode are TSL texture nodes that act as the bridge between this simulation and the rest of the pipeline: prevNode is what the shader samples during the fluid pass, and maskNode is what the post-processing compositor reads downstream.
The simulation runs in its own scene with an orthographic camera and a fullscreen quad, so every pixel in the render target gets processed by the fluid shader.
Each frame, the update method sets prevNode to the last rendered frame, passes in the current mouse trail texture, renders the fluid shader to the other target, points maskNode at the result, and swaps.
Building the Fluid Shader
The shader samples FBM noise to generate a small UV offset per pixel, giving the fluid a turbulent, uneven look. Without it, the fluid spreads evenly and produces a flat blur. The noise runs at high frequency across four octaves, then is scaled down just enough to introduce subtle movement without breaking up the texture.
#createFluidShader() { ...
  const aspect = this.height / this.width;
  const aspectVec = this.width < this.height ? vec2(1.0, 1.0 / aspect) : vec2(aspect, 1.0);
  return Fn(() => { ...
    const uvCoord = uv();
    const disp = mul(mul(fbm(mul(uvCoord, 20.0), float(4)), aspectVec), 0.01);
    // ...
  }
}
The aspectVec adjusts for UV coordinates being normalized from 0 to 1, ensuring the displacement doesn’t stretch on non-square viewports.
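The fbm helper itself isn’t defined in these excerpts. As a rough sketch of the idea, here’s a simplified four-octave FBM in TSL built on mx_noise_float, with the octave count hardcoded rather than taken as a parameter the way the call above suggests; the noise basis and amplitudes are assumptions and may differ from the project’s helper.
import { Fn, vec3, mx_noise_float } from "three/tsl";

// Simplified FBM sketch: four layers of MaterialX noise, each octave doubling
// the frequency and halving the amplitude before the layers are summed.
const fbm = Fn(([p]) => {
  const n1 = mx_noise_float(vec3(p, 0.0)).mul(0.5);
  const n2 = mx_noise_float(vec3(p.mul(2.0), 0.0)).mul(0.25);
  const n3 = mx_noise_float(vec3(p.mul(4.0), 0.0)).mul(0.125);
  const n4 = mx_noise_float(vec3(p.mul(8.0), 0.0)).mul(0.0625);
  return n1.add(n2).add(n3).add(n4);
});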
Each frame, the previous frame is sampled at five positions: the current pixel and four neighbors offset by the noise. The darkest value among these samples is kept using min(). Because the trail paints black on white, this makes dark areas bleed outward, creating the spreading. The noise offsets ensure the result doesn’t look like a uniform blur.
#createFluidShader() { ...
  const blendDarken = Fn(([base, blend]) => min(blend, base));
  return Fn(() => { ...
    const texel = this.prevNode.sample(uvCoord);
    const texel2 = this.prevNode.sample(vec2(add(uvCoord.x, disp.x), uvCoord.y));
    const texel3 = this.prevNode.sample(vec2(sub(uvCoord.x, disp.x), uvCoord.y));
    const texel4 = this.prevNode.sample(vec2(uvCoord.x, add(uvCoord.y, disp.y)));
    const texel5 = this.prevNode.sample(vec2(uvCoord.x, sub(uvCoord.y, disp.y)));
    const floodcolor = texel.rgb.toVar();
    floodcolor.assign(blendDarken(floodcolor, texel2.rgb));
    floodcolor.assign(blendDarken(floodcolor, texel3.rgb));
    floodcolor.assign(blendDarken(floodcolor, texel4.rgb));
    floodcolor.assign(blendDarken(floodcolor, texel5.rgb));
    // ...
  }
}
The new mouse trail is blended in the same way. Darker areas of the trail overwrite lighter values, letting the most recent movements show through.
#createFluidShader() { ...
  return Fn(() => { ...
    const flippedUV = vec2(uvCoord.x, sub(float(1.0), uvCoord.y));
    const input = this.inputNode.sample(flippedUV);
    const combined = blendDarken(floodcolor, input.rgb);
    // ...
  }
  // ...
}
A small amount of white is added each frame and clamped to 1.0. Dark pixels gradually drift back toward white, so when the cursor stops, the fluid slowly fades and the solid scene reappears. At 0.015 per frame, it takes roughly one second at 60 fps for a fully black pixel to return to white.
#createFluidShader() { ...
  return Fn(() => { ...
    return min(vec3(1.0), add(combined, vec3(0.015)));
  }
  // ...
}
The Mask Output
The output is a grayscale texture updated every frame. White means show the solid scene, black means reveal the skeleton. The maskNode exposes this as a TSL texture node that plugs straight into the post-processing compositor.
Instancing the Solid & X-Ray Scenes
The entire reveal effect relies on two scenes rendered with the same layout and camera angle. One scene shows the solid body, the other the skeleton. Both are composited later in the post-processing pipeline, so even slight differences between them will make the reveal look wrong.

Both scenes share a camera, environment map, fog, and lighting setup. The only differences are the models themselves and a few minor material tweaks on the skeleton. Everything else is identical.
export default class Scene { ...
  #createScene() {
    const scene = new THREE.Scene();
    scene.fog = new THREE.Fog(0x000000, 1, 3);
    scene.background = new THREE.Color(0x000000);
    scene.environment = this.envMap;
    scene.environmentIntensity = 0.1;
    const light = new THREE.PointLight(0xffffff, 0.75);
    light.position.set(1, 2, 1);
    scene.add(light);
    return scene;
  }
  // ...
}
The #createScene() method is called twice, once for solidScene and once for wireScene. Fog and a black background fade the figures at the edges, preventing them from cutting off sharply against the darkness. The environment map is generated from RoomEnvironment and processed through a PMREM generator, providing soft ambient light without adding several individual lights. The intensity is kept low at 0.1, since the Fresnel material contributes most of the visual weight.
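The environment map setup isn’t included in the snippet above. Here’s a minimal sketch, under the assumption that the WebGPU build exposes PMREMGenerator with the same fromScene API as the WebGL build; the helper name is made up.
import * as THREE from "three/webgpu";
import { RoomEnvironment } from "three/addons/environments/RoomEnvironment.js";

// Hypothetical helper: bake RoomEnvironment into a PMREM texture that both
// scenes can share via scene.environment.
function createEnvMap(renderer) {
  const pmrem = new THREE.PMREMGenerator(renderer);
  const envMap = pmrem.fromScene(new RoomEnvironment()).texture;
  pmrem.dispose();
  return envMap;
}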
Positioning & Instancing the Models
Twelve copies of each model are rendered, but only two draw calls are used, one per scene, thanks to InstancedMesh. The InstancedModel class loads a DRACO-compressed .glb, extracts the geometry by mesh name, applies the Fresnel material, and arranges all instances in a grid (a sketch of the loading step follows the grid explanation below).
export default class InstancedModel { ...
  #setPositions(mesh) {
    const { count, spacing } = this;
    const gridSize = Math.ceil(Math.sqrt(count));
    const halfSize = ((gridSize - 1) * spacing) / 2;
    const spacingZ = spacing * 0.65;
    const halfSizeZ = ((gridSize - 1) * spacingZ) / 2;
    const dummy = new THREE.Object3D();
    for (let i = 0; i < count; i++) {
      const x = i % gridSize;
      const z = Math.floor(i / gridSize);
      const xOffset = z % 2 === 1 ? spacing / 2 : 0;
      dummy.position.set(
        x * spacing - halfSize + xOffset,
        0,
        z * spacingZ - halfSizeZ,
      );
      dummy.updateMatrix();
      mesh.setMatrixAt(i, dummy.matrix);
    }
    mesh.instanceMatrix.needsUpdate = true;
  }
  // ...
}
The grid uses a hexagonal stagger: every other row is offset by half a spacing unit on the X axis. This stops it looking like a rigid spreadsheet and gives it a more natural, packed arrangement. The Z spacing is compressed to 0.65 of the X spacing so the grid feels tighter front to back, which works better with the camera angle used.
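The loading side of InstancedModel isn’t shown above. Here’s a rough sketch of how it might look using the standard GLTFLoader + DRACOLoader pattern; the method name, url, meshName, and the decoder path are all assumptions.
import * as THREE from "three/webgpu";
import { GLTFLoader } from "three/addons/loaders/GLTFLoader.js";
import { DRACOLoader } from "three/addons/loaders/DRACOLoader.js";

export default class InstancedModel { ...
  async #loadModel(url, meshName) {
    // Decode the DRACO-compressed .glb (decoder files assumed to be served from /draco/)
    const dracoLoader = new DRACOLoader();
    dracoLoader.setDecoderPath("/draco/");
    const gltfLoader = new GLTFLoader();
    gltfLoader.setDRACOLoader(dracoLoader);

    const gltf = await gltfLoader.loadAsync(url);
    // Pull out the geometry by mesh name and build a single InstancedMesh from it
    const source = gltf.scene.getObjectByName(meshName);
    this.mesh = new THREE.InstancedMesh(source.geometry, this.material, this.count);
    this.#setPositions(this.mesh);
  }
  // ...
}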
Matching the Skeleton to the Body
To get the skeleton to sit correctly inside the body, both models need to occupy the same space. Exact topology isn’t required; the skeleton just needs to fit neatly inside the body mesh. In Blender, centre both models at the origin, match their scale, apply all transforms, and export them as .glb files with DRACO compression.

Building the Glowing Material
This is what gives the figures their look. The Fresnel effect makes edges glow bright while surfaces facing the camera stay dark, creating that X-ray, hologram feel. We mix between a near-black core and a bright blue at the edges, then pipe that same colour into the emissive channel so the figures glow on their own without needing strong scene lighting.

export function createFresnelMaterial({
  heightMax = 1.0,
  roughness = 1.0,
  color = vec3(0.2, 0.6, 1.0),
  emissiveIntensity = 0.75,
}) {
  const material = new MeshStandardNodeMaterial({
    metalness: 0,
    roughness,
  });
  const fresnel = pow(
    sub(float(1.0), normalView.dot(positionViewDirection.negate())),
    float(1.0),
  );
  const coreColor = vec3(0.0, 0.05, 0.1);
  const fresnelColor = mix(coreColor, color, fresnel);
  const heightFade = smoothstep(0.5, heightMax, positionLocal.y);
  const finalColor = fresnelColor.mul(heightFade);
  material.colorNode = finalColor;
  material.emissiveNode = finalColor.mul(emissiveIntensity);
  return material;
}
Both models are cut at the torso to save vertices. A smoothstep along local Y fades the bottom to black, hiding the hard edge and creating the appearance of light falloff.
Adding Camera Movement with Touch Fallback
Both scenes share a single PerspectiveCamera with a narrow 17° field of view. The tight FOV compresses depth, making the grid feel like a wall of figures rather than a scattered crowd. The camera follows the cursor with a smooth, damped ease while maintaining a fixed look point, adding a subtle sense of depth during movement.
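The camera rig itself isn’t covered in these snippets. Here’s a rough sketch of the idea: the mouseNormalized property matches the reference in the render loop later, but the easing factor, offsets, and look point are assumptions, and pointer events are used so touch drags work as a basic fallback.
import * as THREE from "three/webgpu";

export default class CameraRig {
  constructor(camera) {
    this.camera = camera;
    this.mouseNormalized = new THREE.Vector2(0.5, 0.5);
    this.lookTarget = new THREE.Vector3(0, 1, 0);
    // Pointer events cover both mouse and touch input
    window.addEventListener("pointermove", (e) => {
      this.mouseNormalized.set(
        e.clientX / window.innerWidth,
        e.clientY / window.innerHeight,
      );
    });
  }
  update() {
    // Map the normalized cursor to a small positional offset and ease toward it
    const targetX = (this.mouseNormalized.x - 0.5) * 0.4;
    const targetY = 1.2 - (this.mouseNormalized.y - 0.5) * 0.2;
    this.camera.position.x += (targetX - this.camera.position.x) * 0.05;
    this.camera.position.y += (targetY - this.camera.position.y) * 0.05;
    this.camera.lookAt(this.lookTarget);
  }
}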
Building the Post-Processing Pipeline
This is where everything comes together. The PostProcessing class takes both scenes, the camera, and the fluid mask, compositing them into the final image through a chain of TSL effects.
export default class PostProcessing { ...
  constructor(renderer, solidScene, wireScene, camera, fluidMaskNode) { ...
    this.pipeline = new THREE.PostProcessing(renderer);
    this.#compose();
    // ...
  }
  #compose() { ...
    const solidPass = pass(this.solidScene, this.camera);
    const solidColor = solidPass.getTextureNode("output");
    const wirePass = pass(this.wireScene, this.camera);
    const wireColor = wirePass.getTextureNode("output");
    // ...
  }
}
Each scene gets its own render pass, producing a texture node that can be sampled downstream.
Bloom affects only the solid scene, adding a subtle glow to the Fresnel edges while preserving the skeleton’s detail (I felt the scene lost a lot of its mojo when bloom was applied to the skeleton scene).
export default class PostProcessing { ...
  #compose() { ...
    const bloomPass = bloom(solidColor.sample(screenUV), 0.4, 0.05);
    // ...
  }
  // ...
}
Scan lines are layered over the bloom. A high-frequency sine wave along the screen’s Y axis is clamped to negative values, darkening the image and keeping the effect subtractive rather than adding brightness.
export default class PostProcessing { ...
  #compose() { ...
    const scanRaw = sin(mul(screenUV.y, float(1250.0)));
    const scanDarken = clamp(scanRaw, -1.0, 0.0).mul(-0.15);
    const scanLines = sub(float(1.0), scanDarken);
    const bloomWithScanLines = bloomPass.mul(scanLines);
    // ...
  }
  // ...
}
The fluid mask composite forms the core of the effect. The mask is inverted and used to blend between the processed solid scene and the raw wire scene.
export default class PostProcessing { ...
  #compose() { ...
    const fluidMask = sub(float(1.0), this.fluidMaskNode.sample(screenUV).r);
    const blended = mix(
      bloomWithScanLines,
      wireColor.sample(screenUV),
      fluidMask,
    );
    // ...
  }
  // ...
}
After that it’s just atmosphere: film grain so the image doesn’t feel too clean, a slight desaturation to pull the blue back a bit, and a colour grade that mixes dark blue into the blacks to lift the shadows. Honestly, these were all just tweaked by eye until it felt right.
export default class PostProcessing { ...
  #compose() { ...
    const noise = mx_noise_float(
      vec3(screenUV.mul(2000.0), time.mul(20.0)),
    ).mul(0.015);
    const withEffects = blended.sub(noise);
    const luminance = dot(withEffects, vec3(0.299, 0.587, 0.114));
    const desaturated = mix(
      vec3(luminance, luminance, luminance),
      withEffects,
      float(0.985),
    );
    const lowContrast = mix(vec3(0.0, 0.0, 0.2), desaturated, float(0.9));
    this.pipeline.outputNode = lowContrast;
    // ...
  }
  // ...
}
Understanding the Render Loop
The orchestration is straightforward. Each frame updates the scene, feeds the mouse position into the trail, runs the fluid simulation from that input, and renders the post-processing pipeline.
class Three { ...
  #animate() {
    const delta = this.clock.getDelta();
    this.scene.animate(delta, this.clock.elapsedTime);
    // Update the mouse trail → fluid sim
    this.mouseTrail.update(
      this.scene.cameraRig.mouseNormalized.x,
      this.scene.cameraRig.mouseNormalized.y,
    );
    this.fluidSim.update(this.context.renderer, this.mouseTrail.texture);
    // Render everything (scene passes + effects)
    this.postProcessing.render();
    requestAnimationFrame(() => this.#animate());
  }
  // ...
}
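The renderer behind this.context.renderer isn’t shown in these excerpts; a minimal WebGPU setup generally looks like the sketch below (the antialias flag and sizing choices are assumptions).
import * as THREE from "three/webgpu";

// WebGPURenderer falls back to a WebGL2 backend when WebGPU isn't available
const renderer = new THREE.WebGPURenderer({ antialias: true });
await renderer.init();
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.setPixelRatio(Math.min(window.devicePixelRatio, 2));
document.body.appendChild(renderer.domElement);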
The Final Product
Here’s the final effect with everything wired up: mouse trail, fluid simulation, both instanced scenes, and the post-processing pipeline all working together.
Conclusion
If you want to take it further, everything here is modular. Swap the models, change the fluid behaviour, tweak the post-processing and you’ve got something completely different. I’m always experimenting with this kind of stuff, so feel free to reach out on X @sinzvii if you have questions or just want to chat about Three.js. Thanks for reading.


