
Invisible Forces: The Making of Phantom.land’s Interactive Grid and 3D Face Particle System


From the outset, we knew we wanted something that subverted the standard corporate website formula. Instead, inspired by the unseen forces that drive creativity, connection and transformation, we arrived at the idea of invisible forces. Could we take the powerful yet intangible elements that shape our world (movement, emotion, instinct, and inspiration) and manifest them in a digital space?

We were excited about creating something that included many custom interactions and a truly experiential feel. However, our concern was choosing a set of tools that would allow most of our developers to contribute to and maintain the site after launch.

We chose to start from a Next.js / React base, as we often do at Phantom. React also has the advantage of being compatible with the excellent React Three Fiber library, which we used to seamlessly bridge the gap between our DOM components and the WebGL contexts used across the site. For styles, we are using our own CSS components as well as SASS.

For interactive behaviours and animation, we chose to use GSAP for two main reasons. First, it contains a number of plugins we know and love, such as SplitText, CustomEase and ScrollTrigger. Second, GSAP allows us to use a single animation framework across DOM and WebGL components.
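
As a rough illustration of that second point, here is a minimal sketch (not our production setup; the helper names and the uOpacity uniform are invented for the example) of how a single registration step lets the same tween API drive both DOM elements and WebGL uniforms:

// gsapSetup.ts (hypothetical example)
import gsap from 'gsap';
import {CustomEase} from 'gsap/CustomEase';
import {ScrollTrigger} from 'gsap/ScrollTrigger';
import {SplitText} from 'gsap/SplitText';
import type {ShaderMaterial} from 'three';

// register the plugins once, at app startup
gsap.registerPlugin(CustomEase, ScrollTrigger, SplitText);

// DOM: reveal a heading as it scrolls into view
export function revealHeading(el: HTMLElement) {
  gsap.from(el, {
    opacity: 0,
    y: 40,
    scrollTrigger: {trigger: el, start: 'top 80%'},
  });
}

// WebGL: tween a shader uniform with the exact same API
export function fadeInMaterial(material: ShaderMaterial) {
  gsap.to(material.uniforms.uOpacity, {value: 1, duration: 1, ease: 'power2.out'});
}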

We could talk at length about the details behind every single animation and micro-interaction on the site, but for this piece we have chosen to focus on two of its most distinctive parts: the homepage grid and the scrollable employee face particle carousel.

The Homepage Grid

It took us a long time to get this view to perform and feel just how we wanted it to. In this article we will focus on the interactive part; for more on how we made things performant, head to our earlier article: Welcome back to Phantomland.

Grid View

The project's grid view is integrated into the homepage by incorporating a primitive Three.js object into a React Three Fiber scene.

// GridView.tsx
const GridView = () => {
  return (
    <Canvas>
      ...
      <ProjectsGrid />
      <Postprocessing />
    </Canvas>
  );
}

// ProjectsGrid.tsx
const ProjectsGrid = ({atlases, tiles}: Props) => {
  const {canvas, camera} = useThree();

  const grid = useMemo(() => {
    return new Grid(canvas, camera, atlases, tiles);
  }, [canvas, camera, atlases, tiles]);

  if (!grid) return null;
  return (
    <primitive object={grid} />
  );
}

We initially wanted to write all of the grid code using React Three Fiber but realised that, due to the complexity of our grid component, a vanilla Three.js class would be easier to maintain.
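
The full Grid class is beyond the scope of this article, but its overall shape is roughly the following sketch (our simplified reconstruction, with a hypothetical TileData type, not the production code):

// Grid.ts (simplified sketch of the vanilla Three.js class wrapped by <primitive>)
import {Object3D, PerspectiveCamera, Texture, Vector2} from 'three';

type TileData = {id: string; atlasIndex: number}; // hypothetical shape

export class Grid extends Object3D {
  private velocity = new Vector2();
  private positionOffset = new Vector2();

  constructor(
    private canvas: HTMLCanvasElement,
    private camera: PerspectiveCamera,
    private atlases: Texture[],
    private tiles: TileData[],
  ) {
    super();
    // build the tile meshes from the atlases and add them as children here
  }

  // called once per frame; applies the cursor offset, drag velocity and
  // inertia covered in the sections below
  update() {}
}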

One of the key elements that gives our grid its iconic feel is our post-processing distortion effect. We implemented this feature by creating a custom shader pass within our post-processing pipeline:

// Postprocessing.tsx
const Postprocessing = () => {
  const {gl, scene, camera} = useThree();

  // Create the effect composer
  const {effectComposer, distortionShader} = useMemo(() => {
    const renderPass = new RenderPass(scene, camera);
    const distortionShader = new DistortionShader();
    const distortionPass = new ShaderPass(distortionShader);
    const outputPass = new OutputPass();

    const effectComposer = new EffectComposer(gl);
    effectComposer.addPass(renderPass);
    effectComposer.addPass(distortionPass);
    effectComposer.addPass(outputPass);

    return {effectComposer, distortionShader};
  }, []);

  // Update distortion intensity when the grid state changes
  useEffect(() => {
    if (workgridState === WorkgridState.INTRO) {
      distortionShader.setDistortion(CONFIG.distortion.flat);
    } else {
      distortionShader.setDistortion(CONFIG.distortion.curved);
    }
  }, [workgridState, distortionShader]);

  // Render the composer every frame
  useFrame(() => {
    effectComposer.render();
  }, 1);

  return null;
}

When the grid transitions in and out on the site, the distortion intensity changes to make the transition feel natural. This animation is done with a simple tween in our DistortionShader class:

class DistortionShader extends ShaderMaterial {
  private distortionIntensity = 0;

  constructor() {
    super({
      name: 'DistortionShader',
      uniforms: {
        distortion: {value: new Vector2()},
        ...
      },
      vertexShader,
      fragmentShader,
    });
  }

  update() {
    const ratio = window.innerWidth / window.innerHeight;
    this.uniforms[DistortionShaderUniforms.DISTORTION].value.set(
      this.distortionIntensity * ratio,
      this.distortionIntensity * ratio,
    );
  }

  setDistortion(value: number) {
    gsap.to(this, {
      distortionIntensity: value,
      duration: 1,
      ease: 'power2.out',
      onUpdate: () => this.update(),
    });
  }
}

The distortion is then applied by our custom fragment shader:

// fragment.ts
export const fragmentShader = /* glsl */ `
  uniform sampler2D tDiffuse;
  uniform vec2 distortion;
  uniform float vignetteOffset;
  uniform float vignetteDarkness;

  varying vec2 vUv;

  // convert uv range from 0 -> 1 to -1 -> 1
  vec2 getShiftedUv(vec2 uv) {
    return 2. * (uv - .5);
  }

  // convert uv range from -1 -> 1 to 0 -> 1
  vec2 getUnshiftedUv(vec2 shiftedUv) {
    return shiftedUv * 0.5 + 0.5;
  }

  void main() {
    vec2 shiftedUv = getShiftedUv(vUv);
    float distanceToCenter = length(shiftedUv);

    // Lens distortion effect
    shiftedUv *= (0.88 + distortion * dot(shiftedUv, shiftedUv));
    vec2 transformedUv = getUnshiftedUv(shiftedUv);

    // Vignette effect
    float vignetteIntensity = smoothstep(0.8, vignetteOffset * 0.799, (vignetteDarkness + vignetteOffset) * distanceToCenter);

    // Sample the render texture and output the fragment
    vec3 color = texture2D(tDiffuse, transformedUv).rgb * vignetteIntensity;
    gl_FragColor = vec4(color, 1.);
  }
`;

We also added a vignette effect to the same post-processing shader to darken the corners of the viewport, focusing the user's attention toward the center of the screen.
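
The two vignette uniforms can be tweened the same way as the distortion. A small sketch under that assumption (the helper function is ours; only the uniform names come from the shader above):

// setVignette.ts (sketch; assumes the vignette uniforms live on the same material)
import gsap from 'gsap';
import type {ShaderMaterial} from 'three';

export function setVignette(material: ShaderMaterial, offset: number, darkness: number) {
  gsap.to(material.uniforms.vignetteOffset, {value: offset, duration: 1, ease: 'power2.out'});
  gsap.to(material.uniforms.vignetteDarkness, {value: darkness, duration: 1, ease: 'power2.out'});
}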

To make our home view as smooth as possible, we also spent a fair amount of time crafting the micro-interactions and transitions of the grid.

Ambient mouse offset

When the user moves their cursor around the grid, the grid shifts slightly in the opposite direction, creating a very subtle ambient floating effect. This is achieved by calculating the mouse position on the grid and moving the grid mesh accordingly:

getAmbientCursorOffset() {
  // Get the pointer coordinates in UV space (0 - 1 range)
  const uv = this.navigation.pointerUv;
  const offset = uv.subScalar(0.5).multiplyScalar(0.2);
  return offset;
}

update() {
  ...
  // Apply the cursor offset to the grid position
  const cursorOffset = this.getAmbientCursorOffset();
  this.mesh.position.x += cursorOffset.x;
  this.mesh.position.y += cursorOffset.y;
}

Drag Zoom

When the grid is dragged around, a zoom-out effect occurs and the camera appears to pan away from the grid. We created this effect by detecting when the user starts and stops dragging, then using that to trigger a GSAP animation with a custom ease for extra control.

onPressStart = () => {
  this.animateCameraZ(0.5, 1);
}

onPressEnd = (isDrag: boolean) => {
  if (isDrag) {
    this.animateCameraZ(0, 1);
  }
}

animateCameraZ(distance: number, duration: number) {
  gsap.to(this.camera.position, {
    z: distance,
    duration,
    ease: CustomEase.create('cameraZoom', '.23,1,0.32,1'),
  });
}

Drag Motion

Last but not least, when the user drags across the grid and releases their cursor, the grid keeps sliding with a certain amount of inertia.

drag(offset: Vector2) {
  this.dragAction = offset;

  // Gradually increase the velocity with drag time and distance
  this.velocity.lerp(offset, 0.8);
}

// Every frame
update() {
  // positionOffset is later used to move the grid mesh
  if (this.isDragAction) {
    // while the user is dragging, add the drag value to the offset
    this.positionOffset.add(this.dragAction.clone());
  } else {
    // when the user is not dragging, add the velocity to the offset
    this.positionOffset.add(this.velocity);
  }

  this.dragAction.set(0, 0);
  // Attenuate the velocity over time
  this.velocity.lerp(new Vector2(), 0.1);
}

Face Particles

The second major component we want to highlight is our employee face carousel, which presents team members through a dynamic 3D particle system. Built with React Three Fiber's BufferGeometry and custom GLSL shaders, the implementation leverages custom shader materials for lightweight performance and flexibility, allowing us to generate entire 3D face representations using only a 2D color photograph and its corresponding depth map; no 3D models required.

Core Concept: Depth-Driven Particle Generation

The foundation of our face particle system lies in converting 2D imagery into volumetric 3D representations. We've kept things efficient: each face uses only two optimized 256×256 WebP images (under 15KB each).
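
Loading those two maps is straightforward with drei's useTexture. A sketch with a hypothetical hook name and file paths (our illustration, not the production loader):

// useFaceTextures.ts (hypothetical paths and hook name)
import {useTexture} from '@react-three/drei';

export function useFaceTextures(memberSlug: string) {
  const [colorMap, depthMap] = useTexture([
    `/faces/${memberSlug}-color.webp`, // 256×256 color pass
    `/faces/${memberSlug}-depth.webp`, // 256×256 greyscale depth map
  ]);
  return {colorMap, depthMap};
}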

To capture the images, each member of the Phantom team was 3D scanned using RealityScan from Unreal Engine on iPhone, creating a 3D model of their face.

These scans were cleaned up and then rendered from Cinema 4D with a position and a color pass.

The position pass was converted into a greyscale depth map in Photoshop, and this, together with the color pass, was retouched where needed, cropped, and then exported from Photoshop to share with the dev team.

Each face is built from roughly 78,400 particles (a 280×280 grid), where each particle's position and appearance is determined by sampling data from our two source textures.

/* generate the position attribute arrays */
const POINT_AMOUNT = 280;

const points = useMemo(() => {
  const length = POINT_AMOUNT * POINT_AMOUNT;
  const vPositions = new Float32Array(length * 3);
  const vIndex = new Float32Array(length * 2);
  const vRandom = new Float32Array(length * 4);

  for (let i = 0; i < length; i++) {
      const i2 = i * 2;
      // UV-style grid index used to sample the color and depth maps
      vIndex[i2] = (i % POINT_AMOUNT) / POINT_AMOUNT;
      vIndex[i2 + 1] = Math.floor(i / POINT_AMOUNT) / POINT_AMOUNT;

      const i3 = i * 3;
      // random starting position on a unit sphere
      const theta = Math.random() * 360;
      const phi = Math.random() * 360;
      vPositions[i3] = 1 * Math.sin(theta) * Math.cos(phi);
      vPositions[i3 + 1] = 1 * Math.sin(theta) * Math.sin(phi);
      vPositions[i3 + 2] = 1 * Math.cos(theta);

      const i4 = i * 4;
      vRandom.set(
        Array(4)
          .fill(0)
          .map(() => Math.random()),
        i4,
      );
  }

  return {vPositions, vRandom, vIndex};
}, []);
// React Three Fiber component structure
const FaceParticleSystem = ({particlesData, currentDataIndex}) => {
  return (
    <points ref={pointsRef} position={pointsPosition}>
      <bufferGeometry>
        <bufferAttribute attach="attributes-vIndex"
             args={[points.vIndex, 2]} />
        <bufferAttribute attach="attributes-position"
             args={[points.vPositions, 3]} />
        <bufferAttribute attach="attributes-vRandom"
             args={[points.vRandom, 4]} />
      </bufferGeometry>

      <shaderMaterial
        blending={NormalBlending}
        transparent={true}
        fragmentShader={faceFrag}
        vertexShader={faceVert}
        uniforms={uniforms}
      />
    </points>
  );
};

The depth map provides normalized values (0–1) that translate directly to Z-depth positioning. A value of 0 represents the furthest point (background), while 1 represents the closest point (typically the nose tip).

/* vertex shader */

// sample the depth data for each particle
vec3 depthTexture1 = texture2D(depthMap1, vIndex.xy).xyz;

// convert depth to a Z-position (depthValue is the blended depth, see the transition section below)
float zDepth = (1. - depthValue.z);
pos.z = (zDepth * 2.0 - 1.0) * zScale;

Dynamic Particle Scaling Through Color Analysis

One of the key techniques that brings the faces to life is the use of color data to influence particle scale. In our vertex shader, rather than using uniform particle sizes, we analyze the color density of each pixel so that brighter, more vibrant areas of the face (like eyes, lips, or well-lit cheeks) generate larger, more prominent particles, while darker areas (shadows, hair) create smaller, subtler particles. The result is a more organic, lifelike representation that emphasizes facial features naturally.

/* vertex shader */

vec3 colorTexture1 = texture2D(colorMap1, vIndex.xy).xyz;

// calculate the color density
float density = (mainColorTexture.x + mainColorTexture.y + mainColorTexture.z) / 3.;

// map density to particle scale
float pScale = mix(pScaleMin, pScaleMax, density);

The calibration below demonstrates the influence of color (contrast, brightness, and so on) on the final 3D particle formation.

Ambient Noise Animation

To prevent a static appearance and maintain visual interest, we apply continuous noise-based animation to all particles. This ambient animation system uses curl noise to create subtle, flowing movement across the entire face structure.

/* vertex shader */

// primary curl noise for overall movement
pos += curlNoise(pos * curlFreq1 + time) * noiseScale * 0.1;

// animation updates in React Three Fiber
useFrame((state, delta) => {
  if (!materialRef.current) return;

  materialRef.current.uniforms.time.value = state.clock.elapsedTime * NOISE_SPEED;

  // update the rotation based on mouse interaction
  easing.damp(pointsRef.current.rotation, 'y', state.pointer.x * 0.12 * Math.PI, 0.25, delta);
  easing.damp(pointsRef.current.rotation, 'x', -state.pointer.y * 0.05 * Math.PI, 0.25, delta);
});

Face Transition Animation

When transitioning between different team members, we combine timeline-based interpolation with visual effects written into the shader materials.

GSAP-Driven Lerp Method

The transition foundation uses GSAP timelines to animate multiple shader parameters simultaneously:

timelineRef.current = gsap
  .timeline()
  .fromTo(uniforms.transition, {value: 0}, {value: 1.3, duration: 1.6})
  .to(uniforms.posZ, {value: particlesParams.offset_z, duration: 1.6}, 0)
  .to(uniforms.zScale, {value: particlesParams.face_scale_z, duration: 1.6}, 0);

And the shader handles the visual blending between the two face states:

/* vertex shader */

// smooth transition curve
float speed = clamp(transition * mix(0.8, .9, transition), 0., 1.0);
speed = smoothstep(0.0, 1.0, speed);

// blend the textures
vec3 mainColorTexture = mix(colorTexture1, colorTexture2, speed);
vec3 depthValue = mix(depthTexture1, depthTexture2, speed);

To add visual interest during transitions, we inject extra noise that is strongest at the midpoint of the transition. This creates a subtle "disturbance" effect where particles temporarily deviate from their target positions, making transitions feel more dynamic and organic.

/* vertex shader */

// secondary noise movement applied during the transition
float randomZ = vRandom.y + cnoise(pos * curlFreq2 + t2) * noiseScale2;

float smoothTransition = abs(sin(speed * PI));
pos.x += nxScale * randomZ * 0.1 * smoothTransition;
pos.y += nyScale * randomZ * 0.1 * smoothTransition;
pos.z += nzScale * randomZ * 0.1 * smoothTransition;

Custom Depth of Field Effect

To enhance the three-dimensional perception, we implemented a custom depth of field effect directly in our shader material. It calculates the view-space distance of each particle and modulates both opacity and size based on proximity to a configurable focus plane.

/* vertex shader - calculate the view distance */

vec4 viewPosition = viewMatrix * modelPosition;
vDistance = abs(focus + viewPosition.z);

// apply the distance to the point size for the blur effect
gl_PointSize = pointSize * pScale * vDistance * blur * totalScale;

/* fragment shader - calculate a distance-based alpha for the DOF */

float alpha = (1.04 - clamp(vDistance * 1.5, 0.0, 1.0));
gl_FragColor = vec4(color, alpha);

Challenges: Unifying Face Scales

One of the challenges we faced was achieving visual consistency across the different team members' photos. Each photograph was captured under slightly different conditions: varying lighting, camera distance and facial proportions. We therefore went through every face to calibrate several scaling factors:

  • Depth scale calibration, to ensure no nose protrudes too aggressively
  • Color density balancing, to maintain consistent particle size relationships
  • Focus plane optimization, to prevent excessive blur on any individual face
// individual face parameters requiring manual tuning

particle_params: {
  offset_z: 0,           // overall Z-position
  z_depth_scale: 0,      // depth map scaling factor
  face_size: 0,          // overall face scale
}
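
When a face becomes active, those hand-tuned values are pushed into the shader. A sketch of what that might look like (posZ, zScale and totalScale appear as uniforms earlier in the article; mapping face_size to totalScale is our assumption):

// applyFaceParams.ts (sketch)
import type {ShaderMaterial} from 'three';

type FaceParams = {offset_z: number; z_depth_scale: number; face_size: number};

export function applyFaceParams(material: ShaderMaterial, params: FaceParams) {
  material.uniforms.posZ.value = params.offset_z;
  material.uniforms.zScale.value = params.z_depth_scale;
  material.uniforms.totalScale.value = params.face_size;
}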

Final Words

Our face particle system demonstrates how a simple but careful technical implementation can create engaging visual experiences from minimal assets. By combining lightweight WebP textures, custom shader materials and animation, we've created a system that transforms simple 2D portraits into interactive 3D figures.

Check out the full website.

Curious about what we're up to in the Phantom studio? Or have a project you think we'd be interested in? Get in touch.


