Horizontal scroll galleries with parallax results have develop into a staple of recent net design. You’ve most likely seen numerous tutorials on this impact, and for good purpose. It’s visually hanging and provides depth to what would in any other case be a easy picture carousel.
However right here’s the factor: most implementations are purely DOM-based, utilizing CSS transforms and JavaScript to maneuver components round. Whereas this works tremendous for easy instances, it may well shortly develop into janky whenever you’re coping with a number of pictures, heavy parallax calculations, and {smooth} animations all working on the primary thread.
You’ve most likely come throughout Camille Mormal’s portfolio a surprising instance of how fluid and performant a horizontal gallery may be. What units it aside? It’s all rendered in WebGL. The smoothness comes from offloading the heavy lifting to the GPU, the place these operations thrive.
This acquired me considering: what if we may construct this impact step-by-step, beginning with a conventional 2D DOM method after which elevating it to WebGL? Not solely would this present the efficiency advantages, however it will additionally demystify how these results really work underneath the hood.
On this tutorial, we’ll create a horizontal parallax gallery in two methods:
- The 2D method: utilizing HTML, CSS, and JavaScript with customized {smooth} scrolling and parallax transforms
- The WebGL method: utilizing Three.js to render all the pieces on the GPU with shader-based parallax for buttery-smooth efficiency
We’ll maintain dependencies minimal (solely Three.js for the 3D half) and give attention to understanding the core mechanics: how {smooth} scrolling works, how parallax is calculated, and the way to synchronize DOM measurements with WebGL rendering.
Let’s dive in.
The Preliminary Setup
For this tutorial, we’ll maintain issues easy and targeted. No complicated construct instruments or heavy dependencies simply Vite for quick improvement and Three.js for the WebGL half in a while.
Venture construction
Right here’s what our undertaking appears like:
├── css/
│ └── base.css
├── public/
│ ├── 1.webp
│ ├── 2.webp
│ └── ... (10 pictures whole)
├── src/
│ ├── gallery/
│ │ └── gallery.css
│ ├── utils/
│ │ └── math.ts
│ └── principal.ts
├── index.html
└── package deal.json
Dependencies
{
"dependencies": {
"three": "^0.170.0"
},
"devDependencies": {
"typescript": "^5.6.3",
"vite": "^6.0.3",
"vite-plugin-glsl": "^1.3.0"
}
}
That’s it. We’re utilizing:
- Vite for bundling and dev server
- TypeScript for kind security
- Three.js (we’ll add this later for the WebGL model)
- vite-plugin-glsl to import shader information (additionally for later)
HTML Construction
Let’s begin with the markup for our gallery. It’s intentionally easy, a wrapper and a container with pictures:
<physique class="demo-1 loading">
<principal>
<div class="content material">
<div class="gallery__wrapper">
<div class="gallery__image__container">
<image class="gallery__media">
<img
src="1.webp"
alt="Picture 1"
class="gallery__media__image"
draggable="false"
/>
</image>
<image class="gallery__media">
<img src="2.webp" alt="Picture 2" class="gallery__media__image" draggable="false" />
</image>
<!-- ... 8 extra pictures -->
</div>
</div>
</div>
</principal>
<script kind="module" src="/src/principal.ts"></script>
</physique>
Just a few issues to notice:
- The
loadingclass on the physique, we’ll take away this as soon as pictures are preloaded draggable="false"prevents the default drag habits- We’re wrapping pictures in

<!-- ... extra pictures -->
</div>
</div>
</physique>
Discover the id="gl" on the physique, that is how principal.ts detects which model to load.
In our CSS, we disguise these DOM pictures as soon as WebGL is prepared:
physique.demo-2 .gallery__media__image__gl {
opacity: 0; /* Cover DOM pictures, present solely WebGL */
}
We maintain the DOM components within the HTML as a result of:
- They allow us to use
getBoundingClientRect()to get positions - They preserve the structure construction (flexbox, gaps, and so forth.)
- They supply fallback if WebGL fails
- They’re our “supply of fact” for sizing and positioning
Constructing GLMedia: Step by Step
Let’s begin with the fundamental construction:
// src/gallery/GLMedia.ts
import * as THREE from "three";
interface Props {
scene: THREE.Group;
aspect: HTMLElement;
viewport: { width: quantity; peak: quantity };
digicam: THREE.PerspectiveCamera;
geometry: THREE.PlaneGeometry;
renderer: THREE.WebGLRenderer;
}
export class GLMedia {
digicam: THREE.PerspectiveCamera;
aspect: HTMLElement;
scene: THREE.Group;
geometry: THREE.PlaneGeometry;
renderer: THREE.WebGLRenderer;
materials: THREE.ShaderMaterial;
texture: THREE.Texture;
viewport: { width: quantity; peak: quantity };
bounds: DOMRect;
mesh: THREE.Mesh;
constructor({ scene, aspect, viewport, digicam, geometry, renderer }: Props) {
this.scene = scene;
this.aspect = aspect;
this.viewport = viewport;
this.digicam = digicam;
this.geometry = geometry;
this.renderer = renderer;
this.bounds = this.aspect.getBoundingClientRect();
this.createMesh();
this.createTexture();
}
}
Nothing fancy but. We’re simply storing references and getting the DOM aspect’s bounding field.
Step 1: Creating the Mesh
Let’s create a fundamental mesh with a shader materials:
createMesh() {
this.materials = new THREE.ShaderMaterial({
uniforms: {
uTexture: { worth: null },
uResolution: 1,
),
,
uImageResolution: { worth: new THREE.Vector2(1, 1) },
},
vertexShader: `
various vec2 vUv;
void principal() {
vUv = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4(place, 1.0);
}
`,
fragmentShader: `
precision highp float;
various vec2 vUv;
uniform sampler2D uTexture;
void principal() {
vec3 col = texture2D(uTexture, vUv).rgb;
gl_FragColor = vec4(col, 1.0);
}
`,
});
this.mesh = new THREE.Mesh(this.geometry, this.materials);
this.scene.add(this.mesh);
}
What are these uniforms?
uTexture: The picture texture (we’ll load this subsequent)uResolution: The mesh’s width and peak in pixels, this may match the DOM aspectuImageResolution: The precise picture’s width and peak, we want this forobject-fit: cowlhabits
For now, the shaders are fundamental: simply go via UVs and render the feel.
Step 2: Loading the Texture
createTexture() {
this.texture = new THREE.TextureLoader().load(
this.aspect.getAttribute("src") as string,
(textual content) => {
const materials = this.mesh?.materials as THREE.ShaderMaterial;
if (materials?.uniforms?.uImageResolution) {
materials.uniforms.uImageResolution.worth.set(
textual content.picture.width,
textual content.picture.peak,
);
}
},
);
this.materials.uniforms.uTexture.worth = this.texture;
}
Why do we want uImageResolution?
Right here’s the issue: our mesh could be 800×1000 pixels, however the picture could be 1920×1080. If we simply map the feel on to the mesh, it’ll stretch and deform.
In CSS, we use object-fit: cowl to deal with this, it crops and facilities the picture to fill the container with out distortion. In WebGL, we have to do that manually within the shader.
The uImageResolution uniform shops the unique picture dimensions. Later, we’ll write a shader perform that calculates the right UV coordinates to realize object-fit: cowl habits.
The callback perform fires when the feel hundreds, letting us seize the precise picture dimensions from textual content.picture.width and textual content.picture.peak.
Step 3: Sizing the Mesh
Now let’s make the mesh match the DOM aspect’s measurement:
updateScale() {
this.bounds = this.aspect.getBoundingClientRect();
this.mesh?.scale.set(this.bounds.width, this.bounds.peak, 1);
this.materials?.uniforms.uResolution.worth.set(
this.bounds.width,
this.bounds.peak,
);
}
Bear in mind our digicam setup the place 1 pixel = 1 unit? That is the place it pays off.
We learn the DOM aspect’s dimensions with getBoundingClientRect(), then instantly apply them to the mesh’s scale. Due to our digicam FOV calculation, a scale of (800, 1000, 1) makes the mesh precisely 800×1000 pixels on display.
We additionally replace the uResolution uniform so the shader is aware of the mesh’s dimensions, essential for the object-fit: cowl calculation.
Step 4: Positioning the Mesh
That is the place DOM/WebGL sync will get difficult. Three.js makes use of a center-origin coordinate system, however getBoundingClientRect() provides us top-left positions. We have to convert between them.
However there’s one other problem: horizontal scrolling. Bear in mind our {smooth} scroll system in principal.ts? The scroll.present worth that’s being lerped each body? We have to account for that in our WebGL positioning.
updatePosition(scroll: quantity) {
const x =
this.bounds.left -
scroll -
this.viewport.width / 2 +
this.bounds.width / 2;
const y =
-this.bounds.high + this.viewport.peak / 2 - this.bounds.peak / 2;
this.mesh.place.set(x, y, 0);
}
Breaking down the maths:
X place (horizontal):
const x = this.bounds.left - scroll - this.viewport.width / 2 + this.bounds.width / 2;
Let’s go step-by-step:
this.bounds.left→ DOM aspect’s left edge in pixels from viewport left (fromgetBoundingClientRect())- scroll→ That is essential. Thescrollparameter comes fromprincipal.ts‘sscroll.presentworth, our easily lerped horizontal scroll place. Because the consumer scrolls, this worth will increase, and we have to shift all meshes left by that quantity to simulate horizontal motion. With out this, the meshes would keep mounted whereas the DOM scrolls.- this.viewport.width / 2→ Shift origin from left edge to heart. Three.js positions meshes from their heart level, however the viewport’s origin (0,0) is on the heart of the display, not the top-left just like the DOM.+ this.bounds.width / 2→ Offset by half the mesh width. SincegetBoundingClientRect()provides us the left fringe of the aspect, however Three.js positions from the middle, we have to shift proper by half the mesh’s width.
Y place (vertical):
const y = -this.bounds.high + this.viewport.peak / 2 - this.bounds.peak / 2;
-this.bounds.high→ Invert Y axis (DOM counts down, WebGL counts up)+ this.viewport.peak / 2→ Shift origin from high to heart- this.bounds.peak / 2→ Offset by half the mesh peak
This components ensures the WebGL mesh sits precisely the place the DOM aspect can be.
Placing It Collectively
Let’s replace our GL.ts to instantiate these meshes:
createGallery() {
this.allMedias = this.medias.map((media) => {
return new GLMedia({
scene: this.group,
aspect: media,
viewport: this.display,
digicam: this.digicam,
geometry: this.geometry,
renderer: this.renderer,
});
});
this.scene.add(this.group);
}
And add the render methodology to GLMedia:
render(scroll: quantity) {
this.updatePosition(scroll);
}
onResize(viewport: { width: quantity; peak: quantity }) {
this.viewport = viewport;
this.updateScale();
}
What You Ought to See
At this level, when you open index2.html, you need to see… stretched, distorted pictures positioned appropriately however wanting horrible.

Why? As a result of we’re mapping textures 1:1 to the mesh UVs with out accounting for side ratio variations. A 1920×1080 picture stretched onto an 800×1000 mesh appears terrible.
That is precisely why we want the object-fit: cowl equal in our shader.
Reaching object-fit: cowl in Shaders
Bear in mind these stretched, distorted pictures? That’s as a result of we’re mapping the feel on to the mesh with out accounting for side ratio variations. In CSS, object-fit: cowl handles this robotically. In WebGL, we have to do it ourselves.
The coverUv() Perform
Let’s add this perform to our fragment shader:
// src/shaders/mediaFragment.glsl
precision highp float;
various vec2 vUv;
uniform sampler2D uTexture;
uniform vec2 uResolution;
uniform vec2 uImageResolution;
vec2 coverUv(vec2 uv, vec2 decision, vec2 imageResolution) {
vec2 ratio = vec2(
min((decision.x / decision.y) / (imageResolution.x / imageResolution.y), 1.0),
min((decision.y / decision.x) / (imageResolution.y / imageResolution.x), 1.0)
);
return vec2(
uv.x * ratio.x + (1.0 - ratio.x) * 0.5,
uv.y * ratio.y + (1.0 - ratio.y) * 0.5
);
}
void principal() {
vec2 uv = coverUv(vUv, uResolution, uImageResolution);
vec3 col = texture2D(uTexture, uv).rgb;
gl_FragColor = vec4(col, 1.);
}
How coverUv() Works
This perform calculates the right UV coordinates to crop and heart the picture, identical to object-fit: cowl.
Step 1: Calculate side ratios
vec2 ratio = vec2(
min((decision.x / decision.y) / (imageResolution.x / imageResolution.y), 1.0),
min((decision.y / decision.x) / (imageResolution.y / imageResolution.x), 1.0)
);
We’re evaluating two side ratios:
decision→ the mesh dimensions (e.g., 800×1000)imageResolution→ the precise picture dimensions (e.g., 1920×1080)
For every axis, we calculate how a lot we have to scale to suit. The min(..., 1.0) ensures we solely scale down, by no means up.
Instance:
- Mesh: 800×1000 (side ratio 0.8)
- Picture: 1920×1080 (side ratio 1.78)
The picture is wider than the mesh. To cowl the mesh peak clever, we have to crop the perimeters.
ratio.x = min((0.8) / (1.78), 1.0) = min(0.45, 1.0) = 0.45ratio.y = min((1.25) / (0.56), 1.0) = min(2.23, 1.0) = 1.0
Step 2: Apply the ratio and heart
return vec2(
uv.x * ratio.x + (1.0 - ratio.x) * 0.5,
uv.y * ratio.y + (1.0 - ratio.y) * 0.5
);
We scale the UVs by the ratio, then offset by half the remaining area to heart the crop.
Utilizing our instance:
- X:
uv.x * 0.45 + (1.0 - 0.45) * 0.5 = uv.x * 0.45 + 0.275 - Y:
uv.y * 1.0 + 0.0 = uv.y
This implies:
- Horizontally, we’re solely utilizing 45% of the picture width, centered
- Vertically, we’re utilizing the complete picture peak
The outcome: the picture covers all the mesh with out distortion, cropping the perimeters as wanted.
Earlier than and After
With out coverUv(), your pictures look stretched and flawed. With it, they give the impression of being good, correctly cropped and centered identical to object-fit: cowl in CSS.
Now your WebGL gallery ought to look equivalent to the DOM model (minus the parallax, which we’re about so as to add).

Parallax in WebGL
That is the place WebGL actually shines. As a substitute of calculating positions for every picture in JavaScript and making use of CSS transforms, we are able to do the parallax impact solely within the shader on the GPU.
The Idea
Bear in mind how we did parallax in 2D? We:
- Made the picture 125% broad with a -12.5% offset (CSS)
- Calculated every picture’s place within the viewport (JS)
- Utilized a
translateXremodel (JS → CSS)
In WebGL, we do one thing related however in UV area:
- We scale the UVs to make the feel smaller (like zooming out), creating area round it
- We calculate the picture’s place within the viewport (JS)
- We shift the UVs horizontally primarily based on that place (shader)
Including the Parallax Uniform
First, add the uParallax uniform to our materials in GLMedia.ts
createMesh() {
this.materials = new THREE.ShaderMaterial({
uniforms: {
uTexture: { worth: null },
uResolution: 1,
),
,
uImageResolution: { worth: new THREE.Vector2(1, 1) },
uParallax: { worth: 0 }, // Add this
},
vertexShader: vertex,
fragmentShader: fragment,
});
this.mesh = new THREE.Mesh(this.geometry, this.materials);
this.scene.add(this.mesh);
}
Calculating the Parallax Worth
Add this methodology to GLMedia.ts:
updateParallax(scroll: quantity) {
if (!this.bounds) return;
const { innerWidth } = window;
const elementLeft = this.bounds.left - scroll;
const elementRight = elementLeft + this.bounds.width;
// Solely calculate parallax for seen components
if (elementRight >= 0 && elementLeft <= innerWidth) {
// Calculate place relative to viewport heart
// Vary from -1 to 1 as aspect strikes via viewport
const elementCenter = elementLeft + this.bounds.width / 2;
const viewportCenter = innerWidth / 2;
const distance = (elementCenter - viewportCenter) / innerWidth;
// Apply parallax impact
const parallaxValue = distance * 0.4;
this.materials.uniforms.uParallax.worth = parallaxValue;
}
}
Breaking it down:
Step 1: Calculate aspect place
const elementLeft = this.bounds.left - scroll;
const elementRight = elementLeft + this.bounds.width;
We get the aspect’s left and proper edges relative to the viewport, accounting for scroll.
Step 2: Examine visibility
if (elementRight >= 0 && elementLeft <= innerWidth)
Solely calculate parallax for pictures at the moment seen on display. Small optimization.
Step 3: Calculate distance from heart
const elementCenter = elementLeft + this.bounds.width / 2;
const viewportCenter = innerWidth / 2;
const distance = (elementCenter - viewportCenter) / innerWidth;
We discover the aspect’s heart level, examine it to the viewport heart, and normalize the outcome by dividing by viewport width.
This provides us a worth that:
- Is damaging when the aspect is left of heart
- Is 0 when the aspect is centered
- Is optimistic when the aspect is true of heart
- Ranges roughly from -1 to 1
Step 4: Apply depth
const parallaxValue = distance * 0.4;
We multiply by 0.4 to regulate the depth. That is your “parallax energy” knob:
0.2→ refined impact0.4→ what we’re utilizing0.6→ dramatic impact
Replace the Render Methodology
Don’t overlook to name updateParallax each body:
render(scroll: quantity) {
this.updateParallax(scroll);
this.updatePosition(scroll);
}
Making use of Parallax within the Shader
Now replace the fragment shader to make use of the parallax worth:
// src/shaders/mediaFragment.glsl
precision highp float;
various vec2 vUv;
uniform sampler2D uTexture;
uniform vec2 uResolution;
uniform vec2 uImageResolution;
uniform float uParallax;
vec2 coverUv(vec2 uv, vec2 decision, vec2 imageResolution) {
vec2 ratio = vec2(
min((decision.x / decision.y) / (imageResolution.x / imageResolution.y), 1.0),
min((decision.y / decision.x) / (imageResolution.y / imageResolution.x), 1.0)
);
return vec2(
uv.x * ratio.x + (1.0 - ratio.x) * 0.5,
uv.y * ratio.y + (1.0 - ratio.y) * 0.5
);
}
void principal() {
vec2 uv = coverUv(vUv, uResolution, uImageResolution);
// Apply parallax impact by shifting UVs horizontally
uv.x += uParallax * 1.0;
// Scale UVs to create area for parallax motion
// That is like making the picture smaller inside its body
uv -= 0.5;
uv *= 0.85; // Scale all the way down to 85% (leaving 15% area for motion)
uv += 0.5;
vec3 col = texture2D(uTexture, uv).rgb;
gl_FragColor = vec4(col, 1.);
}
What’s taking place right here:
Step 1: Shift UVs
uv.x += uParallax * 1.0;
We offset the UVs horizontally by the parallax worth. Because the picture strikes via the viewport:
- Left of heart →
uParallaxis damaging → texture shifts left - Proper of heart →
uParallaxis optimistic → texture shifts proper
The * 1.0 multiplier controls how a lot the feel strikes. You possibly can regulate this for stronger/weaker results.
Step 2: Create motion area
uv -= 0.5;
uv *= 0.85;
uv += 0.5;
That is the shader equal of our 2D CSS trick (125% width, -12.5% offset).
We:
- Middle the UVs round (0, 0) by subtracting 0.5
- Scale them all the way down to 85% (making the feel successfully smaller)
- Shift again to (0.5, 0.5) to re-center
The outcome: the feel is now 85% of its regular measurement, with 15% of empty area round it. After we shift the UVs with uv.x += uParallax, we’re shifting inside that 15% buffer.
Why 0.85?
This worth controls how a lot “room” the feel has to maneuver:
0.9→ much less area, extra refined parallax (however danger of displaying edges)0.85→ good steadiness (what we’re utilizing)0.8→ more room, stronger parallax doable (however texture seems extra zoomed out)
You possibly can regulate this primarily based in your parallax depth. Simply be sure the parallax shift doesn’t exceed the obtainable area, otherwise you’ll see the feel edges.
The Full Image
Now you might have:
- JavaScript calculating every picture’s place within the viewport
- JavaScript passing a normalized parallax worth to the shader
- Shader shifting the feel’s UVs primarily based on that worth
- Shader working inside a scaled-down UV area (the “buffer zone”)
All of this occurs on the GPU, each body, for each pixel. No DOM manipulation, no structure recalculations, simply pure, {smooth}, GPU-accelerated rendering.
Evaluating 2D vs WebGL Parallax
2D (DOM):
- CSS creates bodily area (125% width)
- JS calculates place by way of
getBoundingClientRect() - JS applies
remodel: translateX()to every picture - Browser composites on GPU (hopefully)
WebGL:
- Shader creates UV area (0.85 scale)
- JS calculates place by way of
getBoundingClientRect() - JS passes worth to uniform
- Shader shifts UVs and samples texture
- The whole lot rendered on GPU
The tip outcome appears related, however the WebGL model:
- Offloads extra work to the GPU
- Avoids touching the DOM each body
- Handles extra pictures with higher efficiency
- Opens the door to extra complicated shader results
Tweaking Values with lil-gui (Non-compulsory)
Wish to experiment with the parallax impact and actually perceive how every worth impacts the outcome? Let’s add lil-gui, a light-weight GUI controller.
npm set up lil-gui
Add this to your GL.ts:
import * as THREE from "three";
import { GLMedia } from "./GLMedia";
import GUI from "lil-gui";
export class GL {
// ... current properties
gui: GUI;
params = {
parallaxIntensity: 0.4,
uvScale: 0.85,
shaderMultiplier: 1.0,
};
constructor() {
// ... current setup
this.setupGUI();
}
setupGUI() {
this.gui = new GUI();
this.gui
.add(this.params, "parallaxIntensity", 0, 1, 0.01)
.identify("Parallax Depth")
.onChange((worth: quantity) => {
this.allMedias.forEach((media) => {
media.parallaxIntensity = worth;
});
});
this.gui
.add(this.params, "uvScale", 0.7, 1.0, 0.01)
.identify("UV Scale (Buffer)")
.onChange((worth: quantity) => {
this.allMedias.forEach((media) => {
media.materials.uniforms.uUvScale.worth = worth;
});
});
this.gui
.add(this.params, "shaderMultiplier", 0, 2, 0.1)
.identify("Shader Multiplier")
.onChange((worth: quantity) => {
this.allMedias.forEach((media) => {
media.materials.uniforms.uShaderMultiplier.worth = worth;
});
});
}
}
Replace GLMedia.ts to make use of these values:
export class GLMedia {
parallaxIntensity = 0.4;
createMesh() {
this.materials = new THREE.ShaderMaterial({
uniforms: {
uTexture: { worth: null },
uResolution: { worth: new THREE.Vector2(/* ... */) },
uImageResolution: { worth: new THREE.Vector2(1, 1) },
uParallax: { worth: 0 },
uUvScale: { worth: 0.85 },
uShaderMultiplier: { worth: 1.0 },
},
// ... shaders
});
}
updateParallax(scroll: quantity) {
// ... current code
const parallaxValue = distance * this.parallaxIntensity;
this.materials.uniforms.uParallax.worth = parallaxValue;
}
}
And replace the fragment shader to make use of the uniforms:
uniform float uParallax;
uniform float uUvScale;
uniform float uShaderMultiplier;
void principal() {
vec2 uv = coverUv(vUv, uResolution, uImageResolution);
uv.x += uParallax * uShaderMultiplier;
uv -= 0.5;
uv *= uUvScale;
uv += 0.5;
vec3 col = texture2D(uTexture, uv).rgb;
gl_FragColor = vec4(col, 1.);
}
Now you may tweak values in real-time and see precisely how they have an effect on the parallax:
- Parallax Depth: Controls how a lot JS calculates (the space multiplier)
- UV Scale: Controls the buffer area (decrease = more room for motion)
- Shader Multiplier: Controls how a lot the shader shifts UVs
Mess around! You’ll shortly perceive the connection between these values and the visible impact.
(Non-compulsory) Train: Including Contact Help
Wish to take this additional? Right here’s a problem: add contact/drag help so customers can scroll the gallery by swiping.
Hints:
- Pay attention for
touchstart,touchmove, andtouchendoccasions - Monitor the contact delta and add it to
scroll.goal - Don’t overlook to deal with
pointerdown,pointermove, andpointerupfor each mouse and contact - Add momentum/inertia for a extra pure really feel
This can make the gallery work fantastically on cellular gadgets and add an additional layer of interactivity.
Conclusion
After I began constructing this gallery, I needed to transcend the everyday “right here’s the way to do horizontal scroll with parallax” tutorial. There are already loads of these on the market, they usually principally give attention to the DOM-based method.
What impressed me, and what I hope this tutorial conveys, is how efficiency and method can essentially change the texture of a web site. The distinction between a janky 30fps parallax and a buttery {smooth} 60fps GPU accelerated one isn’t simply technical, it’s visceral. Customers really feel it instantly, even when they will’t articulate why.
Camille Mormal’s portfolio is an ideal instance of this. The fluidity isn’t unintended, it’s the results of rendering all the pieces in WebGL and letting the GPU do what it does greatest. That’s the method I needed to interrupt down right here: not simply “the way to make it work,” however “the way to make it really feel proper.”
We’ve coated:
- Customized {smooth} scrolling with lerp and easing
- 2D parallax with CSS sizing and JavaScript transforms
- The vital connection between CSS buffer area and JavaScript motion
- WebGL fundamentals with DOM/3D synchronization
- Shader-based parallax that runs solely on the GPU
- Efficiency issues and when to optimize
The 2D model is nice for easy galleries. The WebGL model is for whenever you need to push additional whenever you need that further fluidity, whenever you need to deal with dozens of pictures with out breaking a sweat, or whenever you need to add extra complicated shader results down the road.
I hope this deep dive into each approaches provides you not simply copy paste code, however an actual understanding of what’s taking place underneath the hood. That’s what issues. That’s what permits you to adapt these methods to your personal initiatives and push them even additional.
Now go construct one thing stunning.


