Hey everybody, I’m Jorge Toloza, a freelance Creative Developer based in Colombia. It’s a pleasure for me to have the chance to inspire, teach and learn in collaboration with Codrops.
In this tutorial, we’ll create a progressive gradient blur effect that dynamically changes based on the position of the images. We’ll use CSS to position the elements and then obtain the coordinates for the objects in WebGL.
Most of the logic lives in the shader, so you can translate this effect to any other WebGL library like Three.js, or even achieve it in vanilla WebGL.
HTML structure
First we need to define our image container. It’s quite simple: we have a <figure> and an <img> tag.
<figure class="media">
  <img src="/img/8.webp" alt="tote bag">
</figure>
Styles
The styles are really simple too; we’re hiding the images because they will be rendered on the canvas.
.media {
  img {
    width: 100%;
    visibility: hidden;
  }
}
GL Class
Now in the JS, we have a couple of important classes. The first one is GL.js, which contains most of the logic to render our images using OGL.
import { Renderer, Camera, Transform, Plane } from 'ogl'
import Media from './Media.js'

export default class GL {
  constructor () {
    this.images = [...document.querySelectorAll('.media')]
    this.createRenderer()
    this.createCamera()
    this.createScene()
    this.onResize()
    this.createGeometry()
    this.createMedias()
    this.update()
    this.addEventListeners()
  }
  createRenderer () {
    this.renderer = new Renderer({
      canvas: document.querySelector('#gl'),
      alpha: true
    })
    this.gl = this.renderer.gl
  }
  createCamera () {
    this.camera = new Camera(this.gl)
    this.camera.fov = 45
    this.camera.position.z = 20
  }
  createScene () {
    this.scene = new Transform()
  }
  createGeometry () {
    this.planeGeometry = new Plane(this.gl, {
      heightSegments: 50,
      widthSegments: 100
    })
  }
  createMedias () {
    this.medias = this.images.map(item => {
      return new Media({
        gl: this.gl,
        geometry: this.planeGeometry,
        scene: this.scene,
        renderer: this.renderer,
        screen: this.screen,
        viewport: this.viewport,
        $el: item,
        img: item.querySelector('img')
      })
    })
  }
  onResize () {
    this.screen = {
      width: window.innerWidth,
      height: window.innerHeight
    }
    this.renderer.setSize(this.screen.width, this.screen.height)
    this.camera.perspective({
      aspect: this.gl.canvas.width / this.gl.canvas.height
    })
    // Size of the visible area in world units at the camera's z position
    const fov = this.camera.fov * (Math.PI / 180)
    const height = 2 * Math.tan(fov / 2) * this.camera.position.z
    const width = height * this.camera.aspect
    this.viewport = {
      height,
      width
    }
    if (this.medias) {
      this.medias.forEach(media => media.onResize({
        screen: this.screen,
        viewport: this.viewport
      }))
      this.onScroll({ scroll: window.scrollY })
    }
  }
  onScroll ({ scroll }) {
    if (this.medias) {
      this.medias.forEach(media => media.onScroll(scroll))
    }
  }
  update () {
    if (this.medias) {
      this.medias.forEach(media => media.update())
    }
    this.renderer.render({
      scene: this.scene,
      camera: this.camera
    })
    window.requestAnimationFrame(this.update.bind(this))
  }
  addEventListeners () {
    window.addEventListener('resize', this.onResize.bind(this))
  }
}
For humans
We import some important objects, get all the images from the DOM, and then create a Media object for each image. Finally, we update the medias and the renderer in the update method using requestAnimationFrame.
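In case it helps, here is a minimal sketch of how the class could be wired up. The canvas markup, the entry file and the scroll forwarding below are assumptions for illustration, not part of the original demo code:

// index.js — hypothetical entry point.
// Assumes the page contains <canvas id="gl"></canvas> next to the .media figures.
import GL from './GL.js'

window.addEventListener('load', () => {
  // GL grabs every .media element, builds a plane per image
  // and starts its own requestAnimationFrame loop.
  const gl = new GL()

  // GL only registers the resize listener itself, so the page scroll
  // is forwarded here to keep the planes in sync with the layout.
  window.addEventListener('scroll', () => {
    gl.onScroll({ scroll: window.scrollY })
  })
})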
Media class
This is the cool one. First, we set all the options, then we set up the shader and wait for the texture to load before setting the uImageSize uniform. The mesh will be a Plane. In the onResize method, we set the position of the media on the canvas based on the position we have in the CSS. I’m using the same approach as in Bizarro’s tutorial; you can take a look if you want to know more about the positioning and the cover behavior for the images.
import { Mesh, Program, Texture } from 'ogl'
import vertex from '../../shaders/vertex.glsl'
import fragment from '../../shaders/fragment.glsl'

export default class Media {
  constructor ({ gl, geometry, scene, renderer, screen, viewport, $el, img }) {
    this.gl = gl
    this.geometry = geometry
    this.scene = scene
    this.renderer = renderer
    this.screen = screen
    this.viewport = viewport
    this.img = img
    this.$el = $el
    this.scroll = 0
    this.createShader()
    this.createMesh()
    this.onResize()
  }
  createShader () {
    const texture = new Texture(this.gl, {
      generateMipmaps: false
    })
    this.program = new Program(this.gl, {
      depthTest: false,
      depthWrite: false,
      fragment,
      vertex,
      uniforms: {
        tMap: { value: texture },
        uPlaneSize: { value: [0, 0] },
        uImageSize: { value: [0, 0] },
        uViewportSize: { value: [this.viewport.width, this.viewport.height] },
        uTime: { value: 100 * Math.random() },
      },
      transparent: true
    })
    const image = new Image()
    image.src = this.img.src
    image.onload = _ => {
      texture.image = image
      this.program.uniforms.uImageSize.value = [image.naturalWidth, image.naturalHeight]
    }
  }
  createMesh () {
    this.plane = new Mesh(this.gl, {
      geometry: this.geometry,
      program: this.program
    })
    this.plane.setParent(this.scene)
  }
  onScroll (scroll) {
    this.scroll = scroll
    this.setY(this.y)
  }
  update () {
    this.program.uniforms.uTime.value += 0.04
  }
  setScale (x, y) {
    // Body reconstructed following the cover-style sizing from Bizarro's tutorial referenced above:
    // the plane covers the same fraction of the viewport that the element covers of the screen.
    x = x || this.$el.offsetWidth
    y = y || this.$el.offsetHeight
    this.plane.scale.x = this.viewport.width * x / this.screen.width
    this.plane.scale.y = this.viewport.height * y / this.screen.height
    this.plane.program.uniforms.uPlaneSize.value = [this.plane.scale.x, this.plane.scale.y]
  }
  setX (x = 0) {
    this.x = x
    this.plane.position.x = -(this.viewport.width / 2) + (this.plane.scale.x / 2) + (this.x / this.screen.width) * this.viewport.width
  }
  setY (y = 0) {
    this.y = y
    this.plane.position.y = (this.viewport.height / 2) - (this.plane.scale.y / 2) - ((this.y - this.scroll) / this.screen.height) * this.viewport.height
  }
  onResize ({ screen, viewport } = {}) {
    if (screen) {
      this.screen = screen
    }
    if (viewport) {
      this.viewport = viewport
      this.plane.program.uniforms.uViewportSize.value = [this.viewport.width, this.viewport.height]
    }
    this.setScale()
    this.setX(this.$el.offsetLeft)
    this.setY(this.$el.offsetTop)
  }
}
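To make the positioning math easier to follow, here is the same pixel-to-world-unit conversion from setScale/setX/setY written out with made-up numbers; the screen, viewport and element sizes below are purely illustrative:

// Illustrative numbers only: a 600x400 px figure at (300, 100) on a 1200x800 px screen.
const screen = { width: 1200, height: 800 }
const viewport = { width: 24.8, height: 16.5 } // world units from the camera fov math in GL.onResize
const el = { offsetWidth: 600, offsetHeight: 400, offsetLeft: 300, offsetTop: 100 }

// Scale: the plane covers the same fraction of the viewport that the element covers of the screen.
const scaleX = viewport.width * el.offsetWidth / screen.width    // 24.8 * 0.5 = 12.4
const scaleY = viewport.height * el.offsetHeight / screen.height // 16.5 * 0.5 = 8.25

// Position: start at the viewport's left/top edge, offset by half the plane,
// then shift by the element's CSS offset converted to world units.
const x = -(viewport.width / 2) + (scaleX / 2) + (el.offsetLeft / screen.width) * viewport.width   // 0
const y = (viewport.height / 2) - (scaleY / 2) - (el.offsetTop / screen.height) * viewport.height  // 2.0625

console.log(x, y) // the plane now sits exactly where the hidden <img> is laid out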
The Vertex
A very simple implementation: we just pass the UV and the position along to render the images.
precision highp float;

attribute vec3 position;
attribute vec2 uv;

uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;

varying vec2 vUv;

void main() {
  vUv = uv;
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
The Fragment
Okay, here is the final file; we’re almost there.
precision highp float;

uniform vec2 uImageSize;
uniform vec2 uPlaneSize;
uniform vec2 uViewportSize;
uniform float uTime;
uniform sampler2D tMap;

varying vec2 vUv;

/*
  by @arthurstammet
  https://shadertoy.com/view/tdXXRM
*/
float tvNoise (vec2 p, float ta, float tb) {
  return fract(sin(p.x * ta + p.y * tb) * 5678.);
}

vec3 draw(sampler2D image, vec2 uv) {
  return texture2D(image, vec2(uv.x, uv.y)).rgb;
}

float rand(vec2 co){
  return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);
}

/*
  inspired by https://www.shadertoy.com/view/4tSyzy
  @anastadunbar
*/
vec3 blur(vec2 uv, sampler2D image, float blurAmount){
  vec3 blurredImage = vec3(0.);
  float gradient = smoothstep(0.8, 0.0, (gl_FragCoord.y / uViewportSize.y) / uViewportSize.y);
  #define repeats 40.
  for (float i = 0.; i < repeats; i++) {
    vec2 q = vec2(cos(degrees((i / repeats) * 360.)), sin(degrees((i / repeats) * 360.))) * (rand(vec2(i, uv.x + uv.y)) + blurAmount);
    vec2 uv2 = uv + (q * blurAmount * gradient);
    blurredImage += draw(image, uv2) / 2.;
    q = vec2(cos(degrees((i / repeats) * 360.)), sin(degrees((i / repeats) * 360.))) * (rand(vec2(i + 2., uv.x + uv.y + 24.)) + blurAmount);
    uv2 = uv + (q * blurAmount * gradient);
    blurredImage += draw(image, uv2) / 2.;
  }
  return blurredImage / repeats;
}

void main() {
  // Cover-style crop so the image keeps its aspect ratio on the plane
  vec2 ratio = vec2(
    min((uPlaneSize.x / uPlaneSize.y) / (uImageSize.x / uImageSize.y), 1.0),
    min((uPlaneSize.y / uPlaneSize.x) / (uImageSize.y / uImageSize.x), 1.0)
  );
  vec2 uv = vec2(
    vUv.x * ratio.x + (1.0 - ratio.x) * 0.5,
    vUv.y * ratio.y + (1.0 - ratio.y) * 0.5
  );
  float t = uTime + 123.0;
  float ta = t * 0.654321;
  float tb = t * (ta * 0.123456);
  vec4 noise = vec4(1. - tvNoise(uv, ta, tb));
  vec4 final = vec4(blur(uv, tMap, 0.08), 1.0);
  final = final - noise * 0.08;
  gl_FragColor = final;
}
Let’s explain it a little bit. First, we apply a crop to the image to maintain its aspect ratio:
vec2 ratio = vec2(
  min((uPlaneSize.x / uPlaneSize.y) / (uImageSize.x / uImageSize.y), 1.0),
  min((uPlaneSize.y / uPlaneSize.x) / (uImageSize.y / uImageSize.x), 1.0)
);
vec2 uv = vec2(
  vUv.x * ratio.x + (1.0 - ratio.x) * 0.5,
  vUv.y * ratio.y + (1.0 - ratio.y) * 0.5
);
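As a quick worked example (with arbitrary sizes, not taken from the demo), here is that ratio math evaluated in plain JavaScript for a square plane showing a 2:1 landscape image:

// Arbitrary example: a square plane (1:1) displaying a 2000x1000 (2:1) image.
const plane = { x: 10, y: 10 }
const image = { x: 2000, y: 1000 }

const ratio = {
  x: Math.min((plane.x / plane.y) / (image.x / image.y), 1.0), // min(1 / 2, 1) = 0.5
  y: Math.min((plane.y / plane.x) / (image.y / image.x), 1.0)  // min(1 / 0.5, 1) = 1.0
}

// vUv.x = 0 maps to 0.25 and vUv.x = 1 maps to 0.75, so only the central half
// of the image is sampled horizontally: a centered crop, like CSS object-fit: cover.
const uvX0 = 0 * ratio.x + (1 - ratio.x) * 0.5
const uvX1 = 1 * ratio.x + (1 - ratio.x) * 0.5
console.log(ratio, uvX0, uvX1) // { x: 0.5, y: 1 } 0.25 0.75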
Next, we play with the time to get TV noise for our images using the tvNoise function:
float t = uTime + 123.0;
float ta = t * 0.654321;
float tb = t * (ta * 0.123456);
vec4 noise = vec4(1. - tvNoise(uv, ta, tb));
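If it helps to see it outside GLSL, here is the same hash mirrored in JavaScript; the input values are only illustrative, but they show that every pixel gets a fresh pseudo-random value in [0, 1) each frame, since ta and tb drift with uTime:

// JS mirror of tvNoise(p, ta, tb) = fract(sin(p.x * ta + p.y * tb) * 5678.)
const fract = v => v - Math.floor(v)
const tvNoise = (p, ta, tb) => fract(Math.sin(p.x * ta + p.y * tb) * 5678)

const t = 123.0 // illustrative; in the shader uTime starts at 100 * Math.random()
const ta = t * 0.654321
const tb = t * (ta * 0.123456)
console.log(tvNoise({ x: 0.3, y: 0.7 }, ta, tb)) // some value between 0 and 1

// In the shader, subtracting (1. - tvNoise) * 0.08 from the color darkens each
// pixel by a small, time-varying random amount, which reads as TV static.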
For the blur, I’m using the blur function based on the Gaussian blur implementation by @anastadunbar. We’re basically averaging a set of samples taken around the current pixel over the number of repeats.
The important part is the gradient variable. We use gl_FragCoord and uViewportSize to generate a fixed gradient at the bottom of the viewport, so we can apply the blur based on how close each pixel is to that edge.
vec3 blur(vec2 uv, sampler2D image, float blurAmount){
  vec3 blurredImage = vec3(0.);
  float gradient = smoothstep(0.8, 0.0, (gl_FragCoord.y / uViewportSize.y) / uViewportSize.y);
  #define repeats 40.
  for (float i = 0.; i < repeats; i++) {
    vec2 q = vec2(cos(degrees((i / repeats) * 360.)), sin(degrees((i / repeats) * 360.))) * (rand(vec2(i, uv.x + uv.y)) + blurAmount);
    vec2 uv2 = uv + (q * blurAmount * gradient);
    blurredImage += draw(image, uv2) / 2.;
    q = vec2(cos(degrees((i / repeats) * 360.)), sin(degrees((i / repeats) * 360.))) * (rand(vec2(i + 2., uv.x + uv.y + 24.)) + blurAmount);
    uv2 = uv + (q * blurAmount * gradient);
    blurredImage += draw(image, uv2) / 2.;
  }
  return blurredImage / repeats;
}
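To see what a single iteration of that loop does, here is the offset math for one sample mirrored in JavaScript (GLSL’s degrees() is just radians * 180 / PI; the concrete numbers are illustrative). Each sample is pushed away from the current pixel along a randomized ring, and the gradient factor scales that push, so pixels near the bottom of the viewport get a wider, blurrier spread:

// JS mirror of one sample offset from the blur loop (illustrative values only).
const degrees = r => r * 180 / Math.PI
const rand = (x, y) => {
  const v = Math.sin(x * 12.9898 + y * 78.233) * 43758.5453
  return v - Math.floor(v)
}

const repeats = 40
const blurAmount = 0.08
const i = 10                  // sample index
const uv = { x: 0.4, y: 0.6 } // current pixel
const gradient = 1.0          // ~1 near the bottom edge, falling to 0 higher up

const angle = degrees((i / repeats) * 360) // a huge angle, exactly as in the shader
const q = {
  x: Math.cos(angle) * (rand(i, uv.x + uv.y) + blurAmount),
  y: Math.sin(angle) * (rand(i, uv.x + uv.y) + blurAmount)
}

// The sampled position; the offset shrinks to zero wherever gradient is 0.
const uv2 = { x: uv.x + q.x * blurAmount * gradient, y: uv.y + q.y * blurAmount * gradient }
console.log(uv2)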
Then we can return the final color:
vec4 final = vec4(blur(uv, tMap, 0.08), 1.);
final = final - noise * 0.08;
gl_FragColor = final;
You should get something like this:
And that’s it! Thanks for reading. You can use the comments section if you have any questions. I hope this tutorial was useful to you 🥰.
Images by @jazanadipatocu.