Editor’s note: If you’ve been around web graphics for a while, you probably know Hector Arellano, a.k.a. Hat, a developer who has spent years pushing the limits of what’s possible in the browser. We invited him to share his journey: a 13-year quest through fluid simulations, from the early days of WebGL hacks to the breakthroughs enabled by WebGPU. This is more than a technical deep dive; it’s a story of persistence, experimentation, and the evolution of graphics on the web.
Note that the demo relies on WebGPU, which isn’t supported in all browsers. Please make sure you’re using a WebGPU-compatible browser, such as the latest versions of Google Chrome or Microsoft Edge with WebGPU enabled.
And now, here’s Hector to tell his story.
Before you start reading… go and get a drink, this is long.
13 years ago…
I was in front of my computer staring at the screen, minding my own business (bored), when a good friend of mine called Felix told me, very serious and excited, that a new demo from the Gathering party had been released. It had fluid simulations, particle animations, amazing shading effects and, above all… it was beautiful, really beautiful.
Back then, WebGL was relatively new, bringing 3D-accelerated graphics to the browser, and it seemed like it would open many doors for creating compelling effects. Naively, I thought WebGL could be a good candidate for making something like the demo Felix had just shown me.
The problem was that once I started reading about how that demo was made, I faced a harsh truth: it used graphics API features I had never heard of, such as “Atomics”, “Indirect Draw Calls”, “Indirect Dispatch”, “Storage Buffers”, “Compute Shaders” and “3D Textures”. These were the features of a modern graphics API, and none of those advanced capabilities existed in WebGL.
Not only that, but the demo also used algorithms and techniques that sounded incredibly complex: “Smoothed Particle Hydrodynamics (SPH)” to drive the particle animations, “histopyramids” for stream compaction (why would I need that?), “marching cubes (on the GPU)” (triangles from particles???), and many other things that seemed completely beyond my understanding.
I didn’t know where to start, and to make things worse, Felix bet me that there was no way fluids like that could be done in the browser for production.
10 Years Ago…
Three years had passed since my conversation with Felix about fluid simulation, and he told me there was yet another amazing demo I had to watch. Not only did it feature the previous fluid simulations, but it also rendered the geometry using a real-time ray tracer; the materials were spectacular, and the results were stunning.
The demo showed ray tracing in a way that looked so real. Of course, now the challenge was not only to be able to simulate fluids, I also wanted to render them with a ray tracer to get those nice reflection and refraction effects.
It took me around 3 years to understand everything and to be able to hack my way around WebGL to replicate the things that could be done with a modern API. Performance was a limiting factor, but I was able to run particle simulations using SPH, which behaved like fluids, and I was also able to create a mesh from those particles using marching cubes (see figure 1).
There were no atomics, but I could separate the data into the RGBA channels of a texture using multiple draw calls; there were no storage buffers or 3D textures, but I could save data in textures and emulate 3D textures using 2D layers. There were no indirect draw calls, but I could simply launch the expected number of draw calls to generate the necessary information, or to draw the expected number of triangles. There were no compute shaders, but I could do GPGPU computing using the vertex shader to reallocate data; I couldn’t write to multiple memory positions inside a buffer, but at least I was able to generate an acceleration structure on the GPU.
The implementation was working, but it was not even remotely as beautiful as that original demo (Felix told me it was simply “ugly”… and it was ugly, you can see the results in figure 2); really, it was just showing how to hack things. I didn’t know much about distance fields or about making the shading more interesting than the usual Phong shading.
Performance limited much of what could be done in terms of ambient occlusion or more complex effects for rendering fluids, like reflections or refractions, but at least I could render something.
7 Years Ago…
Three more years passed and I made some progress, implementing a hybrid ray tracer too; the idea was to use the marching cubes to generate the triangles and then use the ray tracer to evaluate the secondary rays used for the reflection and refraction effects. I was also able to use the same ray tracer to traverse the acceleration structure and implement caustics. All of this followed the ideas and concepts from Matt Swoboda, who was the original author of those earlier demos. Really, most of my work was simply taking his ideas and trying to make them work in WebGL (good luck with that).
The results were good visually (take a look at figure 3), but I needed a really good GPU to make it work. Back then I was working with an NVIDIA GTX 1080, which meant that even if it was feasible in WebGL, it was not going to be usable for production; there was no way a mobile device, or even a decent laptop, was going to handle it.
It was really frustrating to see “results” but not be able to use them in a real project. In fact, my morale was low; I had spent so much time trying to achieve something, but it didn’t turn out as I had hoped. At the very least, I could use that codebase to continue learning new features and techniques.
So I stopped… and Felix won the bet.
This is a really long introduction for a tutorial, but I wanted to put things in context. Sometimes you might think that a demo or effect can be done fairly quickly, but the reality is that some things take years to even become feasible; it takes time to learn all the required techniques, and you might rely on the ideas of other people to make things work out… or not.
WebGPU Enters the Scene
Remember all those fancy terms and techniques from the modern graphics API? It turns out that WebGPU is based on modern API standards, which meant that I didn’t have to rely on hacks to implement all the ideas from Matt Swoboda: I could use compute shaders to work with storage buffers, I could use atomics to save indices for neighbourhood search and stream compaction, and I could use indirect dispatch to calculate just the necessary number of triangles and also render them.
I wanted to learn WebGPU, so I decided to port all the fluids work and absorb the new paradigm; making a small demo would help me learn how to work with the new features, how to deal with pipelines and bindings, and how to handle memory and manage resources. It might not be production-ready, but it would help me learn WebGPU at a deeper level.
In fact… the demo for this article isn’t suitable for “production”: it runs at 120 fps on good MacBook Pro machines like the M3 Max, at 60 fps on a MacBook Pro M1 Pro, and at around 50 fps on other good machines… Put this thing on a MacBook Air and your dreams of fluid simulation will fade very quickly.
So why is this useful then?
It turns out that this simple example is a collection of techniques that can be used on their own; the demo just happens to be a wrapper that uses all of them. As a developer you might be interested in animating particles, or generating a surface from a potential to avoid ray marching, or rendering indirect lighting or world-space ambient occlusion. The idea is to take the code from the repository, read this article, and use the parts you are interested in to build your own ideas.
This demo can be separated into four major steps:
- Fluid simulation: this step is responsible for driving the fluid animation using particle simulations based on position based dynamics.
- Geometry generation: this step creates the rendering geometry (triangles) from the particle simulation using the marching cubes algorithm on the GPU.
- Geometry rendering: this step renders the triangles generated previously; the displayed material uses distance fields to evaluate the thickness of the geometry for subsurface scattering, and voxel cone tracing to calculate the ambient occlusion.
- Composition: this step is responsible for creating the blurred reflections on the floor, applying the colour correction, and applying the bloom effect used to enhance the lighting.
Fluid Simulations
A few years ago, if you wanted to be part of the cool kids doing graphics you had to show that you could make your own fluid simulation; if you made simulations in 2D you were considered a really good graphics developer… if you made them in 3D you gained “god status” (please take into account that all of this happened inside my head). Since I wanted that “god status” (and wanted to win the bet) I started reading everything I could about how to make 3D simulations.
It turns out there are many ways to do it, and among them there was one called “Smoothed Particle Hydrodynamics” (SPH). Of course, I could have done the sensible thing (acted rationally) and checked which type of simulation would be more suitable for the web, but I took this path because the name sounded so cool in my head. This technique works over particles, which turned out to be really useful in the end, because I eventually swapped the SPH algorithm for position based dynamics.
You can understand SPH through some analogies if you have worked with steering behaviours before.
Three.js has many amazing examples of the flocking algorithm, which is based on steering behaviours. Flocking is the result of integrating the attraction, alignment and repulsion steering behaviours; these behaviours are blended with cosine functions, which decide the type of behaviour each particle will receive based on the distance to the other particles surrounding it.
SPH works in a similar way: you evaluate the density of each particle, and this value is used to calculate the pressure applied to each one. The pressure effect can be seen as similar to the attraction/repulsion effects from flocking, meaning that it makes the particles move closer or farther apart depending on the density in SPH.
The interesting thing is that the density of each particle is a function of the distances to the surrounding particles, which means that the pressure applied is an indirect function of distance. This is why the two types of simulations can be considered “similar”: SPH has a unified pressure effect, which modifies positions, based on densities that rely on distances; the flocking simulation relies on attraction and repulsion behaviours, used to modify positions, which are functions of distance too.
The viscosity term in SPH simulations is analogous to the alignment term from flocking: both steps align the velocity of each particle to the velocity of its surroundings, which basically means checking the difference between the average velocity field and the velocity of the particle being evaluated.
So to (over)simplify things, you can think of SPH as a way to set up flocking with physically correct values that make your particles behave like… fluids. It’s true that it requires more steps, like the surface tension, and I’m leaving aside the concept of blending functions in SPH, but if you can make flocking work… you can make SPH work too.
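To ground the analogy, here is a minimal sketch of an SPH density pass in WGSL; it is not code from the repository, the kernel constants are illustrative, and it uses a brute-force loop for readability instead of the neighbourhood grid introduced just below:

```wgsl
// Minimal SPH density sketch (illustrative, not the repo's code).
// Density is a function of the distances to the surrounding particles,
// exactly like the attraction/repulsion weighting in flocking.

struct Particle { position : vec3<f32>, density : f32 };

@group(0) @binding(0) var<storage, read_write> particles : array<Particle>;

const H : f32 = 1.2;                     // smoothing radius
const H2 : f32 = H * H;
const MASS : f32 = 1.0;
const H9 : f32 = H2 * H2 * H2 * H2 * H;  // h^9
const POLY6 : f32 = 315.0 / (64.0 * 3.14159265 * H9);

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id : vec3<u32>) {
  let i = id.x;
  if (i >= arrayLength(&particles)) { return; }
  var density = 0.0;
  // brute force O(n^2) for clarity; the real thing only visits neighbours
  for (var j = 0u; j < arrayLength(&particles); j++) {
    let d = particles[i].position - particles[j].position;
    let r2 = dot(d, d);
    if (r2 < H2) {
      let w = H2 - r2;
      density += MASS * w * w * w; // poly6 kernel: (h^2 - r^2)^3
    }
  }
  particles[i].density = density * POLY6;
}
```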
Another thing to consider about the flocking algorithm is that it has O(n^2) complexity, meaning that it gets really slow when dealing with lots of particles, since each particle has to check its relationship with every other particle in the simulation.
Flocking and SPH need an acceleration structure that gathers only the closest particles within a range; this avoids checking all the particles in the simulation and takes the complexity from O(n^2) to O(k*n), where k is the number of particles to check. This acceleration can be done using a regular voxel grid which stores up to 4 particles inside each voxel.
The algorithm then checks at most 108 particles, evaluating up to 4 of them for each of the 27 voxels surrounding the particle to update; that might sound like a lot of particles to evaluate, but it’s much better than evaluating the original 80,000 particles used in this example.
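A sketch of that traversal could look like the following; the buffer layout and names are assumptions made for illustration (each voxel stores up to 4 particle indices, with 0 meaning an empty slot, so indices carry a +1 offset):

```wgsl
// Illustrative neighbourhood traversal (not the repo's code).

const GRID_SIZE : u32 = 64u;

@group(0) @binding(0) var<storage, read> grid : array<u32>;
@group(0) @binding(1) var<storage, read> positions : array<vec4<f32>>;

fn countNeighbours(center : vec3<i32>) -> u32 {
  var count = 0u;
  // visit the 27 voxels surrounding (and including) the particle's voxel
  for (var x = -1; x <= 1; x++) {
    for (var y = -1; y <= 1; y++) {
      for (var z = -1; z <= 1; z++) {
        let v = center + vec3<i32>(x, y, z);
        if (any(v < vec3<i32>(0)) || any(v >= vec3<i32>(i32(GRID_SIZE)))) { continue; }
        let voxel = u32(v.x) + u32(v.y) * GRID_SIZE + u32(v.z) * GRID_SIZE * GRID_SIZE;
        // each voxel holds at most 4 particle slots
        for (var s = 0u; s < 4u; s++) {
          let index = grid[voxel * 4u + s];
          if (index == 0u) { continue; }
          let neighbour = positions[index - 1u].xyz;
          // density / collision evaluation against `neighbour` would go here
          count++;
        }
      }
    }
  }
  return count;
}
```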
Traversing the neighbourhood can be quite expensive, and the SPH algorithm requires several passes over all the particles: you need to calculate the density for all the particles, then the pressure and displacement for all of them, another pass is required for the viscosity, and a fourth pass for the surface tension… Performance becomes a real concern when you realise that you might be spending all the processing power of the GPU just driving the particles.
SPH also requires a lot of tweaking, just like the flocking algorithm, and the tweaking has to be done using physically correct parameters to make something visually compelling. You end up trying to understand many engineering parameters, which makes things hard, sometimes really hard.
Luckily, NVIDIA released a different approach for physics dynamics over particles called Position Based Dynamics. This is a group of different particle simulations, which include (among others):
- rigid bodies
- soft body deformations with shape matching
- fluid simulations
- particle collisions
Position based dynamics modify the particles’ positions using constraints, the restrictions that govern the movement of each particle for each type of simulation. The results are very stable and much easier to tweak than SPH, which made me switch from SPH to PBF (position based fluids). The concept is similar, but the main difference is that PBF relies on constraints to define the displacements of each particle instead of calculating densities.
PBF makes things easier since it removes many physical concepts and replaces them with dimensionless parameters (think of the Reynolds number, but much easier to understand).
There’s one caveat though… position based dynamics use an iterative method for every step, meaning that you would have to calculate the constraints, apply the displacements and calculate the viscosity more than two times to get a nice result. It’s more stable than SPH, but it’s actually slower. That being said… if you understand flocking… you understand SPH. And if you understand SPH, then you’ll find PBF a breeze.
Unfortunately, I don’t want to render just particles, I want to render a mesh, which requires the GPU to calculate the triangles and render them accordingly; this means I don’t have the luxury of using the GPU to calculate several steps in an iterative fashion… I needed to cut corners and simplify the simulation.
Luckily, position based dynamics offer a very cheap way to evaluate particle collisions; it only requires a single pass once you apply the desired forces over the particles. So I decided to use gravity as the main force, implement curl noise as a secondary force to give the particles some feeling of fluidity, include a very strong repulsion force driven by the mouse, and let the collisions do the magic.
The curl noise and gravity provide the desired “fluid effect”, and the collisions keep the particles from clumping into weird clusters; it’s not as good as PBF, but it’s much faster to calculate. The next video shows a demo of the resulting effect.
The implementation only requires a single pass to apply all the desired forces to the particles; this pass is also responsible for generating the grid acceleration structure inside a storage buffer. Atomics are used to write each particle’s index into the corresponding memory address, which only takes a few lines of code. You can read the implementation of the forces and the grid acceleration inside the PBF_applyForces.wgsl shader from the repository.
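To make the atomics part concrete, here is a hedged sketch of how such a combined pass can look; it is not PBF_applyForces.wgsl, the mouse repulsion is omitted for brevity, and the curl noise helper is a cheap stand-in:

```wgsl
// Sketch of a single forces + grid-build pass (illustrative, not the repo's code).
// The counters buffer holds one atomic per voxel.

@group(0) @binding(0) var<storage, read_write> positions : array<vec4<f32>>;
@group(0) @binding(1) var<storage, read_write> velocities : array<vec4<f32>>;
@group(0) @binding(2) var<storage, read_write> counters : array<atomic<u32>>;
@group(0) @binding(3) var<storage, read_write> grid : array<u32>;

const GRID_SIZE = 64u;
const DT = 0.016;
const GRAVITY = vec3<f32>(0.0, -9.8, 0.0);

fn curlNoise(p : vec3<f32>) -> vec3<f32> {
  // placeholder for a real curl noise field, just a cheap swirl for the sketch
  return vec3<f32>(sin(p.y), sin(p.z), sin(p.x));
}

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id : vec3<u32>) {
  let i = id.x;
  if (i >= arrayLength(&positions)) { return; }

  // apply the external forces: gravity plus the curl noise field
  var velocity = velocities[i].xyz + (GRAVITY + curlNoise(positions[i].xyz)) * DT;
  let predicted = clamp(positions[i].xyz + velocity * DT,
                        vec3<f32>(0.0), vec3<f32>(f32(GRID_SIZE) - 1.0));
  positions[i] = vec4<f32>(predicted, 1.0);
  velocities[i] = vec4<f32>(velocity, 0.0);

  // write this particle's index into its voxel, up to 4 slots per voxel
  let v = vec3<u32>(predicted);
  let voxel = v.x + v.y * GRID_SIZE + v.z * GRID_SIZE * GRID_SIZE;
  let slot = atomicAdd(&counters[voxel], 1u);
  if (slot < 4u) {
    grid[voxel * 4u + slot] = i + 1u; // index + 1 so that 0 means "empty"
  }
}
```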
The particle positions are updated using another shader called PBF_calculateDisplacements.wgsl; this shader is responsible for calculating the collisions while traversing the neighbourhood, and also for evaluating the collisions of the particles against the environment (the invisible bounding box).
The corresponding pipelines and bindings are defined inside the PBF.js module; the whole simulation uses only three shaders: the force application, the displacement update and finally the velocity integration, which is also part of position based dynamics. Once the positions are updated, the final velocities are calculated from the difference between the new position and the previous one.
This last shader, called PBF_integrateVelocity.wgsl, is also used to set up the 3D texture containing all the particles, which will later be used to calculate the potential field for the marching cubes algorithm.
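A minimal version of that integration step could look like this (illustrative, not the repo’s shader):

```wgsl
// Illustrative velocity integration: the velocity is recovered from the
// positional change, as in position based dynamics.

@group(0) @binding(0) var<storage, read> previousPositions : array<vec4<f32>>;
@group(0) @binding(1) var<storage, read> positions : array<vec4<f32>>;
@group(0) @binding(2) var<storage, read_write> velocities : array<vec4<f32>>;

const DT = 0.016;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id : vec3<u32>) {
  let i = id.x;
  if (i >= arrayLength(&positions)) { return; }
  let v = (positions[i].xyz - previousPositions[i].xyz) / DT;
  velocities[i] = vec4<f32>(v, 0.0);
  // the same pass can also splat the particle into the 3D texture used
  // later to build the potential field for the marching cubes
}
```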
Marching Cubes (Geometry Generation)
The first time I got the particles working with SPH I got so excited that I spent a few days bragging about it at the office (well, just about everywhere); it was an okay result but my ego was through the roof… Luckily, I was working with Felix, who had just the right medicine for it: he knew that the only way for me to stop bragging was to start working again, so he pushed me to start implementing the surface generation to render the fluids as liquids, not just as particles.
I didn’t really know where to start; there were different options for rendering surfaces from particles, among them the following:
- Point Splatting
- Ray Marching
- Marching Cubes
Point splatting is the easiest and fastest way to generate a surface from a particle field; it’s a screen-space effect that renders the particles and uses separable blurs and depth information to generate the normals from the rendered particles. The results can be quite convincing and you can achieve many effects, even caustics. To be honest, it’s the best solution for real time.

Ray marching is very interesting in the sense that it allows complex effects like reflections and refractions with multiple bounces, but it’s really slow performance-wise: you have to generate a distance field from the particles and then traverse that field, which required software trilinear interpolation because there were no 3D textures when I started working on it. And even with hardware trilinear interpolation the performance isn’t very good. It’s amazing visually, but not a good solution for real time.
Marching cubes sounded like an interesting approach; the idea is to generate a mesh from a potential field derived from the particles. The nice part is that the mesh can be rasterised, which means it can be rendered at high screen resolutions, and you can also use the mesh to get reflection effects “for free”, as in the current example. You include the mesh in the scene without worrying about how to integrate the result, unlike the previous two options.
Three.js did have some examples using marching cubes, but the surface was generated on the CPU while the particle data lived on the GPU, so I started studying Matt Swoboda’s presentation about how he managed to implement the marching cubes algorithm on the GPU. Unfortunately, there were many steps I needed to understand first.
How could I generate a potential from a particle field? What was he talking about when he mentioned indirect dispatch? How could I actually generate the triangles using the GPU? There were too many questions, which kept me busy and freed Felix from listening to me bragging again.
Let’s talk about the different steps required for the whole implementation; you can read the marching cubes theory here. First of all, the marching cubes algorithm is a way to create an isosurface from a potential field, which means that the most important thing is to generate the required potential from the particles. The next step is to evaluate the potential over a voxel grid; the idea is to check the potential value at each voxel and use it as an input to select one of the 256 possible triangle combinations defined by marching cubes, which produce from 0 up to 5 triangles inside each voxel.
On the CPU this is straightforward, since you can place the relevant voxels in an array and generate the triangles only for those voxels. On the GPU the voxels are scattered inside a 3D texture, so you have to use atomics to reallocate them inside a storage buffer: all you have to do is increment a memory position index atomically to lay out all the information contiguously in the buffer, a classic stream compaction. Finally, one last step uses the gathered voxel information to generate the required triangles.
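Here is a minimal sketch of that atomic compaction; the struct and names are illustrative, not taken from the repository:

```wgsl
// GPU stream compaction with atomics (illustrative sketch).
// Every thread handling an "active" voxel reserves a contiguous slot
// in the output buffer by atomically incrementing a shared counter.

struct ActiveVoxel { position : vec3<u32>, marchCase : u32 };

@group(0) @binding(0) var<storage, read_write> counter : atomic<u32>;
@group(0) @binding(1) var<storage, read_write> activeVoxels : array<ActiveVoxel>;

fn compact(voxelPos : vec3<u32>, marchCase : u32) {
  // cases 0 and 255 generate no triangles, so only the rest are kept
  if (marchCase == 0u || marchCase == 255u) { return; }
  let slot = atomicAdd(&counter, 1u);
  activeVoxels[slot] = ActiveVoxel(voxelPos, marchCase);
}
```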
With the roadmap outlined, let’s dig deeper into each step.
Potential Generation
If you have read about the point splatting technique you’ll notice that a blur step is used to smooth the different points into some kind of screen-space surface; the same idea can be used with a 3D texture to generate the potential. The idea is to simply apply a 3D blur, which results in a “poor man’s” distance field.
You could also use the jump flood algorithm to generate a more correct distance field from the particles, so let’s quickly discuss the two options to understand why the blur is a good solution.
The jump flood is a great way to calculate distance fields, even for particles; it is very precise in the sense that it provides the distance to each particle considered. It also seems to be more performant than applying a 3D blur over a 3D texture, but there’s one caveat that kept it from being the ideal solution… it’s too good.
The result of this algorithm shows the distance over a group of spheres which get connected depending on the threshold used to define the isosurface, and this doesn’t smooth the result in a pleasing way. You would need such a huge number of particles that, even as particles, it would look like a surface, and if you’re in that situation then it’s better to use point splatting.
The blurring, on the other hand, smooths and spreads the particles so they act more like a surface, basically removing the high-frequency detail of the particles; the result gets smoother with more blurring steps. It gives you more control over the final surface than the jump flood algorithm. Weirdly enough, this simple approach is actually faster and more performant too. You can also apply different blurring techniques and combine the results to get different surfaces.
The blur implementation is done using a compute shader called Blur3D.wgsl, which is dispatched 3 times, once over each axis; the bindings and compute dispatch calls are defined inside the Blur3D.js file. I separated the potential generation into an isolated function since I wanted to study it and compare the jump flood results against the 3D blur results. This also allowed me to set up timestamp queries to check which solution was more performant.
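As an illustration of the idea, one axis of a separable 3D blur could look like this; a box kernel under assumed names and formats, not Blur3D.wgsl itself:

```wgsl
// One axis of a separable 3D blur (illustrative sketch). The same kernel
// is dispatched three times, permuting `axis` to cover x, y and z.

@group(0) @binding(0) var source : texture_3d<f32>;
@group(0) @binding(1) var destination : texture_storage_3d<r32float, write>;

struct Params { axis : vec3<i32>, radius : i32 };
@group(0) @binding(2) var<uniform> params : Params;

@compute @workgroup_size(4, 4, 4)
fn main(@builtin(global_invocation_id) id : vec3<u32>) {
  let coord = vec3<i32>(id);
  var sum = 0.0;
  var weight = 0.0;
  for (var o = -params.radius; o <= params.radius; o++) {
    let w = 1.0; // box weight for simplicity; a gaussian also works
    sum += textureLoad(source, coord + params.axis * o, 0).r * w;
    weight += w;
  }
  textureStore(destination, coord, vec4<f32>(sum / weight, 0.0, 0.0, 0.0));
}
```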
Checking Voxels
Once the potential is created, I use another compute shader to check which voxels will be responsible for generating triangles. The repository has a compute shader called MarchCase.wgsl; this shader is dispatched over the whole voxel grid, flagging the voxels that need to generate triangles inside them. It uses atomics to store the 3D position of the voxel and the marching cubes case for that voxel contiguously inside a storage buffer.
The EncodeBuffer.wgsl compute shader is used to read the total number of voxels from the previous step and set up the indirect dispatch call with the number of vertices to use for the triangle generation. It also encodes the indirect draw call that renders the generated triangles.
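A sketch of such an encoding pass follows; the 3-u32 dispatch layout and 4-u32 draw layout come from WebGPU’s indirect argument formats, while the buffer names and the 15-vertices-per-voxel budget (5 triangles at most) are illustrative choices:

```wgsl
// Encoding indirect dispatch / draw arguments (illustrative sketch).

@group(0) @binding(0) var<storage, read> voxelCount : u32;
@group(0) @binding(1) var<storage, read_write> dispatchArgs : array<u32, 3>;
@group(0) @binding(2) var<storage, read_write> drawArgs : array<u32, 4>;

@compute @workgroup_size(1)
fn main() {
  // up to 5 triangles (15 vertices) per active voxel, one thread per vertex
  let vertexCount = voxelCount * 15u;
  dispatchArgs[0] = (vertexCount + 63u) / 64u; // workgroups of 64 threads
  dispatchArgs[1] = 1u;
  dispatchArgs[2] = 1u;

  drawArgs[0] = vertexCount; // vertices to draw
  drawArgs[1] = 1u;          // one instance
  drawArgs[2] = 0u;          // first vertex
  drawArgs[3] = 0u;          // first instance
}
```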
Triangle Generation
The shader responsible for this is called GenerateTriangles.wgsl; it uses the global invocation index of each thread to define the corresponding voxel and vertex to evaluate, and it’s dispatched using the indirect dispatch command, which is set up with the encoded buffer created by the EncodeBuffer.wgsl shader.
The voxel information is used in the shader to calculate linear interpolations between the corners of each crossed edge of the voxel, placing the new vertex on the edge defined by the march case. The normal is calculated as the linear interpolation of the potential’s gradient at the two corners of the corresponding edge.
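In isolation, the interpolation can be sketched like this (illustrative helpers, assuming the potential values and gradients at the two edge corners are already sampled):

```wgsl
// Per-edge vertex interpolation for marching cubes (illustrative sketch).
// `corner0` / `corner1` are the corner positions of the edge selected by
// the march case; `v0` / `v1` are the potential values sampled at them.

const ISO_LEVEL : f32 = 0.5;

fn interpolateVertex(corner0 : vec3<f32>, corner1 : vec3<f32>, v0 : f32, v1 : f32) -> vec3<f32> {
  // parameter where the potential crosses the iso level along the edge
  let t = clamp((ISO_LEVEL - v0) / (v1 - v0), 0.0, 1.0);
  return mix(corner0, corner1, t);
}

// the normal blends the potential gradients with the same crossing parameter
fn interpolateNormal(g0 : vec3<f32>, g1 : vec3<f32>, t : f32) -> vec3<f32> {
  return normalize(mix(g0, g1, t));
}
```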
The different steps (potential generation, voxel retrieval and triangle generation) are defined inside the generateTriangles function from the TrianglesGenerator.js file. This function is called whenever the particle simulation is resolved and new positions are generated.
Rendering
One of the big mistakes I’ve made over the years is to think that the simulations or GPGPU techniques were more important than the visual aesthetics; I was so concerned with showing off that I could make complex things that I didn’t pay attention to the final result.
Over the years Felix always tried to stop me before releasing a demo of the things I was working on; many times he tried to convince me to spend more time polishing the visuals, to make them more pleasing, not just a technical thing that only four guys would appreciate.
Trust me on this one… you can make amazing simulations with physics and crazy materials, but if it looks like crap… it is just crap.
The issue with fluid simulations is that you spend a lot of GPU time on the position based dynamics and the surface generation, so you don’t have many resources left to put nice rendering effects on top. Your timing budget also has to account for everything else in your scene, so fluids, in general, are not something you can do with great visual quality in real time.
The best option for rendering liquids in real time is point splatting: it allows you to render the fluids with reflections, refractions, shadows and caustics too; the results can be quite convincing and they can be really “cheap” in terms of performance. If you don’t trust me, take a look at this amazing demo implementing the point splatting technique: https://webgpu-ocean.netlify.app
For non-transparent or translucent liquids like paint, marching cubes is a good approach: you can use a PBR material and get very nice visuals, and the best part is that it all gets integrated in world space, so you don’t have to worry too much about integration with the rest of the scene.
For the scope of this demo I wanted to explore how to make things visually interesting in a way that exploited the fact that I have a voxel structure holding the triangles, and that the potential which generates those triangles can be used as a distance field.
The first thing I explored was implementing ambient occlusion with Voxel Cone Tracing (VCT). It turns out that the VCT algorithm requires voxelizing triangles into a voxel grid, but the current demo does things the other way around: it uses marching cubes to generate triangles from a voxel grid. This means that part of the VCT algorithm is already implemented in the code.
All I had to do was update the MarchCase.wgsl compute shader so it also updates the voxel grid, setting up the voxel data with a discretisation method where the voxels containing triangles are marked with 1 and the voxels with no triangles are marked with 0 for the occlusion. I also marked with 0.5 all the voxels below a certain height to simulate the ambient occlusion of the floor. It only took two more lines of code to set up the VCT information.
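A hedged sketch of that occupancy write, with assumed names and an assumed floor height, could be:

```wgsl
// Occupancy write added to the march-case pass (illustrative sketch):
// 1.0 where the voxel holds triangles, 0.5 below the floor line, 0.0 elsewhere.

@group(0) @binding(0) var occupancyTexture : texture_storage_3d<r32float, write>;

const FLOOR_HEIGHT : u32 = 4u;

fn writeOccupancy(voxel : vec3<u32>, marchCase : u32) {
  var occupancy = select(0.0, 1.0, marchCase != 0u && marchCase != 255u);
  occupancy = max(occupancy, select(0.0, 0.5, voxel.y < FLOOR_HEIGHT));
  textureStore(occupancyTexture, vec3<i32>(voxel), vec4<f32>(occupancy, 0.0, 0.0, 0.0));
}
```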
Once the voxel grid is updated, I only have to implement a mipmapping pass for the 3D texture, which is done using the MipMapCompute.wgsl compute shader; the mipmap bindings are defined inside the CalculateMipMap.js file. The results can be seen in the next video.
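One reduction step of such a 3D mipmap chain could be sketched as follows (illustrative, not MipMapCompute.wgsl): each destination voxel averages its 2x2x2 source block, which is what the cone sampling reads at coarser levels:

```wgsl
// One 3D mipmap reduction step (illustrative sketch).

@group(0) @binding(0) var source : texture_3d<f32>;
@group(0) @binding(1) var destination : texture_storage_3d<r32float, write>;

@compute @workgroup_size(4, 4, 4)
fn main(@builtin(global_invocation_id) id : vec3<u32>) {
  let dst = vec3<i32>(id);
  let src = dst * 2;
  var sum = 0.0;
  for (var x = 0; x < 2; x++) {
    for (var y = 0; y < 2; y++) {
      for (var z = 0; z < 2; z++) {
        sum += textureLoad(source, src + vec3<i32>(x, y, z), 0).r;
      }
    }
  }
  textureStore(destination, dst, vec4<f32>(sum / 8.0, 0.0, 0.0, 0.0));
}
```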
Notice that in the previous video I’m also rendering the floor reflections; this is simple to implement with marching cubes since I already have the triangles for the mesh, so all I have to do is calculate the reflection matrix and render the triangles twice. This would be far more expensive if I tried to render the same result using ray marching.
The results were interesting and I still had some GPU budget to add more features to the material, so I asked around to see what could be an interesting thing to do. One friend told me it would be amazing to implement subsurface scattering for the material, like in the image below.

Subsurface scattering is one of those effects that, done well, can elevate the visuals much like reflections and refractions; it’s quite impressive and kind of tricky too. The reason it’s difficult to implement in some cases is that it requires knowing the thickness of the geometry to establish how much light from the light source will be scattered.
Many subsurface scattering demos use a thickness texture for the geometry, but for fluids the thickness can’t be baked. That’s the tricky part: gathering the thickness in real time.
Luckily, the demo already creates a potential which can be used as a distance field to retrieve the thickness of the surface in real time; the concept is quite similar to the ambient occlusion implementation by Iñigo Quilez. He uses ray marching over the distance field to check how close the surface is to the ray fired at every step of the marching process; this way he can tell how the geometry occludes the light received at the point that fires the ray.
I decided to do the same thing, but firing the rays inside the geometry; that way I could see how the geometry occludes light travelling inside the mesh, showing me the regions where light would not travel freely, preventing the scattering. The results were really promising, as you can see in the next video.
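A sketch of that idea: march a few steps along the inverted normal and accumulate how far “inside” the potential stays. The texture names, step count and remapping are all illustrative assumptions, not the demo’s exact code:

```wgsl
// Thickness estimation by marching inside the mesh (illustrative sketch).
// Assumes `position` is given in normalised 3D texture coordinates.

@group(0) @binding(0) var potentialTexture : texture_3d<f32>;
@group(0) @binding(1) var linearSampler : sampler;

const ISO_LEVEL = 0.5;
const STEPS = 5u;
const STEP_LENGTH = 0.05; // in normalised 3D texture coordinates

fn insideAmount(p : vec3<f32>) -> f32 {
  // above the iso level means inside the surface
  return max(0.0, textureSampleLevel(potentialTexture, linearSampler, p, 0.0).r - ISO_LEVEL);
}

fn thickness(position : vec3<f32>, normal : vec3<f32>) -> f32 {
  var result = 0.0;
  for (var s = 1u; s <= STEPS; s++) {
    let p = position - normal * STEP_LENGTH * f32(s);
    result += insideAmount(p);
  }
  return saturate(result / f32(STEPS));
}
```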
The material for the geometry is defined inside the RenderMC.wgsl file; it implements the vertex shader that reads the storage buffers containing the positions and normals of the triangle vertices. The geometry is rendered using an indirect draw command with the storage buffer encoded by the EncodeBuffer.wgsl compute shader, since the CPU has no information about the number of triangles generated with the marching cubes.
The bindings are generated to use two different matrices so the geometry is rendered twice: one matrix for the regular view, the other for the reflected geometry. All of this is done inside the Main.js file.
So far the simulation is running, the surface is generated and there’s a material implemented for it; now it’s time to think in terms of composition.
Composition
So you might think you are a great graphics developer: you’re working with Three.js, Babylon.js or PlayCanvas and doing cool visuals… Or you might actually be a great developer doing things on your own, also making cool visuals…
Let me tell you something… I’m not.
How do I know that?
Well… I was lucky enough to work at Active Theory (https://activetheory.net/) with amazing graphics developers and 3D artists who showed me my limitations and also helped me push forward the end product I was delivering. If there’s one thing you can do for yourself and your career, it’s to try to work with people like that; trust me, you’ll learn many things that will improve your work in ways you never imagined.
Among those things… composition is everything!
So I asked Paul-Guilhem Repaux, who I used to work with at Active Theory (https://x.com/arpeegee), to help me with the composition, since I know it isn’t my strongest attribute.
In terms of composition, he pointed out that the previous video examples showed some deficiencies that needed to be solved:
- The reflection on the floor is too well defined; it would be useful to have some roughness in the floor reflection.
- The black background doesn’t reflect where the light comes from. The background should convey a better mood to make it playful.
- There are no lighting effects that integrate the geometry with the environment.
- The composition should have a justified transition between the letters.
- The composition requires colour correction.
And trust me, there are many more things that could be improved; Paul was just kind enough to pinpoint only the critical ones.
Reflections
The first issue can be solved with post-processing: the idea is to apply a blur to the reflection, using the distance from the geometry to the ground to set the intensity of the blur. The farther the geometry is from the floor, the more intense the blur applied, which provides the roughness effect.
The only issue with this solution is that the blur will only be applied in the areas where there is geometry, since that is where the height is defined, meaning there will be no blurring in the surroundings of the geometry, which makes the result look weird.
To overcome this, a pre-processing pass is done where an offset from the reflected geometry is saved inside a texture; this offset stores the nearest height value from the geometry in order to define how much blurring should be applied in the empty space surrounding the reflected geometry. The next video displays the offset pass.
The dark red geometry represents the non-reflected geometry, while the green fragments represent the reflected geometry including the offsetting; notice that the green reflection is thicker than the red one. Once the offset texture is created, the result is used in a post-processing pass that blurs only the areas marked by the offset in green. The height is encoded in the red channel, where you can visualise the height from the floor as a gradient.
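The blur pass driven by that offset texture could be sketched like this (illustrative names; the red channel scales the blur radius as described above):

```wgsl
// Height-driven reflection blur (illustrative sketch, not the demo's pass).

@group(0) @binding(0) var reflectionTexture : texture_2d<f32>;
@group(0) @binding(1) var offsetTexture : texture_2d<f32>;
@group(0) @binding(2) var output : texture_storage_2d<rgba8unorm, write>;

const MAX_RADIUS = 8;

@compute @workgroup_size(8, 8)
fn main(@builtin(global_invocation_id) id : vec3<u32>) {
  let coord = vec2<i32>(id.xy);
  // radius grows with the encoded height: farther from the floor, blurrier
  let radius = i32(textureLoad(offsetTexture, coord, 0).r * f32(MAX_RADIUS));
  var color = vec4<f32>(0.0);
  var total = 0.0;
  for (var x = -radius; x <= radius; x++) {
    for (var y = -radius; y <= radius; y++) {
      color += textureLoad(reflectionTexture, coord + vec2<i32>(x, y), 0);
      total += 1.0;
    }
  }
  textureStore(output, coord, color / max(total, 1.0));
}
```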
Background and Lighting
The subsurface scattering assumes that the lighting comes from behind the geometry at all times; even with the camera moving, the light seems to come from the back, otherwise the subsurface scattering effect wouldn’t be so noticeable.
That’s actually really useful in terms of lighting, since the background can apply a gradient that represents a light source placed behind the geometry, justifying the light direction coming from the back. The background colour should also be similar to the material’s colour for better lighting integration, which is easy to do, as you can see in the next video.
Lighting Integration
The last thing to do is to provide some lighting integration between the background and the geometry; the backlight defined by the background gradient justifies how the subsurface scattering is implemented, but the final result can be enhanced using a bloom effect. The idea is to use the bloom to provide a halo that gets stronger where the geometry is thinner, making the subsurface scattering effect much stronger, as seen in the next video.
If you take a deeper look at the previous video you’ll notice that I also explored how to match the letter animations with the Codrops logo; this was done by animating each letter of the logo to pair it with the liquid letter. The idea was discarded because it looked like a children’s application for learning how to read.
Transitions
Transitions are important in the sense that they provide the timing for the interactions; the concept behind the transitions is to reinforce the idea of the letters somehow mutating, which made me work with different types of transitions. I tried the liquid floating with no gravity and then forming the new letter, as displayed in the next video.
I also tried another transition where the letters would be generated by a guided flow, as you can see below.
None of those transitions made sense in my head because there was no concept behind them, so I started playing with the idea of falling, suggested by the word “drops” in “Codrops”, and things started to fall into place. You can see how the letters transition in the next video.
The next videos also show how I tried to implement the same falling transition for the background; the motivation was to reinforce the idea of everything falling in order to transition to the new letter. I tried many different background transitions, as you can see, and also tested different types of letters.
The first background transition was discarded because it looked a lot like the old “scanline” renderers from 3ds Max.
The idea behind the second background transition is that the new letter is built by columns raising it from the falling liquid. It was discarded because it interfered too much, visually, with the user’s interaction with the letter.
Colour Correction and Mood
I also added brightness, contrast and gamma correction to the final result, where the mood is established by selecting a warm colour palette for the background and the letters. All the post-processing is done using different compute shaders, which are invoked inside the Main.js file.
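For reference, a brightness/contrast/gamma grade in this spirit can be as small as the following sketch (the values are illustrative, not the demo’s):

```wgsl
// Simple colour grade (illustrative sketch).

const BRIGHTNESS = 0.02;
const CONTRAST = 1.1;
const GAMMA = 2.2;

fn colorCorrect(color : vec3<f32>) -> vec3<f32> {
  var c = color + vec3<f32>(BRIGHTNESS);                // brightness: uniform offset
  c = (c - vec3<f32>(0.5)) * CONTRAST + vec3<f32>(0.5); // contrast: scale around mid grey
  return pow(max(c, vec3<f32>(0.0)), vec3<f32>(1.0 / GAMMA)); // gamma correction
}
```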
Browse the full code base. For a simplified version, take a look at this repo. You can change the word shown in the demo by adding /?word=something to the end of the demo URL.
Some Final Words
There are many things I didn’t talk about, like optimisation and performance, but I consider that unnecessary since this demo is meant to run on good GPUs, not on mobile devices. WebGPU has timestamp queries, which make it quite easy to find bottlenecks and make things more performant; you can see how to do so by reading the Blur3D.js file, which has the queries commented out.
This doesn’t mean this kind of work can’t be used for production: Felix did manage to make a great exploration of SPH with letters which is very performant and also really cool; take a look at the next video to check it out.
So to wrap it up, all I can say is that after all these years Felix is still winning the bet, and I’m still trying to change the outcome… I just hope you get to meet someone who makes you say “hold my beer”.