Fluid Rendering with Babylon.js
The fluid rendering implementation is based on the classic paper from Simon Green: Screen Space Fluid Rendering for Games [1]. You can refer to it for more in-depth information, as I won't always go into much detail for some of the steps.
Note: in the pictures above and below, the particle positions come from a pre-computed water simulation (SphereDropGround package)
As the title says, it's a screen space technique, meaning that in Babylon.js it is implemented as a number of 2D post processes applied after the 3D scene has been rendered into a 2D texture.
The inputs are simply a list of 3D coordinates representing the particles. Those particles can come from any source: a regular particle system (ParticleSystem / GPUParticleSystem in Babylon.js), a real fluid simulation (the subject of another doc page) or even custom user code. As long as you are able to produce a list of 3D coordinates, you can use the fluid renderer to display them as a fluid.
The first step is to treat each particle as a sphere and project them as circles to generate their depths in a texture. Strictly speaking, a 3D sphere can project to an ellipse under a perspective projection, but in practice sticking to circles works fine and is cheaper to compute.
Here's a diffuse rendering to better see the particles:
The depth must be a linear depth so that the blur post processes work as expected (see next step)! So, the depth texture is created by recording the view space z coordinate of each projected point of the sphere. The shaders used to generate this texture are particleDepth.vertex.glsl / particleDepth.fragment.glsl.
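As a rough sketch of what the depth fragment shader computes for each pixel of the projected circle (conventions and signs may differ in the actual shader): the quad uv coordinates are remapped to $[-1, 1]$, pixels outside the unit circle are discarded, and the view space depth of the corresponding point on the sphere of radius $R$ centered at view space depth $Z_{center}$ is written out:

$$
n_{xy} = 2\,uv - 1,\qquad r^2 = n_x^2 + n_y^2\ (\text{discard if } r^2 > 1),\qquad n_z = \sqrt{1 - r^2}
$$

$$
Z_{view} = Z_{center} \pm R\,n_z
$$

where the sign of the last term depends on the view space orientation (the point of the sphere nearest to the camera is the one we want).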
The normals used for the diffuse (and final) rendering are computed from the depth and the uv coordinates. The view space depth as well as the uv coordinates are converted to NDC (Normalized Device Coordinates) space, and these coordinates are transformed to view space by multiplying by the inverse projection matrix. We will describe the computation that converts Z from view space to NDC space a little further in the text, as it is a question that comes up often on the Babylon.js forums.
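Here is a sketch of that reconstruction (the actual shader may differ in its details): with $(u, v) \in [0,1]^2$ the texture coordinates and $z_{ndc}$ obtained from the stored linear depth,

$$
p_{ndc} = (2u - 1,\ 2v - 1,\ z_{ndc}),\qquad p_{view} = \frac{P^{-1}\,(p_{ndc}, 1)}{\big(P^{-1}\,(p_{ndc}, 1)\big)_w}
$$

and the normal is obtained from the cross product of the screen space derivatives of the reconstructed view space position (computed either with dFdx/dFdy or from neighboring texels):

$$
n = \operatorname{normalize}\!\left(\frac{\partial p_{view}}{\partial x} \times \frac{\partial p_{view}}{\partial y}\right)
$$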
Blurring the depth is the key step of the technique; that's how you get the "fluidistic" look in the final rendering:
Without blurring | With blurring |
---|---|
As explained in [1] (slides 20-31), a standard 2D gaussian blur won't fit the bill, because you could end up blurring particles that are far away from each other in the z dimension if you only consider the 2D dimensions of the texture. A bilateral filter takes into account the Z depth of each texel to blur: it applies more blurring for small Z differences and less/no blurring when the Z values differ too much. Also, the filtering should blur by the same amount regardless of the distance to the camera: if particles take up a sizeable part of the screen (say 200x200 pixels) because they are near the camera, the filter kernel size should be bigger than if they are far away (say 4x4 pixels). In other words, the filter kernel size should adapt to the depth of the texel we are currently blurring. We will describe the computation a bit further in this document.
Note that for performance's sake, the bilateral filtering is done by first applying a filter in the X dimension, then another filter in the Y dimension. That's not mathematically correct, as bilateral filtering is not separable. Doing two passes produces some artifacts, but they are normally not visible / acceptable, and it would be too GPU intensive to do otherwise anyway: it's a tradeoff that favors performance at the expense of quality.
The shader used to implement the bilateral filtering is bilateralBlur.fragment.glsl.
Reference [2] helped me a lot in setting up this shader correctly (notably for adapting the kernel size to the projected particle size).
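For reference, a typical weight used in each 1D pass combines a spatial gaussian term with a depth (range) term; this is only a sketch, the actual shader (adapted from [2]) may use a different falloff:

$$
w(i) = \exp\!\left(-\frac{i^2}{2\sigma_{space}^2}\right)\exp\!\left(-\frac{(z_i - z_0)^2}{2\sigma_{depth}^2}\right),\qquad z_{blurred} = \frac{\sum_i w(i)\,z_i}{\sum_i w(i)}
$$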
Once we have a blurred depth texture, we can render the fluid.
As explained above, the normal is computed from the depth and the uv coordinates (more details in [1], slides 17/18).
Here are the normals computed for the sample scene:
For fluid surfaces, it's important to implement the Fresnel effect, which is responsible for the higher reflections at grazing angles (again, see [1], slide 39). Also, for a more accurate rendering, we use Beer's law to compute the refraction color: it states that the light intensity in a volume decreases exponentially as the light travels through it, I = exp(-kd), with d being the distance travelled by the light and k the absorption factor (which depends on the volume color). For a basic rendering, we can use a fixed value for d (the thickness of the volume) for the entire volume. You end up with something like:
The shader used to implement the final fluid rendering is renderFluid.fragment.glsl. Note that there is also a specific version for WebGPU (renderFluid.fragment.wgsl) because depth textures must be declared with a specific type (texture_depth_2d) that can't be inferred automatically by the GLSL => WGSL converter (the depth texture we are talking about here is the depth buffer of the scene, not the depth texture of the particles! More on this later).
We can do better by computing a thickness for each texel (see next step), but if you are GPU limited, using a fixed thickness is a cheaper way to render fluid surfaces.
Instead of using a fixed thickness for the whole volume, we can try to compute it for each texel. This is achieved much like for the depth texture, but instead of generating a depth value for each point of the sphere (particle) we write a thickness value. To accumulate the values, we enable alpha blending (with the ADD function) and disable Z-testing so that all particles can contribute to the texture: even if a particle is not visible from the camera (because it is hidden behind other particles), it should still contribute to the thickness computation. Then, in the final rendering pass, we use this texture to retrieve the thickness value used in the Beer's law equation.
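Conceptually, the render state used while accumulating the particles into the thickness texture looks like this (a simplified sketch using the public engine API; the actual implementation sets this state up internally, and the exact blend constant it uses is an assumption here):

```javascript
// "engine" is the BABYLON.Engine instance.
engine.setAlphaMode(BABYLON.Constants.ALPHA_ONEONE); // pure additive blending: thickness values add up
engine.setDepthBuffer(false);                        // no depth test: occluded particles still contribute
engine.setDepthWrite(false);                         // and no depth write either

// ... draw every particle as a projected sphere, each fragment outputting a small
// thickness value (roughly the chord length through the sphere at that pixel) ...

// Restore the default state afterwards.
engine.setAlphaMode(BABYLON.Constants.ALPHA_DISABLE);
engine.setDepthBuffer(true);
engine.setDepthWrite(true);
```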
Thickness texture:
As you can see, the resolution is low (the thickness texture is 256x256 in this screenshot). That's okay because the thickness is a low frequency signal: the values are smooth and don't vary much from one point to the next. In any case, you should try to use a texture as small as possible for the thickness texture because it is quite fill rate intensive. Also, as for the depth texture, we apply a blur to this texture. This time, however, a standard 2D gaussian blur is enough; we don't need bilateral filtering.
Blurred thickness texture:
Final rendering with a blurred thickness texture:
Et voilà! That's what you need to do to render a set of particles as a fluid surface.
This is an additional step which is not described in [1].
Instead of using a single color for the whole volume of fluid, it's possible to use a different color for each pixel by means of a diffuse texture. This texture is generated by rendering the particles as spheres (as for the depth/thickness textures) but using a vertex color associated with each particle to draw the sphere. So, the inputs are now a list of 3D coordinates + a list of colors.
For example, here are two renderings of the "Dude" model as a fluid, with and without the diffuse texture:
Without the diffuse texture | With the diffuse texture |
---|---|
The generated diffuse texture:
The list of 3D coordinates for the dude has been generated thanks to the PointsCloudSystem class, which can generate points spread over a mesh and is also able to generate vertex colors for these points by sampling the diffuse texture of the material.
The shaders used to generate this texture are particleDiffuse.vertex.glsl / particleDiffuse.fragment.glsl.
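For illustration, here is a sketch of how such a list of positions and colors could be produced with PointsCloudSystem (the mesh name, the point count and the way the data is extracted afterwards are assumptions made for the example):

```javascript
// "dudeMesh" is assumed to be an already loaded mesh with a textured material.
// (This code is assumed to run inside an async function.)
const pcs = new BABYLON.PointsCloudSystem("pcs", 1, scene);

// Spread 20000 points over the mesh surface; PointColor.Color samples the
// material's texture to assign a color to each generated point.
pcs.addSurfacePoints(dudeMesh, 20000, BABYLON.PointColor.Color);

await pcs.buildMeshAsync();

// The generated positions/colors end up in the vertex data of the built mesh;
// they can then be handed over to the fluid renderer (see the API section below).
const positions = pcs.mesh.getVerticesData(BABYLON.VertexBuffer.PositionKind);
const colors = pcs.mesh.getVerticesData(BABYLON.VertexBuffer.ColorKind);
```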
The next section delves deeper into some of the steps described above and into some others that I skipped over (such as the integration of the fluid into an existing scene).
In the final rendering shader, we need to convert a view depth to NDC depth several times. Here's how to make the conversion.
The projection matrix used to transform points from view space to clip space in Babylon.js is the usual perspective projection matrix. The z coordinate in view space is transformed to clip space by the third column of this matrix, and the clip space z coordinate is then transformed to NDC space by dividing by the clip space w coordinate (which is simply Zview). The resulting formula is the one used in the rendering shader.
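Written out (a sketch, using Babylon.js' row-vector convention $v_{clip} = v_{view} P$; a, b, c and d denote the coefficients of the perspective matrix, and the exact values of c and d depend on the near/far planes and on the NDC depth convention of the target API):

$$
P = \begin{pmatrix} a & 0 & 0 & 0 \\ 0 & b & 0 & 0 \\ 0 & 0 & c & 1 \\ 0 & 0 & d & 0 \end{pmatrix},\qquad b = \frac{1}{\tan(fov/2)},\qquad a = \frac{b}{aspect}
$$

$$
z_{clip} = c\,Z_{view} + d,\qquad w_{clip} = Z_{view}
$$

$$
z_{ndc} = \frac{z_{clip}}{w_{clip}} = c + \frac{d}{Z_{view}}
$$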
As explained above, we should choose the kernel size for the bilateral blur so that particles are blurred by the same amount whether they are near or far from the viewer.
To this end, we should make the kernel size dependent on the projected size (on the screen) of the particle: a particle A which projects to twice the number of pixels of particle B should have a kernel size twice as big.
So, let's project the particle coordinates from view space to screen space (note that we do the computation on the Y coordinate and not X, because the b coefficient is a little simpler than the a coefficient in the projection matrix). The screen size of the projected particle can then be derived, with d being the particle diameter:
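A sketch of the computation, with height the vertical resolution in pixels (the exact expression in the implementation may differ slightly):

$$
y_{ndc} = \frac{b\,Y_{view}}{Z_{view}},\qquad y_{screen} = \left(\frac{y_{ndc}}{2} + \frac{1}{2}\right) height
$$

$$
S \approx \frac{d\,b}{Z_{view}} \cdot \frac{height}{2} = \frac{d\,b\,height}{2\,Z_{view}}\ \text{pixels}
$$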
Note that everything in this expression can be precomputed, save for Zview. blurFilterSize is the user parameter used to adjust the blurring, and 0.05 is a scaling factor chosen so that blurFilterSize = 10 is a good default value. To compute the kernel size, the bilateral blur shader essentially divides this precomputed constant by the depth of the texel being blurred:
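Something along these lines (a sketch, not the actual shader code; maxFilterSize denotes whatever upper bound is used to keep the kernel size reasonable):

$$
K = 0.05 \cdot blurFilterSize \cdot \frac{d\,b\,height}{2},\qquad filterSize = \min\!\left(maxFilterSize,\ \left\lceil \frac{K}{Z_{view}} \right\rceil\right)
$$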
The problem we want to tackle here is that the fluid should take into account the existing objects of the scene and be occluded when objects are in front of it.
So, in effect, we need to take into account the scene depth buffer when generating the fluid depth and thickness textures. Actually, we only need to add this support when generating the thickness texture because where the fluid is occluded the value in this texture will be 0, which is tested in the fluid rendering shader:
#ifndef FLUIDRENDERING_FIXED_THICKNESS
if (depth >= cameraFar || depth <= 0. || thickness <= minimumThickness) {
#else
if (depth >= cameraFar || depth <= 0. || bgDepth <= depthNonLinear) {
#endif
vec3 backColor = texture2D(textureSampler, texCoord).rgb;
glFragColor = vec4(backColor, 1.);
return;
}
As you can see, we test that thickness is at least equal to minimumThickness to avoid returning early: that lets us skip isolated fluid particles that are too thin (minimumThickness is a parameter the user can tune).
You can also see that we handle a second case where the thickness is fixed (FLUIDRENDERING_FIXED_THICKNESS is defined) and we don't use a thickness texture. In that case, we rely on a test between the depth value from the scene depth buffer (bgDepth) and the depth of the fluid (depthNonLinear). We could also have used this test in the thickness texture case, but doing it as we did is likely faster: we limit the fill rate of the thickness texture where the fluid is occluded, and we also avoid a lookup into an additional texture (the scene depth buffer).
For this scene:
Here's what you get for the thickness texture without and with the scene depth buffer handling:
Without scene depth buffer handling | With scene depth buffer handling |
---|---|
So, how do we generate and use the scene depth buffer when generating the thickness texture?
We need to understand how post processes work in Babylon.js to be able to generate this buffer and use it in the thickness texture generation.
A post process is simply a render target texture (RTT) + a shader. The shader is run over the texture and the result is generated into another texture.
The tricky part here is to understand that the texture the post process writes to is the RTT of the next post process in the chain!
So, for the i-th post process in a post process chain of n post processes:
- its RTT is filled by the post process at step i-1
- it runs its shader over its RTT and outputs the result into the RTT of the post process at step i+1
Now, you can see there's a problem for the first and last post processes, because i-1 and i+1 are not defined, respectively.
These special cases are handled like this:
- the RTT of the first post process is filled with the scene rendering. It means that when there's at least one post process, instead of rendering the scene into the default frame buffer (i.e. the screen), the system renders the scene into the RTT of the first post process of the chain
- the last post process does its rendering into the default frame buffer (i.e. the screen)
That's all we need to know to be able to retrieve the scene depth buffer.
As explained above, the first post process RTT is used to render the scene. So, if we enable generating the depth texture on this RTT we can retrieve it and reuse it later when generating the thickness texture.
It is done like this:
const firstPostProcess = camera._getFirstPostProcess();
if (!firstPostProcess) {
return;
}
firstPostProcess.onSizeChangedObservable.add(() => {
if (!firstPostProcess.inputTexture.depthStencilTexture) {
firstPostProcess.inputTexture.createDepthStencilTexture(
0,
true,
this._engine.isStencilEnable,
);
}
});
We do it in the onSizeChangedObservable event because that RTT can be recreated if you resize your browser, for example. This event is raised after firstPostProcess.inputTexture is recreated, so it's safe to call createDepthStencilTexture at that time.
Using the depth texture generated this way when generating the thickness texture is simply a matter of doing:
firstPostProcess.inputTexture._shareDepth(thicknessRT);
where thicknessRT is the RenderTargetWrapper used to generate the thickness texture.
However, there's one subtlety here: we allow the user to use a different size for the thickness texture than for the scene! So, it's possible that thicknessRT does not have the same width/height as the firstPostProcess.inputTexture RTT. In that case, you will end up with an error when trying to use the scene depth buffer with thicknessRT, because a depth texture and the regular output render texture are not allowed to have different sizes.
So, we must resize the scene depth texture so that it fits the dimensions of thicknessRT before being able to use it. A small helper class (CopyDepthTexture) does this, as it's not totally straightforward: you are not allowed to simply perform a texture copy, because copying to depth textures is not allowed (neither in WebGL nor in WebGPU). So, you must issue a texture rendering and set gl_FragDepth from the source depth texture value (reading from a depth texture is allowed in a shader) to write to the (correctly sized) output depth texture.
Note that we need to use two different shaders for WebGL and WebGPU because in WebGPU depth textures have a special type (texture_depth_2d) that the GLSL => WGSL converter can't infer automatically.
This is just an experiment (not very conclusive at the time of writing).
When enabled, the velocity of the particle is written to the depth texture along with the depth (the depth goes in the R component, the (magnitude of the) velocity in the G component). It is blurred in the same way as the depth by the bilateral blur.
In the final rendering shader it is simply used to smoothly fade the color to white when the velocity is above a given (hard-coded) limit:
vec2 depthVel = texture2D(depthSampler, texCoord).rg;
...
#ifdef FLUIDRENDERING_VELOCITY
float velocity = depthVel.g;
finalColor = mix(finalColor, vec3(1.0), smoothstep(0.3, 1.0, velocity / 6.0));
#endif
The fluid renderer has been implemented as a scene component (like the depth renderer, for example), so you can enable it by doing scene.enableFluidRenderer() (and fluidRenderer = scene.enableFluidRenderer() to retrieve a reference to the renderer) and disable it with scene.disableFluidRenderer().
The fluid renderer deals with two entities: FluidRenderingObject and FluidRenderingTargetRenderer. A FluidRenderingObject is an object you want to render as a fluid: either a particle system (FluidRenderingObjectParticleSystem) or a raw list of particles (vertex coordinates - FluidRenderingObjectVertexBuffer). FluidRenderingTargetRenderer is the class that generates the depth / thickness / diffuse textures and renders the object(s) as a fluid. You can associate multiple FluidRenderingObject instances with a single FluidRenderingTargetRenderer, or have multiple FluidRenderingTargetRenderer instances and dispatch your fluid objects among those target renderers. For performance's sake, it's better to use a single target renderer for multiple objects than to create a new target renderer instance for each object, but if the parameters of the target renderer must differ depending on the object to render, you have no choice but to create additional target renderers.
The FluidRendererGUI class lets you see the fluid objects and target renderers registered with the fluid renderer and allows you to change their properties easily:
Regarding particle systems, there's a new renderAsFluid property on the ParticleSystem class, so enabling the fluid renderer for a particle system is simply a matter of doing particleSys.renderAsFluid = true. The FluidRenderingObject and FluidRenderingTargetRenderer instances created under the hood when setting renderAsFluid = true can be retrieved by doing:
const renderObject = fluidRenderer.getRenderObjectFromParticleSystem(particleSys);
const fluidObject = renderObject.object;
const targetRenderer = renderObject.targetRenderer;
if you want to change their properties afterward. Note that you can also create a fluid object for a particle system by calling fluidRenderer.addParticleSystem().
To create a fluid object for a list of particles, use fluidRenderer.addVertexBuffer().
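For illustration, here is a sketch of how that could look (the exact addVertexBuffer signature and return value, as well as the targetRenderer property shown at the end, are assumptions and may differ; engine, scene and the particle data are placeholders):

```javascript
const fluidRenderer = scene.enableFluidRenderer();

// One xyz triplet per particle, coming from your own simulation or data source.
const numParticles = 10000;
const positions = new Float32Array(numParticles * 3);
// ... fill positions ...

// Wrap the data in a vertex buffer (3 floats per element, updatable).
const positionBuffer = new BABYLON.VertexBuffer(
    engine,
    positions,
    BABYLON.VertexBuffer.PositionKind,
    true,
    false,
    3,
);

// Assumed usage: a map of vertex buffers plus the number of particles.
const renderObject = fluidRenderer.addVertexBuffer({ position: positionBuffer }, numParticles);

// As with particle systems, the returned object exposes the fluid object and its
// target renderer so their properties can be tweaked afterwards.
renderObject.targetRenderer.fluidColor = new BABYLON.Color3(0.1, 0.4, 0.9);
```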
- [1] Screen Space Fluid Rendering for Games, by Simon Green
- [2] Implementation of the i3D2018 paper "A Narrow-Range Filter for Screen-Space Fluid Rendering" by Nghia Truong. The bilateral filtering is implemented in filter-bigaussian.fs.glsl.