Jaime Borondo

Video Game Engine / Generalist Programmer

Experienced Unity and C++ Video Game Developer, always open to interesting offers.


"Real-time" Water Caustics

In this post I will try to explain in a bit more detail what my final project for CS562 (Advanced Rendering Techniques) consisted of, as well as point out some disadvantages of this approach compared to faking caustics with animated textures.

I'll get straight to the point here. You see the quotation marks around "Real-time" in the title? I added them because the definition of real-time is fuzzy at best. I could easily claim I was going for a "cinematic experience" and that it was by design that this technique limited the simulation to 30 fps, but that would be a blatant lie. The fact is that, due to the algorithm, the geometry used to generate the caustics (the refractive geometry) needs an absurdly high vertex count if decent results are desired. So if you are looking for an efficient implementation, I suggest you look elsewhere. If, on the other hand, you wish to improve the algorithm, or are just curious as to how it was done, please read on.

I implemented this feature following the white paper Caustics Mapping: An Image-space Technique for Real-time Caustics. While this is definitely not the most technically impressive method for caustics generation, I didn't go with a more complex approach because the time limit for the implementation was shorter than I would have liked.

In broad strokes, the algorithm works as follows. Before doing anything else, we separate the scene into at least two groups: translucent objects that cause refraction and therefore influence caustics generation, and opaque objects that "receive" the generated caustics.
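As a sketch, this separation can be as simple as tagging objects and splitting them into two draw lists. The names here are illustrative, not from my actual implementation:

```cpp
#include <string>
#include <utility>
#include <vector>

// Hypothetical scene object: anything flagged refractive influences caustics
// generation; everything else is treated as a caustics receiver.
struct SceneObject {
    std::string name;
    bool refractive;
};

// Split the scene into refractive casters and opaque receivers,
// so each group can be rendered in its own pass.
std::pair<std::vector<SceneObject>, std::vector<SceneObject>>
splitScene(const std::vector<SceneObject>& scene) {
    std::vector<SceneObject> refractors;
    std::vector<SceneObject> receivers;
    for (const SceneObject& obj : scene) {
        if (obj.refractive)
            refractors.push_back(obj);
        else
            receivers.push_back(obj);
    }
    return {refractors, receivers};
}
```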

We start by placing a camera at the light that is causing the caustics, looking at a translucent object.

Here we need to store two sets of data to use later:

  • Position Texture for Receivers (if you are familiar with Deferred shading this should be trivial)
  • Position and Normal Texture for Refractive Geometry

Next comes the meaty part of the algorithm:

  • Compute the resulting refraction direction per vertex of the refractive geometry (we have the light direction, i.e. the camera's view vector, and we stored the normals in the previous pass)
  • With this refraction direction and the vertex that generated it, we estimate the intersection with the receiver geometry.
(Figure: taken from the presentation linked at the bottom of this post)

In my experience, no refinement was needed past the first iteration. Once this point is calculated, it becomes the output of the vertex shader (this is known as vertex splatting).
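The two steps above can be sketched in C++ with a GLSL-style refract() and the iterative distance estimate. One assumption to note: the receiver position texture lookup is mocked here as a flat floor at y = 0; in the real shader you would project the estimated point into the light's view and sample the receiver position texture from the first pass instead.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
float length(Vec3 v) { return std::sqrt(dot(v, v)); }

// GLSL-style refract: I and N must be normalized, eta = n1 / n2.
Vec3 refract(Vec3 I, Vec3 N, float eta) {
    float k = 1.0f - eta * eta * (1.0f - dot(N, I) * dot(N, I));
    if (k < 0.0f) return {0.0f, 0.0f, 0.0f};  // total internal reflection
    return eta * I - (eta * dot(N, I) + std::sqrt(k)) * N;
}

// Mock of the receiver position texture: every lookup lands on a floor at y = 0.
Vec3 lookupReceiverPosition(Vec3 p) { return {p.x, 0.0f, p.z}; }

// Estimate where the refracted ray hits the receivers: guess a distance d,
// fetch the receiver position under that guess, and use its distance from
// the refracting vertex as the next d. A couple of iterations usually suffice.
Vec3 estimateIntersection(Vec3 vertex, Vec3 refractedDir, int iterations) {
    float d = 1.0f;  // initial guess
    Vec3 p = vertex + d * refractedDir;
    for (int i = 0; i < iterations; ++i) {
        Vec3 receiver = lookupReceiverPosition(p);
        d = length(receiver - vertex);
        p = vertex + d * refractedDir;
    }
    return p;
}
```

With a refracting vertex at (0, 1, 0) and a refracted direction angled down toward the floor, the estimate converges on the true ray/floor intersection within a handful of iterations.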

For the fragment shader, we want to accumulate the contribution of all rays that end up on a given fragment. Each ray contributes proportionally to how much of the refractive geometry is visible (measured with an occlusion query; OpenGL has had this feature since version 1.5). We also need to take into account the absorption coefficient of the fluid the light is travelling through, as well as how far it actually travelled. The actual contribution per ray is:

c_vshader = (1.0 / occlusion_result) * dot(refractive_normal, light_direction)

c_fshader = c_vshader / e^(coeff_absorption * distance_travelled)
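As plain functions (names hypothetical), the two formulas above look like this; occlusionResult is the sample count reported by the occlusion query, and the dot product is passed in precomputed as nDotL:

```cpp
#include <cmath>

// Vertex-stage contribution: each of the occlusionResult visible refractive
// samples carries an equal share, scaled by the angle of incidence.
float vertexContribution(int occlusionResult, float nDotL) {
    return (1.0f / static_cast<float>(occlusionResult)) * nDotL;
}

// Fragment-stage contribution: attenuate by how much fluid the ray crossed.
// Dividing by e^(k * d) is the same as multiplying by e^(-k * d).
float fragmentContribution(float cVertex, float absorptionCoeff, float distanceTravelled) {
    return cVertex / std::exp(absorptionCoeff * distanceTravelled);
}
```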

You can later combine this computation with a blur of the generated caustics texture, or experiment with splatting small circular textures instead of single points; the blur certainly gave me better results.

Once you have the caustics texture, it is up to you how to modulate the final output with it. What I went with was modulating the diffuse and specular contributions, and I am relatively happy with the result given the tradeoffs that were necessary.

Here is a link to the presentation I made in class where some things might be better explained, as well as some screenshots of how the caustics texture should look and how it looks when applied to geometry.