Jaime Borondo

Video Game Engine / Generalist Programmer

Experienced Unity and C++ Video Game Developer, always open to interesting offers.

Work and Life Update

I haven't been keeping this page updated at all, and it's high time I did some housekeeping and cleanup around here. The main reason for this is that I've been fairly busy these last few months. I recently got a job as a Graduate Programmer at Pixel Toys, a small video game company in the United Kingdom.

I joined the "Drop Dead" project (a virtual reality game already released for GearVR) as part of the team tasked with porting the game to the Oculus Rift. When I joined, development was already at a very advanced stage, with most major features already implemented, but I did eventually get to contribute a few new features/reworks of existing ones. Now, a couple months after I joined, the game's Rift version has just been officially announced, and with an all-new Rift-exclusive game mode.

The game is an on-rails FPS zombie game, reminiscent of games like Time Crisis. It can be controlled using the Oculus Remote, an Xbox One controller, or the Oculus Touch Controllers. The latter is arguably the best way to experience the game: the team put a lot of effort into making the controls feel intuitive and consistent, and there are a few features of the game that you can only make full use of if you are playing with the Touch controllers.

My first week of development was a bit rough, as I had to juggle the newfound responsibilities at work with getting used to living in a foreign country. However, I soon found myself settling into my new schedule and actively contributing to the game's codebase.

The work is very rewarding, and after just a day of installing and setting up the software I would need, I started getting my first tasks assigned. Since I joined at the end of the product's development lifecycle, most of the things I was tasked with were bugfixes, but that didn't bother me one bit: it's a wonderfully effective way of getting to know the codebase you are working with, along with its advantages and limitations.

Working on this game was my very first experience with virtual reality, and I was honestly amazed at the possibilities it opens up for gaming. It allows for an entirely new level of immersion and interaction; so much so that I often found myself trying to duck incoming projectiles, or flinching when zombies attacked me from every side.

Make sure to check the game out when it releases this spring if you have a VR-ready rig or a GearVR, and feel free to leave any feedback you have here, on Twitter, or on Facebook.

As an unrelated side note, I often find myself looking back fondly on some of the projects I developed as a student. I would be lying if I said I didn't want a second attempt at improving them, the physically based renderer in particular, to bring it up to date with more modern techniques. This would also allow me to not lose touch with C++, as well as with general large-scale application architecture, so you might see some updates regarding that soon(ish).

"Real-time" Water Caustics

In this post I will try to explain in a bit more detail what my final project for CS562 (Advanced Rendering Techniques) consisted of, as well as point out some disadvantages of this approach compared to faking caustics with animated textures.

I'll get straight to the point here. You see the quotation marks around "Real-time" in the title? I added them because the definition of real time is fuzzy at best. I could easily claim I was going for a "cinematic experience" and that limiting the simulation to 30 fps was by design, but that would be a blatant lie. The fact is that, due to the way the algorithm works, the geometry used to generate the caustics (the refractive geometry) needs an absurdly high vertex count if decent results are desired. So if you are looking for an efficient implementation, I suggest you look elsewhere. If, on the other hand, you wish to improve the algorithm, or are just curious about how it was done, please read on.

I implemented this feature following this white paper (Caustics Mapping: An Image-space Technique for Real-time Caustics). While this is definitely not the most technically impressive method for caustics generation, the reason I didn't go with a more complex approach is that the time available for the implementation was shorter than I would have liked.

In broad strokes, the algorithm works as follows. Before doing anything else, we separate the scene into at least two groups: translucent objects that cause refraction and therefore influence caustics generation, and opaque objects that "receive" the generated caustics.

We start from the light that is causing the caustics (that is, we place a camera at the light, looking at the translucent geometry).

Here we need to store two sets of data to use later (a minimal sketch of this pass follows the list):

  • Position Texture for Receivers (if you are familiar with Deferred shading this should be trivial)
  • Position and Normal Texture for Refractive Geometry
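
To make this more concrete, here is a minimal GLSL sketch of what that first pass could look like, rendered from the light's point of view. For the receivers only the position target is needed; for the refractive geometry both targets are written. Every attribute, uniform, and output name here is an illustrative assumption of mine, not code from the actual project.

// --- vertex shader (shared by both groups); names are illustrative ---
#version 330 core
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;
uniform mat4 uModel;          // object-to-world transform
uniform mat4 uLightViewProj;  // view-projection of the camera placed at the light
out vec3 vWorldPosition;
out vec3 vWorldNormal;
void main()
{
    vec4 world     = uModel * vec4(aPosition, 1.0);
    vWorldPosition = world.xyz;
    vWorldNormal   = mat3(uModel) * aNormal;  // fine as long as uModel has no non-uniform scale
    gl_Position    = uLightViewProj * world;
}

// --- fragment shader ---
#version 330 core
in vec3 vWorldPosition;
in vec3 vWorldNormal;
layout(location = 0) out vec4 oPosition;  // world-space position target
layout(location = 1) out vec4 oNormal;    // world-space normal target (refractive pass only)
void main()
{
    oPosition = vec4(vWorldPosition, 1.0);
    oNormal   = vec4(normalize(vWorldNormal), 0.0);
}

Both targets should be floating-point textures (for example GL_RGBA16F), since they store world-space data rather than colours.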

Next comes the meaty part of the algorithm:

  • Compute the resulting refraction direction per-vertex for the refractive geometry (we have the light direction, i.e. the camera's view vector, and we stored the normals in the previous pass).
  • With this refraction direction and the vertex that generated it, we estimate the intersection with the receiver geometry (see the vertex shader sketch after the figure below).
[Figure: taken from the presentation linked at the bottom of this post]

In my experience, no refinement was needed past the first iteration. Once this intersection point is calculated, it becomes the output of the vertex shader (this is known as vertex splatting).
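
As a rough illustration of how those two steps and the splatting output could come together, here is a hedged GLSL vertex shader sketch. The texture and uniform names (uReceiverPositions, uInitialDistance, and so on) are assumptions of mine rather than the project's actual code, and the single-step distance estimate reflects the "one iteration was enough" observation above.

#version 330 core
layout(location = 0) in vec3 aPosition;  // refractive-geometry vertex (world space)
layout(location = 1) in vec3 aNormal;    // matching world-space normal

uniform sampler2D uReceiverPositions;  // receiver world positions stored in the first pass
uniform mat4  uLightViewProj;          // view-projection of the camera placed at the light
uniform vec3  uLightDir;               // normalized direction the light camera looks along
uniform float uEta;                    // ratio of refractive indices (e.g. air to water)
uniform float uInitialDistance;        // first guess for the distance to the receiver

out float vContribution;  // dot(normal, light direction), used in the per-ray weight
out float vDistance;      // estimated distance travelled through the fluid

// Project a world-space point into the light's [0,1] texture space.
vec2 toLightUV(vec3 worldPos)
{
    vec4 clip = uLightViewProj * vec4(worldPos, 1.0);
    return clip.xy / clip.w * 0.5 + 0.5;
}

void main()
{
    // 1. Refract the incoming light direction at this vertex.
    vec3 refr = refract(uLightDir, normalize(aNormal), uEta);

    // 2. Estimate the intersection with the receivers: walk a guessed distance along
    //    the refracted ray, look up the receiver stored under that texel, and use its
    //    distance from the vertex as the refined estimate.
    vec3 guess    = aPosition + refr * uInitialDistance;
    vec3 receiver = texture(uReceiverPositions, toLightUV(guess)).xyz;
    float dist    = length(receiver - aPosition);
    vec3 hit      = aPosition + refr * dist;

    // 3. Splat the vertex at the estimated hit point in the caustics map.
    gl_Position   = uLightViewProj * vec4(hit, 1.0);
    vContribution = max(dot(normalize(aNormal), -uLightDir), 0.0);
    vDistance     = dist;
}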

For the fragment shader, we want to accumulate the contribution of all rays that end up on a given fragment. Each ray contributes proportionally to how much of the refractive geometry is visible (an occlusion query; OpenGL has had this feature since version 1.5), and we need to take into account the absorption coefficient of the fluid the light is travelling through, as well as how far it actually travelled. As such, the actual contribution per ray is:

c_vshader = (1.0 / occlusion_result) * dot(refractive_normal, light_direction)

c_fshader = c_vshader / exp(absorption_coeff * distance_travelled)
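
Expressed as a GLSL fragment shader, with the caustics map rendered using additive blending so overlapping splats accumulate, that per-ray contribution could look roughly like the sketch below. The uniform names are my own placeholders, and the occlusion query result is assumed to be fed in as a uniform.

#version 330 core
in float vContribution;  // dot(refractive normal, light direction) from the vertex shader
in float vDistance;      // distance travelled through the fluid

uniform float uOcclusionResult;  // samples passed, as reported by the occlusion query
uniform float uAbsorptionCoeff;  // absorption coefficient of the fluid

out vec4 oCaustics;

void main()
{
    float c = (1.0 / uOcclusionResult) * vContribution;  // c_vshader
    c /= exp(uAbsorptionCoeff * vDistance);              // c_fshader
    oCaustics = vec4(vec3(c), 1.0);
}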

You can later combine this computation with a blur of the generated caustics texture, or experiment with splatting small circular textures instead of single points; the former certainly gave me better results.
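
If you go the blur route, a standard separable Gaussian over the caustics map is enough. The sketch below shows one direction of such a blur (run it once horizontally and once vertically); as before, the names are illustrative assumptions.

#version 330 core
in vec2 vUV;
uniform sampler2D uCausticsMap;
uniform vec2 uDirection;  // (1/width, 0) for the horizontal pass, (0, 1/height) for the vertical
out vec4 oColor;

void main()
{
    // 5-tap binomial weights (1 4 6 4 1) / 16, applied symmetrically around the centre.
    float weights[3] = float[](0.375, 0.25, 0.0625);
    vec4 sum = texture(uCausticsMap, vUV) * weights[0];
    for (int i = 1; i < 3; ++i)
    {
        sum += texture(uCausticsMap, vUV + uDirection * float(i)) * weights[i];
        sum += texture(uCausticsMap, vUV - uDirection * float(i)) * weights[i];
    }
    oColor = sum;
}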

Once you have the caustics texture, it is up to you how you modulate the output with it. What I went with was modulating the diffuse and specular contributions in the final output, and I am relatively happy with the result given the tradeoffs that were necessary.
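
For reference, one way that modulation could look in the final lighting pass is sketched below: the fragment is projected into the light's space (exactly as in shadow mapping) to sample the caustics map, and the result scales the diffuse and specular terms. The simple Blinn-Phong lighting and every name here are assumptions for illustration, not the project's actual shader.

#version 330 core
in vec3 vWorldPosition;
in vec3 vNormal;

uniform sampler2D uCausticsMap;
uniform mat4  uLightViewProj;     // same matrix used when building the caustics map
uniform vec3  uLightDir;          // normalized, pointing from the light into the scene
uniform vec3  uViewDir;           // normalized, pointing from the surface towards the eye
uniform vec3  uAlbedo;
uniform float uCausticsStrength;

out vec4 oColor;

void main()
{
    // Project the fragment into the caustics map, as in shadow mapping.
    vec4 clip = uLightViewProj * vec4(vWorldPosition, 1.0);
    vec2 uv   = clip.xy / clip.w * 0.5 + 0.5;
    float caustics = texture(uCausticsMap, uv).r;

    // Simple Blinn-Phong terms, scaled up by the caustics contribution.
    vec3  n     = normalize(vNormal);
    float ndotl = max(dot(n, -uLightDir), 0.0);
    vec3  h     = normalize(-uLightDir + uViewDir);
    float spec  = pow(max(dot(n, h), 0.0), 32.0);

    vec3 diffuse  = uAlbedo * ndotl;
    vec3 specular = vec3(spec);
    oColor = vec4((diffuse + specular) * (1.0 + uCausticsStrength * caustics), 1.0);
}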

Here is a link to the presentation I made in class, where some things may be explained better, along with some screenshots of how the caustics texture should look and how it looks when applied to geometry.