Thursday, September 18, 2014

Superdimensional rendering - overlapping cameras in Unity



I got a couple of questions, from co-workers and on forums, asking how the rendering in my LD30 entry "Superdimensional" was done. The game was made in the Unity engine, and the actual dimension beams should work with Unity Free. This is a short description of how the rendering was set up. I'm assuming you know a bit of vector math and how to generate custom meshes.


Portal beam geometry construction

 

Portal geometry is constructed only from objects that are marked as beam blockers. When the game starts, all these blockers and their geometry information are stored in a list. After that the light mesh loops through the list and builds new geometry using the method below.
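
As a rough sketch of that startup step (the "BeamBlocker" marker component and the registry are my own naming for illustration, not necessarily what the game used):

    using System.Collections.Generic;
    using UnityEngine;

    // Hypothetical marker component: attach to every object that should block beams.
    public class BeamBlocker : MonoBehaviour { }

    public class BlockerRegistry : MonoBehaviour
    {
        public static readonly List<BeamBlocker> Blockers = new List<BeamBlocker>();

        void Start()
        {
            // Cache all blockers once at startup so the beam mesh can loop
            // through them every frame without searching the scene again.
            Blockers.Clear();
            Blockers.AddRange(Object.FindObjectsOfType<BeamBlocker>());
        }
    }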


Each frame, every edge of every blocker mesh is looped over. For each edge we extrude a new edge away from the mouse position. The geometry has to extend far enough to cover the screen; in theory it could extrude all the way to infinity, or close to it. I had some issues with the code during the jam when I set it to infinity, so I just set it to "far enough" instead. For each edge you end up creating 4 more vertices and 2 new triangles.
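
A minimal sketch of that per-edge extrusion, assuming the blocker edges are available as pairs of world-space points ("Edge", "lightPos" and "beamDistance" are names I made up for illustration; flip the triangle winding if the quads face away from your camera):

    using System.Collections.Generic;
    using UnityEngine;

    public struct Edge
    {
        public Vector2 a, b;
    }

    public static class BeamMeshBuilder
    {
        // beamDistance stands in for "far enough to cover the screen".
        public static Mesh Build(List<Edge> edges, Vector2 lightPos, float beamDistance)
        {
            var verts = new List<Vector3>();
            var tris = new List<int>();

            foreach (Edge e in edges)
            {
                // Push both endpoints directly away from the light (mouse) position.
                Vector2 aFar = e.a + (e.a - lightPos).normalized * beamDistance;
                Vector2 bFar = e.b + (e.b - lightPos).normalized * beamDistance;

                int i = verts.Count;
                verts.Add(e.a);   // 4 new vertices...
                verts.Add(e.b);
                verts.Add(aFar);
                verts.Add(bFar);

                tris.Add(i);     tris.Add(i + 2); tris.Add(i + 1); // ...and 2 new
                tris.Add(i + 1); tris.Add(i + 2); tris.Add(i + 3); // triangles
            }

            var mesh = new Mesh();
            mesh.vertices = verts.ToArray();
            mesh.triangles = tris.ToArray();
            mesh.RecalculateBounds();
            return mesh;
        }
    }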
When you do that to all blocker objects' edges, the result looks something like this:


Beam rendering & camera setup

 

The game scene had a total of 4 cameras. There was one camera for each game world: one for snow, another for water and one more for lava. These world cameras were set to render only the objects in their own worlds. So the winter world's objects were put on their own layers, and its camera was set to render only those layers. All the cameras also had their clear flags set to clear only the depth buffer (Clear flags: "Depth only").
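
A minimal sketch of one world camera's settings done in code, assuming a layer named "SnowWorld" exists (normally you would just set these in the Inspector):

    using UnityEngine;

    public class WorldCameraSetup : MonoBehaviour
    {
        void Start()
        {
            Camera cam = GetComponent<Camera>();
            cam.cullingMask = 1 << LayerMask.NameToLayer("SnowWorld"); // render only snow-layer objects
            cam.clearFlags = CameraClearFlags.Depth;                   // Clear flags: "Depth only"
        }
    }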

The cameras were also set to render in a particular order. In my case the water world was rendered first, which was done by setting its camera's "depth" parameter to -0.5. Next, the lava camera rendered with a depth of -0.4, and the winter world's camera used a depth of -0.3. These values are completely arbitrary; the idea is just to go from the lowest depth to the highest. In addition to these there was a final camera which just took the results from the other cameras and applied some post-processing to their pixels. Its depth value was set to 0 so it rendered last.
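
In code, that ordering looks roughly like this (the camera fields are hypothetical; Unity renders cameras from the lowest depth to the highest):

    using UnityEngine;

    public class CameraOrderSetup : MonoBehaviour
    {
        public Camera waterCamera, lavaCamera, winterCamera, finalCamera;

        void Start()
        {
            waterCamera.depth  = -0.5f; // water world, rendered first
            lavaCamera.depth   = -0.4f; // lava world, rendered second
            winterCamera.depth = -0.3f; // winter world, rendered third
            finalCamera.depth  =  0.0f; // post-process camera, rendered last
        }
    }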

This is what the camera setup looks like when you look at the cameras separately:


Now the cameras render in the correct order, but we need a way to mask each camera's visibility. This was done using the shader from here: http://wiki.unity3d.com/index.php?title=DepthMask That shader renders no new pixels on screen (it renders "nothing") but fills the depth buffer. This shader was used on the light beam meshes. I had 2 beam meshes: one for the lava world (rendered second) and one for the winter world (rendered third). The water world didn't need geometry because it was rendered first and should always fill the whole screen. In the final step the main camera just rendered some post-processing effects and output the result to the player's screen.
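
For reference, the linked DepthMask shader is essentially the following ShaderLab (reproduced from memory, so double-check the wiki page; the render queue may need adjusting depending on what the mask should hide):

    Shader "Masked/DepthMask" {
        SubShader {
            // Render after regular opaque geometry but before sprites and other
            // transparent things, which it then occludes via the depth buffer.
            Tags { "Queue" = "Geometry+10" }
            ColorMask 0 // write no color channels ("renders nothing")...
            ZWrite On   // ...but do fill the depth buffer
            Pass {}
        }
    }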


And that was basically it. It sounds and looks more complicated than it really is. This technique could easily be used for 2D game shadows or as a line-of-sight mesh: just use one camera, set the beam material to black, and you're done.

Hopefully someone finds this useful. 
