Sandy Snow Globe – Deferred Shading


For the Real-Time Graphics module of my MSc in Computer Science, we were tasked with developing a real-time graphics application representing a snow globe, but with a few added twists. Instead of a wintery landscape, the theme would be desert, with specific requirements including a day/night cycle, seasonal effects, shadow mapping and particle systems. Additional marks would be awarded for various advanced features, the highest valued being deferred shading. Having always wanted to try my hand at implementing it, I went about researching the topic.

I implemented the project using my own engine, which I have been developing during my MSc; it is written in C++ and utilises DirectX 11. The snow globe features deferred shading, particle systems, blending, PCF-filtered shadow mapping, normal bump mapping, height mapping and environment mapping. It has a simple day/night cycle via two orbiting directional lights (Sun and Moon) and alternating summer/winter seasons: summer nights bring fireflies, winter nights bring snow. Each firefly carries its own point light, and thanks to deferred shading, significant numbers of lights can be processed while maintaining good performance.

‘Deferred shading’, particularly for those who aren’t 3D programming experts, can be a rather tricky concept to grasp fully, so please find below my own attempt at describing what deferred shading is and why it’s a really cool technique.

Deferred Shading: Overview

‘Deferred Shading’ is a multi-pass rendering technique with the distinct advantage of deferring the scene lighting to a second pass, which, put simply, turns the calculation into a 2D problem rather than a 3D one. With standard forward rendering, lighting is calculated in the pixel shader for every interpolated fragment after processing in the vertex shader. This means every geometric object in your scene has to perform the lighting calculations, which in ‘Big O’ notation looks like O(lights * meshes). The wonderful thing about deferred shading is that by using just one extra pass we can reduce that to O(lights + meshes), or, to look at it another way in terms of fragments, from O(lights * geometryFragments) down to O(lights * screenFragments).
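To make the difference concrete, here is a rough C++ sketch of the two loop structures (the type and function names are illustrative stand-ins, not code from my engine):

```cpp
#include <vector>

// Minimal stand-in types purely for illustration.
struct Mesh {};
struct Light {};
struct GBuffer {};
struct Scene { std::vector<Mesh> meshes; std::vector<Light> lights; };

void shadeMesh(const Mesh&, const Light&) {}        // hypothetical helpers
void writeToGBuffer(const Mesh&, GBuffer&) {}
void shadeScreenPixels(const Light&, const GBuffer&) {}

// Forward rendering: every mesh is lit against every light.
// Cost: O(lights * meshes).
void renderForward(const Scene& scene)
{
    for (const Mesh& mesh : scene.meshes)
        for (const Light& light : scene.lights)
            shadeMesh(mesh, light);
}

// Deferred shading: the G-buffer decouples geometry from lighting.
// Cost: O(meshes) + O(lights) = O(lights + meshes).
void renderDeferred(const Scene& scene, GBuffer& gbuffer)
{
    for (const Mesh& mesh : scene.meshes)
        writeToGBuffer(mesh, gbuffer);     // geometry pass: no lighting yet

    for (const Light& light : scene.lights)
        shadeScreenPixels(light, gbuffer); // lighting pass: a 2D problem
}
```

The nested loop disappears because the lighting pass never touches the meshes again; it only reads the per-pixel data the geometry pass left behind.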

Deferred Shading – Sandy Snow Globe

This has massive implications for performance. With forward rendering, more than half a dozen or so light sources is enough to seriously impact performance, though modern games generally get away with this by limiting how many lights are visible at a time. Deferred shading, however, as demonstrated in the video above, can handle many times that number of lights simultaneously with little performance impact. For the coursework I demonstrated a scene with 100 point lights which, although it pushed the GPU a little, still ran comfortably at over 30 FPS.

There are multiple deferred rendering techniques: ‘deferred shading’ is a two-pass solution, unlike ‘deferred lighting’, which introduces a third pass. For systems with less GPU memory, such as old-gen console hardware, deferred lighting is often preferable since the extra pass allows the ‘G-buffer’ to be smaller. Deferred shading is the simpler and more elegant solution, but it does require a larger G-buffer and hence is better suited to beefier GPU hardware.


How it Works

Deferred shading, as described, uses two separate rendering passes. The first is the ‘geometry pass’, which works much like the single pass of forward rendering, except that instead of outputting to the back buffer, we output to a selection of render targets collectively referred to as the ‘G-buffer’. Each render target stores specific scene information so that, once fed into the second ‘lighting pass’, the correct lighting calculations can be performed. Exactly what information you store in the G-buffer is fairly flexible, although at a minimum you will require three buffers: colour data, normal data and, preferably, depth information. I say preferably because you could instead store the 3D world position, but that means storing superfluous information: from the depth alone we can reconstruct the 3D world position of each screen pixel later, at a much cheaper memory cost (1 vs 3 floats per pixel).
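The reconstruction itself is just an inverse projection. In my case it happens in the HLSL lighting shader, but the maths looks like this, shown here as a C++/DirectXMath sketch with parameter names of my own choosing:

```cpp
#include <DirectXMath.h>
using namespace DirectX;

// Recover the world-space position of a screen pixel from depth alone.
// 'u, v' are the pixel's texture coordinates in [0,1], 'depth' the value
// sampled from the depth target, and 'invViewProj' the inverse of the
// camera's combined view-projection matrix.
XMVECTOR ReconstructWorldPos(float u, float v, float depth, FXMMATRIX invViewProj)
{
    // Texture coords run top-down in [0,1]; NDC runs bottom-up in [-1,1].
    float ndcX = u * 2.0f - 1.0f;
    float ndcY = (1.0f - v) * 2.0f - 1.0f;

    // Undo the projection: transform by the inverse matrix, divide by w.
    XMVECTOR clip  = XMVectorSet(ndcX, ndcY, depth, 1.0f);
    XMVECTOR world = XMVector4Transform(clip, invViewProj);
    return XMVectorScale(world, 1.0f / XMVectorGetW(world));
}
```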

As a bonus, when it comes to the lighting pass you can further enhance performance by computing lighting only for the pixels actually affected by a particular light, by representing each light as a basic primitive based on its type: a full-screen quad for a directional light, a sphere for a point light and a cone for a spotlight.
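For point lights, that sphere needs a sensible radius: large enough to cover every pixel the light meaningfully touches, small enough to keep the saving. One common approach (a sketch under assumed names, not my exact code) is to solve the attenuation equation for the distance at which the light drops below a visible threshold:

```cpp
#include <cmath>

// Distance at which a point light's contribution falls below 'cutoff',
// assuming the common attenuation model
//     intensity / (constant + linear*d + quadratic*d^2).
// A cutoff of 5/256 is just under one step of an 8-bit colour channel.
float PointLightRadius(float intensity, float constant, float linear,
                       float quadratic, float cutoff = 5.0f / 256.0f)
{
    // Solve quadratic*d^2 + linear*d + (constant - intensity/cutoff) = 0.
    float c    = constant - intensity / cutoff;
    float disc = linear * linear - 4.0f * quadratic * c;
    if (quadratic <= 0.0f || disc < 0.0f)
        return 0.0f;                     // degenerate attenuation terms
    return (-linear + std::sqrt(disc)) / (2.0f * quadratic);
}
```

The returned radius then scales the sphere mesh drawn for that light in the lighting pass.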


What’s the Catch?

Blending:

This brings us to the added complexity of deferred rendering. Because the G-buffer only keeps the nearest fragment at each pixel, we effectively flatten the scene into 2D and lose the depth ordering it contained, meaning when it comes to blending operations such as those used in transparency, there is no longer enough information to composite surfaces in the right order. There are, however, a few solutions, including manually depth-sorting your geometry and rendering it in ‘painter’s algorithm’ fashion, or, even simpler, rendering your transparent objects in a separate forward rendering pass and blending them over the lit scene, which is how I achieved the transparent snow globe.
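That forward transparency pass just needs a standard alpha-blend state bound before the transparent geometry is drawn over the resolved scene. In D3D11 the textbook setup looks something like this (error handling trimmed; this is the standard recipe rather than my exact engine code):

```cpp
#include <d3d11.h>

// Classic src-alpha / inv-src-alpha blending for the transparent pass.
ID3D11BlendState* CreateAlphaBlendState(ID3D11Device* device)
{
    D3D11_BLEND_DESC desc = {};
    desc.RenderTarget[0].BlendEnable           = TRUE;
    desc.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
    desc.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
    desc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
    desc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ZERO;
    desc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    ID3D11BlendState* state = nullptr;
    device->CreateBlendState(&desc, &state);
    return state;
}
// Bind with: context->OMSetBlendState(state, nullptr, 0xffffffff);
// then draw transparent geometry after the deferred lighting is resolved.
```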

Materials:

Because every object is encoded into the G-buffer, anything about the scene that isn’t stored in there is simply invisible to the lighting pass. This presents a problem for geometric material properties, because normally these would be passed into the shaders on a per-object basis via constant buffers (in DirectX), but since our lighting pass runs per light rather than per object, we have no direct way of associating the required material with the surface at each pixel. One simple solution is to write a material ID into a spare channel of one of the existing buffers, such as the colour buffer, and then define a material array inside the lighting shader, using the ID as an index.
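Packing and unpacking the ID is trivial if you borrow the colour target’s alpha channel. A minimal sketch (helper names are mine) assuming an 8-bit-per-channel target:

```cpp
#include <cstdint>

// Geometry pass: store the material ID as a normalised alpha value.
float EncodeMaterialId(uint8_t id)
{
    return static_cast<float>(id) / 255.0f;   // up to 256 materials
}

// Lighting pass: round back to the nearest integer ID.
uint8_t DecodeMaterialId(float alpha)
{
    return static_cast<uint8_t>(alpha * 255.0f + 0.5f);
}

// In the lighting shader the decoded value then indexes a material array
// held in a constant buffer, conceptually:
//     Material mat = g_Materials[DecodeMaterialId(gbufferColour.a)];
```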

Overall, I’d implement deferred shading for any future project where time is not a concern; it does slightly complicate things, but the benefits more than make up for it. If your game or 3D program doesn’t need more than a few lights then it’s not strictly necessary, although many modern games already use deferred rendering techniques to enhance their scene lighting. I’d also say that if you can successfully implement deferred shading and understand the technique, then you have gotten to grips with one of the more advanced multi-pass rendering techniques, and that brings with it an enhanced understanding of the graphics pipeline.

 

The Column: 3D Graphics Simulation


As the single fully weighted piece of work for the 3D Graphics module in the second year of my Computer Science degree at Hull University, I had to create an OpenGL graphics simulation. Despite having little prior experience with 3D graphics frameworks, I am very pleased with the outcome and look forward to spending a lot more time with both the OpenGL and DirectX APIs; in particular, my final-year project looks set to be a ray-tracing renderer (potentially CUDA-based), which should give me additional exposure to what is becoming a more and more promising technology for gaming.

I wrote a report to accompany the finished program, parts of which I’ll include below to explain the project and how the simulation works.

The Column

The Column is a 3D graphics simulation designed around a series of stacked boxes containing cylinders. Balls are emitted at the top of the stack and interact with both the geometry and each other by way of collisions and responses. In addition, the simulation features a “Sphere of Doom”, a large sphere near the bottom of the stack that absorbs balls, shrinking their size and mass. A portal at the bottom of the stack transports any balls that enter it back to the top of the column. The entire simulation is made using OpenTK (OpenGL) in C#, and all geometry and physics are constructed mathematically.

The specification required that one emitter emit balls with the approximate density of aluminium, the second copper and the third gold.

The program simulates a dynamic system through various means. The balls use an Euler integration method: a gravitational constant, combined with the calculated velocity, mass and density of each ball, simulates the motion of the balls falling down the column.
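The integration step itself is only a few lines. The project is written in C#, but the maths is identical in any language; here is a C++ sketch with illustrative names (the densities in the comment are the well-known approximate values):

```cpp
struct Vec3 { float x, y, z; };

struct Ball
{
    Vec3  position;
    Vec3  velocity;
    float radius;
    float density;  // approx. kg/m^3: aluminium ~2700, copper ~8960, gold ~19300

    // Mass follows from density and volume: m = rho * (4/3) * pi * r^3.
    float Mass() const
    {
        return density * (4.0f / 3.0f) * 3.14159265f
             * radius * radius * radius;
    }
};

// One explicit Euler step: fold gravity into velocity, velocity into position.
void EulerStep(Ball& b, float dt)
{
    const float g = -9.81f;            // gravitational acceleration, m/s^2
    b.velocity.y += g * dt;            // v += a * dt
    b.position.x += b.velocity.x * dt; // p += v * dt
    b.position.y += b.velocity.y * dt;
    b.position.z += b.velocity.z * dt;
}
```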

Ball-to-ball collision response is handled via “elastic collisions” based on the masses of the balls and their velocity components along the collision normal, so a heavier ball will knock a lighter ball out of the way. Additionally, the angle of impact affects the amount of force transferred.
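Concretely, each velocity is decomposed against the collision normal (the line between the two centres), and only the normal components exchange momentum via the standard one-dimensional elastic formulae. A self-contained sketch of that idea:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3  Sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Elastic response for two balls with masses m1, m2 at positions p1, p2.
// Only the components along the collision normal change, so glancing
// impacts transfer less force than head-on ones.
void ResolveElastic(Vec3& v1, Vec3& v2, float m1, float m2, Vec3 p1, Vec3 p2)
{
    Vec3  d   = Sub(p2, p1);
    float len = std::sqrt(Dot(d, d));
    if (len <= 0.0f) return;
    Vec3 n = { d.x / len, d.y / len, d.z / len };

    float u1 = Dot(v1, n);   // pre-impact speeds along the normal
    float u2 = Dot(v2, n);

    // 1D elastic collision: momentum and kinetic energy both conserved.
    float w1 = (u1 * (m1 - m2) + 2.0f * m2 * u2) / (m1 + m2);
    float w2 = (u2 * (m2 - m1) + 2.0f * m1 * u1) / (m1 + m2);

    // Replace only the normal components of each velocity.
    v1.x += (w1 - u1) * n.x; v1.y += (w1 - u1) * n.y; v1.z += (w1 - u1) * n.z;
    v2.x += (w2 - u2) * n.x; v2.y += (w2 - u2) * n.y; v2.z += (w2 - u2) * n.z;
}
```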

Rendering is performed via OpenGL 3.1 using Vertex Buffer Objects. All primitive 3D models have been constructed manually or mathematically. I use GLSL vertex and fragment shaders for “Phong shading”: ambient, diffuse and specular lighting calculations that provide interpolated lighting of the geometry between vertices. My scene uses 3 point light sources and has built-in support for both directional and spot lights if desired.
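The Phong model itself boils down to three terms per light. The project evaluates this in GLSL, but the same maths in plain C++ reads as follows (vectors assumed normalised; names are illustrative):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// N: surface normal, L: direction to the light, V: direction to the viewer.
float PhongIntensity(Vec3 N, Vec3 L, Vec3 V, float ambient,
                     float kDiffuse, float kSpecular, float shininess)
{
    float nDotL = Dot(N, L);
    float diff  = std::max(nDotL, 0.0f);            // Lambertian diffuse

    // Reflect L about N: R = 2(N.L)N - L, then compare with the view vector.
    Vec3 R = { 2.0f*nDotL*N.x - L.x, 2.0f*nDotL*N.y - L.y, 2.0f*nDotL*N.z - L.z };
    float spec = nDotL > 0.0f
               ? std::pow(std::max(Dot(R, V), 0.0f), shininess)
               : 0.0f;                              // no specular on unlit side

    return ambient + kDiffuse * diff + kSpecular * spec;
}
```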

I have implemented a particle system object that emits particles of a given shape. For performance I used simple quad planes, rotating them for added effect in combination with the lighting. The particles are highly customisable in lifetime, movement, scale and quantity, and can be added for any desired event; I use them specifically for collisions with the Sphere of Doom and for balls spawning from the emitters.
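Structurally a particle system like this is very simple; the shape below (field names are illustrative, not the project’s actual C# code) captures the customisable properties mentioned above:

```cpp
#include <vector>

// One CPU-side particle; dead particles (life <= 0) sit idle until the
// emitter recycles them for the next event.
struct Particle
{
    float position[3];
    float velocity[3];
    float scale;
    float rotation;   // the quads are spun each frame for added effect
    float life;       // seconds remaining
};

void UpdateParticles(std::vector<Particle>& particles, float dt)
{
    for (Particle& p : particles)
    {
        if (p.life <= 0.0f) continue;
        p.life -= dt;
        for (int i = 0; i < 3; ++i)
            p.position[i] += p.velocity[i] * dt;
        p.rotation += 2.0f * dt;    // arbitrary spin rate
    }
}
```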

My portals use a Frame Buffer Object, which renders the scene from the desired camera position to a texture. I then switch back to the display framebuffer and render the entrance and exit portals using the respective textures, giving the effect of seeing through each portal to its destination, which in turn is updated in real time.
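Under OpenTK this is a thin wrapper over the usual GL calls. The setup, sketched here against the raw C API (depth attachment and error checks trimmed for brevity):

```cpp
#include <GL/glew.h>   // any GL 3.x function loader will do

// Create a framebuffer that renders the scene into 'colourTex', which is
// then mapped onto the portal quad when drawing the main view.
GLuint CreatePortalTarget(GLuint colourTex, int width, int height)
{
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glBindTexture(GL_TEXTURE_2D, colourTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colourTex, 0);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the display buffer
    return fbo;
}
// Per frame: bind fbo, draw the scene from the portal's exit camera,
// rebind the default framebuffer, then draw the portal quad with colourTex.
```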

Bottom-Up

I have spent considerable time optimising the simulation to maximise the overall frame rate. Much of this was achieved by streamlining the shader structure to avoid dynamic branching, specifically by avoiding “IF” statements in favour of step functions, and by moving as many calculations as possible to the vertex shader. The fragment lighting calculations are easily the most intensive part of the simulation, and reducing my lights to a maximum of 3 per fragment has also helped greatly.
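The step-function trick deserves a quick illustration. GLSL’s step(edge, x) returns 0.0 when x < edge and 1.0 otherwise, so a conditional contribution becomes a multiply instead of a branch. In C++ terms (a toy example, not the project’s shader code):

```cpp
// GLSL's step(), reproduced for illustration.
static float step_(float edge, float x) { return x < edge ? 0.0f : 1.0f; }

float SpecularGated(float nDotL, float rawSpecular)
{
    // Branchy version:
    //     if (nDotL > 0.0f) return rawSpecular; else return 0.0f;
    // Branchless version, as used in the fragment shaders:
    return step_(0.0f, nDotL) * rawSpecular;
}
```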

With a simulation such as this, there is always something that could be improved, tweaked, optimised or added. Suffice to say, however, that I am very satisfied with the quality of the finished product, which has more than surpassed my initial expectations, and I feel I have learned very useful and contemporary skills that will be essential in the future. Perhaps most importantly, I have thoroughly enjoyed the assignment.

I’ll get a video of it in motion uploaded at some point. I’m currently looking at improving my portals a little, potentially by using an asymmetric frustum.

Sphere of Doom