The MSc Departmental Prize


Several weeks ago I was very pleasantly surprised by a letter from the British Computer Society stating I had won the ‘Departmental Masters’ Prize in Computer Science’ for my MSc degree!

Along with the nice certificate, I also received a cheque for £150 and a year’s membership to the BCS, making it most certainly the best bit of mail I’ve received through the letterbox in a good while.

BCS certificate

From speaking with the department, I believe I was awarded this for attaining the highest average score, so it’s nice to know that my effort, along with the distinction, was doubly worthwhile. I’m already looking back fondly at my time at Hull uni, and choosing to go on and do the MSc was definitely worth it.

On other fronts, I recently posted a video of my MSc dissertation project on my YouTube profile for anyone interested in the subject of procedural content generation. Making the video was a bit of a pain, and anyone who has recorded footage showcasing academic work before will know where I’m coming from, in the sense of making it both informative AND interesting. In the end I edited some raw footage of me exploring a generated world and showcased the various features as best I could.

When I get the chance, I’ll be adding it to this site to complete my uni portfolio and uploading my dissertation report (basically because a) it took ages to write! and b) someone might find it useful and maybe even interesting!).

 

 

Update and MSc Results

It’s been 8 months since my last post and since then I have become a dad, completed my masters, relocated and become a programmer in the games industry! So, there’s been much to talk about but little time to do it. It’s been a crazy year.

Work is keeping me extremely busy, as is family life, so with what little time I do get I tend to try and keep my hand in with gaming. Having said this, I have a large backlog of games to play through, including Fallout 4 which I’ve yet to even install. Due to all the above, this blog’s been a little abandoned, though it’s served a very useful purpose in helping to display my portfolio and get me a job, something I’d strongly recommend any aspiring game developer do. I’ll endeavour to post more now I have my weekends back, and hopefully useful things and not just…stuff? Hopefully I’ll be getting back into some hobby programming projects I’m wanting to do, such as some WebGL ray tracing for this site, and I’m sure I can put some good tutorials together that will benefit all.

So, university then. It’s over. Done. 4 years of very hard toil, and the question is: was it worth it? A resounding YES is the answer, of course. I’m lucky to now be in the position I was hoping for 4 years ago, building invaluable experience working in industry.

The University of Hull has provided an excellent place of learning over the course of my BSc and MSc and, importantly, opened the doors needed to get me into what is a highly competitive industry. I’d like to thank all of the lecturers, supervisors and staff that I’ve worked with over the years who made it a very positive experience. A good university is of course important in determining how much you take away from your time studying, but I will say that THE most important thing is your determination and self-motivation. You can coast through a CompSci degree and take very little from it. Hopefully my grades demonstrate that I put my all into it, and at times, particularly in the MSc, the workload was intense. Intense like driving home at dawn from the lab having done 16 hours of Red Bull and vending-machine fuelled programming, knowing you need to get back to the lab in a few hours to do it all over again.

MSc Results:

Here are the results as per the University’s module results site:

With an overall average of 89.9%, I should be comfortably in the distinction category for my masters, which I am thrilled about.

The big module for the MSc is the dissertation project, and having done a pure graphics project for my BSc in CUDA ray tracing, I decided to suggest my own topic this time around: procedural content generation in RPGs, a subject I have long been fascinated with. The scope of the project was massive, including the design and implementation of my own 3D DirectX 11 engine and the creation of an explorable, procedurally generated world with procedurally generated dungeons. Needless to say, the process nearly killed me, and the report writing was also tough going since I was moving house with a newborn and working full-time! Considering all this, I achieved a great deal of what I had set out to do, as well as surprising my two supervisors quite considerably when they saw just how much I managed to get done!

I’ll be aiming to make a separate post regarding the dissertation project soon, as well as putting together some sort of video of it to complete my degree portfolio.

The project I’m currently working on at work is really exciting and I wish I could talk about it, but unfortunately I can’t…yet. I can say that since starting work I’ve done some business-orientated Objective-C, worked with Unity and, on my current project, I’m working on a very large mixed code-base of mainly C with bits of C++. Let’s just say I’m glad I took note of all that hex, bit masking and bit-wise operation work you can easily not pay attention to at uni, since it’s very much absent from more modern managed languages and coding styles.

 

 

 

 

Sandy Snow Globe – Deferred Shading


 

For the Real-Time Graphics module as part of my MSc in Computer Science we were tasked with developing a real-time graphics application representing a snow globe but with a few added twists. Instead of a wintery landscape, the theme would be desert with specific requirements including a day/night cycle, seasonal effects, shadow mapping and particle systems. Additional marks would be awarded for various advanced features, the highest being deferred shading. Having always wanted to try my hand at implementing it I went about researching the topic.

I implemented the project using my own engine, which I have been developing during my MSc, written in C++ and utilising DirectX 11. The snow globe features deferred shading, particle systems, blending, PCF-filtered shadow mapping, normal bump mapping, height mapping and environment mapping. It has a simple day/night cycle via two orbiting directional lights (Sun and Moon) and alternating summer/winter seasons. Summer nights = fireflies, winter nights = snow. Each firefly has a point light and, using deferred shading, significant numbers of lights can be processed while maintaining good performance.

‘Deferred shading’, particularly for those who aren’t 3D programming experts, can be a rather tricky concept to grasp fully, so please find below my own attempt at describing what deferred shading is and why it’s a really cool technique.

Deferred Shading: Overview

‘Deferred shading’ is a multi-pass rendering technique with the distinct advantage of deferring the scene lighting to a second pass, meaning, put simply, the calculation becomes one of a 2D domain rather than 3D. Usually with standard forward rendering, lighting is calculated in the pixel shader for every interpolated fragment after processing in the vertex shader. This means that every geometric object in your scene has to perform the lighting calculations, which in ‘Big O’ notation looks like O(lights * meshes). The wonderful thing about deferred shading is that by using just one extra pass we can reduce that to O(lights + meshes), or, to look at it another way in terms of fragments, we can reduce it from O(lights * geometryFragments) down to O(lights * screenFragments).
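To make the cost difference concrete, here’s a purely illustrative C++ sketch (not code from my engine) that just counts the work each approach performs; the shading itself is stubbed out, the point is the loop structure:

#include <cstdio>
#include <vector>

struct Mesh  { int id; };
struct Light { int id; };

int main()
{
    std::vector<Mesh>  meshes(50);
    std::vector<Light> lights(100);

    // Forward rendering: every mesh is lit by every light -> O(lights * meshes).
    long forwardEvaluations = 0;
    for (size_t m = 0; m < meshes.size(); ++m)
        for (size_t l = 0; l < lights.size(); ++l)
            ++forwardEvaluations;                  // lighting maths runs here, per mesh fragment

    // Deferred shading: geometry pass once per mesh, lighting pass once per light -> O(meshes + lights).
    long deferredEvaluations = 0;
    for (size_t m = 0; m < meshes.size(); ++m)
        ++deferredEvaluations;                     // write colour/normal/depth into the G-buffer
    for (size_t l = 0; l < lights.size(); ++l)
        ++deferredEvaluations;                     // light the screen pixels using the G-buffer

    std::printf("forward: %ld evaluations, deferred: %ld passes of work\n",
                forwardEvaluations, deferredEvaluations);
    return 0;
}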


Deferred Shading – Sandy Snow Globe

 

This has massive implications for performance. With forward rendering, more than a half-dozen or so light sources is enough to seriously impact performance, though modern games generally get away with this number by limiting how many are visible at a time. Deferred shading, however, as demonstrated in the above video, can handle many times that amount of lights simultaneously with little performance impact. For the coursework I demonstrated a scene with 100 point lights which, although it pushed the GPU a little, still ran comfortably at over 30 FPS.

There are multiple deferred rendering techniques, with ‘deferred shading’ being a two-pass solution unlike ‘deferred lighting’, which introduces a third pass. Basically, for systems with lower GPU memory such as old-gen console hardware, deferred lighting is preferable since the extra pass allows the ‘G-buffer’ to be smaller. Deferred shading is a simpler and more elegant solution but does require a larger G-buffer, and hence is better suited to ‘beefier’ GPU hardware.

DesertGlobe2

How Does it Work?

Deferred shading works, as described, using two separate rendering passes. The first pass is called the ‘geometry pass’ and works similarly to a normal pixel shader in forward rendering, except instead of outputting to the back buffer, we output to a selection of render targets, collectively referred to as the ‘G-buffer’. Each render target stores specific scene information so that, once fed into the second ‘lighting pass’, the correct lighting calculations can be performed. Exactly what information you store in the G-buffer is fairly flexible, although at a minimum you will require three buffers: colour data, normal data and preferably depth information. I say preferably because you could instead choose to store the 3D world position, but that stores superfluous information; using just the depth, we can reconstruct the 3D world position for each screen pixel later, at a much cheaper memory cost (1 vs 3 floats per pixel).
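As a rough illustration (written as C++ rather than HLSL, and not my engine’s actual layout), a minimal G-buffer texel and the depth-to-world-position reconstruction might look like this, assuming a D3D-style depth range of [0,1] and an inverse view-projection matrix supplied by the application:

// One texel's worth of G-buffer data: colour, normal and depth only.
struct GBufferTexel
{
    float colour[4];   // albedo RGB, alpha spare (handy for a material ID later)
    float normal[3];   // surface normal
    float depth;       // depth buffer value; world position is rebuilt from this
};

// Rebuild the world-space position of a pixel from its depth value.
// u,v are the pixel centre in [0,1]; invViewProj is the camera's inverse
// view-projection matrix stored row-major. Storing 1 float (depth) instead
// of 3 (position) is exactly the memory saving described above.
void reconstructWorldPosition(float u, float v, float depth,
                              const float invViewProj[16], float outPos[3])
{
    // Back to clip space (D3D convention: y flipped, z already in [0,1]).
    float clip[4]  = { u * 2.0f - 1.0f, 1.0f - v * 2.0f, depth, 1.0f };
    float world[4] = { 0.0f, 0.0f, 0.0f, 0.0f };

    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            world[row] += invViewProj[row * 4 + col] * clip[col];

    for (int i = 0; i < 3; ++i)
        outPos[i] = world[i] / world[3];   // perspective divide back to world space
}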

As a bonus, when it comes to the lighting pass, you can further enhance performance by computing lighting on only the pixels that are affected by a particular light, by representing the light as a basic primitive based on its type: a full-screen quad for a directional light, a sphere for a point light and a cone for a spotlight.

DesertGlobe3

What’s the Catch?

Blending:

This brings us to the added complexity of deferred rendering. Because we effectively flatten the scene into 2D inside our buffers, keeping only the nearest surface at each pixel, when it comes to blending operations such as those used in transparency it’s hard to know in which order the scene should be arranged. There are however a few solutions to this, including manually depth-sorting your geometry and rendering in a ‘painter’s algorithm’ fashion, or, even simpler, rendering your transparent objects in a separate forward rendering pass and blending, which is how I achieved the transparent snow globe.

Materials:

Because every object is encoded inside our G-buffer, any info about the scene that isn’t in there is simply unknown to the lighting pass. This presents a problem for geometric material properties, because normally these would be passed into the shaders on a ‘per object’ basis via constant buffers (DirectX), but because our lighting pass runs ‘per light’ and not ‘per object’, we have no way of assigning the required material to the objects. One simple solution is to use a material ID value, throw this into one of the existing buffers such as the colour buffer, and then define a material array inside the lighting shader, utilising the ID as an index.
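A minimal sketch of the idea (illustrative C++ with made-up material values; in practice the table lives in the HLSL lighting shader as a constant array):

struct Material
{
    float diffuse[3];
    float specular[3];
    float shininess;
};

// Small fixed material table, mirrored inside the lighting shader.
static const Material kMaterials[3] =
{
    { { 0.9f, 0.8f, 0.5f }, { 0.2f, 0.2f, 0.2f },  8.0f },   // sand
    { { 0.4f, 0.7f, 1.0f }, { 1.0f, 1.0f, 1.0f }, 64.0f },   // glass dome
    { { 1.0f, 1.0f, 0.4f }, { 1.0f, 1.0f, 1.0f }, 32.0f },   // firefly
};

// Geometry pass: tuck the material ID into the spare alpha channel of the colour target.
inline float encodeMaterialId(int id)
{
    return id / 255.0f;
}

// Lighting pass: recover the ID from the sampled alpha and index the material table.
inline const Material& decodeMaterial(float alpha)
{
    return kMaterials[static_cast<int>(alpha * 255.0f + 0.5f)];
}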

Overall, I’d implement deferred shading for any future project where time is not a concern, as it does slightly complicate things but the benefits more than make up for this. If your game or 3D program doesn’t need more than a few lights then it’s not something that is strictly necessary; however, many modern games are already using deferred rendering techniques to enhance scene lighting. I’d also say that if you can successfully implement deferred shading and understand the technique, then you have gotten to grips with one of the more advanced multi-pass rendering techniques, and this brings with it an enhanced understanding of the graphics pipeline.

 

Cross-Platform Game Engine


In the first semester of my MSc Computer Science degree, as part of the Games Development Architectures module, we were tasked with designing and implementing a cross-platform game engine. A game would also be made using the engine.

The chosen platforms were a Windows PC and a Windows Phone 8 device. I decided that, considering Microsoft had developed a Universal Application framework for targeting both of these, I would utilise it. This was good in that it simplified the cross-platform compatibility, but it introduced a few limitations (namely having to work with the Windows RT platform and the resultant consequences for dealing with inputs via ‘ref classes’ etc.). Coming from experience with Win32 desktop programs, Windows RT feels very different to program for and much less flexible, but then again Win32 really does need some modernisation.


Project Details:

  • Engine coded in C++ (Visual Studio).
  • DirectX11 rendering engine component coded from scratch.
  • HLSL shaders.
  • The Universal App framework used to contain the code solution and deploy to both platforms.

We were given a design specification for a simple game called ‘Tunnel Terror’. It involved the player having to control a vehicle/object through a tunnel, avoiding various obstacles. The speed would gradually increase the longer the player survived, and any collision with an obstacle would result in death. Score was determined by length of survival. I decided to add various extras, including power-ups such as coins and a randomised speed-up/slow-down. The game needed to play on both a PC and a Windows Phone 8 device, allowing for the differing input controls of each. I decided the PC would utilise the keyboard, whereas the phone would rely on the accelerometer (tilt) sensor to manoeuvre the player through the tunnel. The PC also required a 2-player mode. Main menu, high score table and game over screens would be needed, as well as multiple camera modes such as first-person, third-person and death fly-by cameras.
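To handle the differing inputs cleanly, the game code only ever talks to an abstract input interface, with a keyboard implementation on PC and an accelerometer implementation on the phone. The sketch below is illustrative (hypothetical class names, not the engine’s actual ones):

#include <algorithm>

// Normalised steering on each axis, in the range -1..1.
struct SteeringInput
{
    float horizontal;
    float vertical;
};

class IInputDevice
{
public:
    virtual ~IInputDevice() = default;
    virtual SteeringInput Poll() = 0;
};

// PC build: arrow keys give full deflection on each axis.
class KeyboardInput : public IInputDevice
{
public:
    bool left = false, right = false, up = false, down = false;   // updated from key events

    SteeringInput Poll() override
    {
        return { (right ? 1.0f : 0.0f) - (left ? 1.0f : 0.0f),
                 (up    ? 1.0f : 0.0f) - (down ? 1.0f : 0.0f) };
    }
};

// Phone build: accelerometer tilt, scaled and clamped so a gentle tilt gives fine control.
class AccelerometerInput : public IInputDevice
{
public:
    float tiltX = 0.0f, tiltY = 0.0f;   // fed from the sensor each frame

    SteeringInput Poll() override
    {
        return { std::clamp(tiltX * 2.0f, -1.0f, 1.0f),
                 std::clamp(tiltY * 2.0f, -1.0f, 1.0f) };
    }
};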


Although marks were given for the game implementation and extra features, much of the module was graded on the engine design, implementation and accompanying report. My report justified the design based on four principles of games architecture, namely ‘Simplicity’, ‘Reusability’, ‘Abstractness’ and ‘Modularity’. Below is an example of the UML design used for my engine’s platform-independent rendering component, with examples of how behaviour could be derived for both DirectX and OpenGL.

Renderer
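In spirit (this is an illustrative cut-down version, not the actual UML), the platform-independent renderer boils down to an abstract interface that the rest of the engine and game code talk to, with the API-specific behaviour derived underneath:

class Mesh;   // platform-independent mesh data

// The only rendering interface the rest of the engine ever sees.
class IRenderer
{
public:
    virtual ~IRenderer() = default;
    virtual bool Initialise(int width, int height) = 0;
    virtual void BeginFrame(float r, float g, float b) = 0;   // clear to a colour
    virtual void DrawMesh(const Mesh& mesh) = 0;
    virtual void EndFrame() = 0;                               // present/swap buffers
};

// Derived behaviour: one implementation per graphics API.
class D3D11Renderer : public IRenderer
{
public:
    bool Initialise(int width, int height) override;   // create device, context, swap chain
    void BeginFrame(float r, float g, float b) override;
    void DrawMesh(const Mesh& mesh) override;           // bind buffers/shaders, issue draw call
    void EndFrame() override;                            // IDXGISwapChain::Present
};

class OpenGLRenderer : public IRenderer
{
public:
    bool Initialise(int width, int height) override;    // create a GL context
    void BeginFrame(float r, float g, float b) override;
    void DrawMesh(const Mesh& mesh) override;
    void EndFrame() override;                             // swap buffers
};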

In the report we also had to research how we would have implemented the game on next-generation architecture such as the PlayStation 4 and how the engine would deal with the addition of different kinds of input devices.

There were some marks awarded for graphics quality, and since the target platforms were both Microsoft, DirectX 11 was used for the graphics. I implemented normal bump mapping to give it a nice look when flying down the tunnel. I also randomly changed the textures of each tunnel section and reset sections to the end of the sequence once they passed behind the frustum, giving the impression of an endless tunnel with non-repeating sections.
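The endless tunnel boils down to a simple recycling trick; a hedged sketch of the idea (not the actual game code) looks something like this:

#include <algorithm>
#include <cstdlib>
#include <vector>

struct TunnelSection
{
    float zPosition;     // front face of the section along the tunnel axis
    int   textureIndex;  // which texture variation this section currently uses
};

// Once a section has passed fully behind the player, move it to the end of the
// sequence and give it a freshly randomised texture, so the tunnel never ends
// and never visibly repeats.
void recycleSections(std::vector<TunnelSection>& sections, float cameraZ,
                     float sectionLength, int textureCount)
{
    float furthestZ = 0.0f;
    for (const TunnelSection& s : sections)
        furthestZ = std::max(furthestZ, s.zPosition);

    for (TunnelSection& s : sections)
    {
        if (s.zPosition + sectionLength < cameraZ)           // fully behind the camera
        {
            s.zPosition    = furthestZ + sectionLength;      // append to the end of the run
            s.textureIndex = std::rand() % textureCount;     // new look for its next pass
            furthestZ      = s.zPosition;
        }
    }
}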

Annoyingly, because the game is a Windows Store application there is no runnable executable, so without actually publishing it to the Store and getting past all the certification requirements, I cannot put it up anywhere to play! What is worse is that currently I know of no screen capture software that can record footage of the game running at a decent FPS: both Fraps and Bandicam fail to capture it since it’s not a desktop application. Bandicam does have desktop capture support, but this also didn’t seem able to see the game and is not suitable for high frame-rate applications. So, as it stands, I can’t make a video of the game running without hardware recording. Hopefully this is something that won’t always be the case.

I was very pleased with the final engine and received a 92% grade for the module. I have since improved upon it and reused design elements for subsequent modules such as Real-time Graphics. I think a lot of what I coded for this project will be extremely useful going forward.

 

 

 

Bit’s Blitz – Puzzle Game


Bit’s Blitz – Puzzle Game

In the third year of my Computer Science BSc (2013), as part of the Commercial Games Development module, we were placed into groups and tasked with producing a computer-themed game designed for children. Each of the group members had to produce a game design document, one of which would be chosen for the group to develop. My group consisted of me, Aaron Ridge, Michael Killingbeck, Andrew Woodrow, Joshua Twigg and Alex Lynch.

The group decided to go with my game design which was inspired by the classic puzzle game Chip’s Challenge, with the idea being to reimagine it and modernise the graphics.

Game synopsis:
“‘Bit’s Blitz’ is a fun 2D puzzle game following the escapades of its protagonist ‘Bit’. The game takes place across a series of levels that gradually increase in difficulty and introduce new game-play elements as they go. The player controls ‘Bit’ around a grid, constrained by a series of maze-like blocks and hazards. ‘Bit’ must successfully collect all the computer components that are scattered around the level and then repair his computer to proceed to the next level.”

Details:
Developed using C# and the XNA framework for the PC platform (Windows XP+).

 

The nice thing about this game design was that, thanks to the low technical overhead of the implementation, we could focus on the puzzle aspect of the game, time and imagination permitting. The tile-based game engine was written from scratch using XNA, utilising XML data structures to store level data and a custom-made loader. A cool and free little program called Tiled was used to ‘paint’ the level layout and export it into our XML format. I’d strongly recommend this to anyone considering a 2D tile-based game for constructing levels; having said that, it’s a nice programming exercise to develop your own editor if you get the chance.

All gameplay aspects, including animations and particle systems, were programmed for the game using no libraries other than XNA. I designed the game framework based on the State Design Pattern, which worked out really well, and I continue to use it for game development.

The use of XML and Tiled allowed us to churn out level designs at an alarming rate, and the final product has over 20 levels! Not bad considering the 2-week development time. When giving the presentation of the game, we literally only had time to demonstrate about 5 of the best levels; an odd problem to have, considering level variety tends to be in short supply for prototypes.

Sound effects were added (free assets); however, I’ve removed these from the video and added music since, honestly, they weren’t brilliant! The above gameplay video demonstrates various levels (played by me). I could barely remember most of the levels, so it’s pretty much a blind play-through with some genuine mistakes.

For the project we all chipped in and the group worked well together. The game was never released or published anywhere, though if anyone is interested I could stick the executable on here for download.

Meshless Real-time Ray Tracing Demo Video



Meshless Real-time Ray Tracer

I was recently asked to put together a video showcasing my ray tracing project for the University of Hull, to show some of the new Computer Science students starting this September. As detailed in my last post, ray tracing was the subject of my third-year dissertation project, and I have since been extending the project into real-time using DirectX 11, hopefully continuing it as part of my MSc by creating a rendering program that can be used to design and produce complex implicit ray-marched geometry through a simple UI.

The video unfortunately had to be recorded at 640×480 resolution to maintain good FPS due to my aging laptop GPU (around 4 years old now!). As a result, I recommend not viewing it in full-screen to avoid scaling ‘fuzziness’.

 

 

Scene Loading:

Recently I have been working on a scene loading system for it, in preparation for implementing a UI with the ability to save and load created scenes. I developed a scene scripting format that allows simple definition of the various distance functions that make up a scene, along with material types and lighting properties. The scene loader parses a scene file and then procedurally generates the HLSL distance field code that will be executed in the pixel shader to render the scene. I’ve used a similar-looking format to POV-Ray’s scene files.

Below is an example of one of my scene files, showing a simple scene with a single sphere, a plane and a single light:

#Scene Test
 
light
{
     position <-1.5, 3, -4.5>
}
 
sphere
{
     radius 1
     position <-2,1,0>
}
material
{
     diffuse <1,0,0,0.25>
     specular <1,1,1,25> 
}
 
plane
{
     normal <0,1,0>
}
material
{
     diffuse <0.5,1,0.5,0.5>
     specular <1,1,1,99> 
}
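Behind the scenes, the loader’s job is mainly string building: each parsed entry becomes a line of HLSL that gets spliced into the distance-field function compiled into the pixel shader. Here’s a hedged C++ sketch of the idea (names such as emitSphere and sdSphere are illustrative, not my actual loader code):

#include <sstream>
#include <string>

struct Vec3 { float x, y, z; };

// Turns a parsed sphere entry into the HLSL line that will evaluate it, e.g.
// emitSphere({-2, 1, 0}, 1) -> "dist = min(dist, sdSphere(p - float3(-2, 1, 0), 1));"
// where sdSphere is a distance-function helper already present in the shader source.
std::string emitSphere(const Vec3& position, float radius)
{
    std::ostringstream hlsl;
    hlsl << "dist = min(dist, sdSphere(p - float3("
         << position.x << ", " << position.y << ", " << position.z << "), "
         << radius << "));";
    return hlsl.str();
}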

More complex operations such as blending can be represented in the scene file as follows:

blend
{
    threshold 1
    sphere
    {
        radius 1
        position <-2,1,0> 
    }    
    torus
    {
        radius <1, 0.44>
        position <2,1,0> 
    }
}
 

Due to the recursive way in which I have implemented the parsing, it also allows me to nest blending operations, like the following series of blended spheres, resulting in a single complex surface:
 

blend
{
     threshold 1
     blend
     {
          threshold 1
          blend
          {
               threshold 1
               sphere
               {
                    radius 1
                    position <-2,1,0>
               }
               sphere
               {
                    radius 1
                    position <2,1,0>
               }
          }
          sphere
          {
               radius 1
               position <0,2,0>
          }
     }
     sphere
     {
          radius 1
          position <0,1,-2>
     }
}
material
{
     diffuse <1,0,1,0.25>
     specular <1,1,1,25> 
}

For a more complex scene featuring blending, twisting and domain repetition, an example scene file looks like this:

#Scene Test
 
light
{
     position <-1.5, 3, -4.5>
}
 
repeatBegin
{
     frequency <8.1,0,8.1>
}
 
twistY
{
     magnitude 0.04
     box
     {
          dimensions <1,4,1>
          position <0,3,0>
     }
}
material
{
     diffuse <1,0.5,0,0.1>
     specular <1,1,1,5> 
}
 
sphere
{
     radius 2
     position <0,9,0>
}
material
{
     diffuse <0,0.5,1,0.5>
     specular <1,1,1,30> 
}
 
repeatEnd
 
plane
{
     normal <0,1,0>
}
material
{
     diffuse <0.2,0.2,0.2,0.5>
     specular <1,1,1,99> 
}

Currently my scene files support spheres, cubes, tori and also a ‘Blob’ shape, which takes any number of component spheres as parameters and blends them together. The format also supports custom blending of the above shapes, plus domain twisting and repetition operations. Materials can be specified with both diffuse and specular components, with the 4th component of the diffuse tuple representing reflectivity and the 4th component of the specular tuple representing shininess.
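For anyone wondering what the generated distance functions actually look like, here’s a small C++ rendition of the idea (the real versions live in HLSL): signed distance functions for a sphere and a box, plus the standard polynomial smooth-minimum that drives the blend operation, with k playing the role of my ‘threshold’ value.

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float length(const Vec3& v)
{
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// Signed distance from point p to a sphere of the given radius centred at the origin.
float sdSphere(const Vec3& p, float radius)
{
    return length(p) - radius;
}

// Signed distance from point p to an axis-aligned box with half-extents b.
float sdBox(const Vec3& p, const Vec3& b)
{
    Vec3 q = { std::fabs(p.x) - b.x, std::fabs(p.y) - b.y, std::fabs(p.z) - b.z };
    Vec3 outside = { std::max(q.x, 0.0f), std::max(q.y, 0.0f), std::max(q.z, 0.0f) };
    return length(outside) + std::min(std::max(q.x, std::max(q.y, q.z)), 0.0f);
}

// Polynomial smooth minimum: blends two distances into one seamless surface,
// with k controlling how wide the blend region is (my 'threshold' parameter).
float smoothBlend(float d1, float d2, float k)
{
    float h = std::max(k - std::fabs(d1 - d2), 0.0f) / k;
    return std::min(d1, d2) - h * h * k * 0.25f;
}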

 

As the project develops, I’ll need to implement a way of creating custom distance functions that aren’t just template primitive shapes, but are defined more generally to allow users to create surfaces using anchor points. This will likely be a main focus for my masters dissertation if I take this topic.

 

CUDA Ray Tracer – Dissertation Project


After on-and-off work for a year, and many thousands of words later, my final-year BSc dissertation project and report were completed. Can a ray tracer ever be truly ‘complete’? This post is a brief description and summary of my project.

A download of my full dissertation report can be found below, as well as a few renderings from my prototypes:

Report:


Prototype Renderings:

The project, from a personal point of view, was an important one. It was a period where I gained a heightened interest in graphics programming, an understanding of the principles of computer graphics and the mathematics involved, and also the creative satisfaction that comes from it. When creating realistic virtual graphics from essentially nothing but code, maths and a display, it’s very easy to gloss over the ‘magic’ of it all, especially when you understand the complexity of how we actually perceive the Universe and the shortcuts that must be taken for computers to convincingly mimic our brain’s visual perception of natural phenomena.

 


CUDA Ray Tracer – Dissertation Project

 

A Bit of Biology and Philosophy:

The modern computer, when you think of it, is really just a primitive extension of our own bodies, simple enough that we can manipulate, manage and understand it with much greater control and predictability than our biology. Computers allow us to achieve things we could not otherwise do, and many of the components inside a computer carry out very similar roles to organs found within us. Of course we can think of the CPU as a brain, but what else? Going into more detail, the GPU could be seen as a specialised part of the brain engineered to handle visual computation, just as our brain has its own visual cortex. A virtual camera in a rendering program replicates the capabilities of part of our eye, defining an aperture or lens through which to calculate rays of light, and likewise an ‘image plane’ positioned in front of the camera carries out essentially the same functionality as our retina, but using pixels to make up the visual image of what we see.

When you understand the detailed steps required to render something in 3D, you realise that we are essentially trying to recreate our own little simplified universe. It’s a pretty profound concept that, when taken much further, manifests itself in popular science fiction such as The Matrix. After all, is mathematics not simply the ‘code’ of our Universe? It’s perhaps not as silly as it may sound when you get down to the fundamentals of game developers creating virtual worlds, graphics programming being an essential component, and see just how real and immersive these worlds are starting to become.

So What Is Ray Tracing?

Ray Tracing

Of all popular rendering techniques, it’s ray tracing that perhaps stands out the most in respect to my previous comments above. We all know roughly how and why we see: light rays shine from a light source such as our Sun, they travel millions of miles to get to us, and out of the infinite number of rays, the tiniest percentage may find its way directly into our eye. This could be from directly looking at the Sun (not recommended!), or from scattered or reflected light that has hit a surface, finding its way on a collision course with our eye.
This is fundamentally close to how ray tracing works, but with important differences. If a computer had to calculate the trajectory of all possible rays being fired out from a light source, this would be impossible with modern hardware; there are just too many potential rays, of which only an infinitesimally small number would ever find their way into the camera (eye) of the scene, and it’s only these rays we are interested in anyway. Instead, in what is referred to as ‘backwards ray tracing’, light is fired from the camera (eye) into the scene and then traced as it is reflected, refracted or simply absorbed by whatever material it hits. We then only have to fire a ray from the camera for each pixel in the image, which is still potentially a considerable number of rays (1920×1080 = 2,073,600 primary rays), and that’s without counting all the secondary rays as light scatters throughout the scene, but at least this reduced number is quite feasible.
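To make that concrete, here’s a minimal sketch (simplified C++, not my dissertation code) of generating the primary ray direction for a pixel, assuming a camera at the origin looking down +z with a 60 degree vertical field of view:

#include <cmath>

struct Vec3 { float x, y, z; };

// Direction of the primary ray through the centre of pixel (px, py).
Vec3 primaryRayDirection(int px, int py, int width, int height)
{
    const float fov    = 60.0f * 3.14159265f / 180.0f;     // vertical field of view in radians
    const float aspect = float(width) / float(height);

    // Map the pixel centre onto an image plane sitting at z = 1 in front of the eye.
    float x = (2.0f * (px + 0.5f) / width - 1.0f) * std::tan(fov * 0.5f) * aspect;
    float y = (1.0f - 2.0f * (py + 0.5f) / height) * std::tan(fov * 0.5f);

    float len = std::sqrt(x * x + y * y + 1.0f);
    return { x / len, y / len, 1.0f / len };                // normalised direction into the scene
}

// For a 1920x1080 image this is called 2,073,600 times: one primary ray per pixel.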

Still, it is ray tracing’s close resemblance to how light interacts with us in the real world that makes it a very elegant and simple algorithm for rendering images, allowing for what is known as ‘physically based rendering’, where light is simulated to create realistic-looking scenes with mathematically accurate shadows, caustics and more advanced features such as ‘global illumination’, something that other faster and more common rendering techniques like rasterization (pipeline-based rendering) cannot do.

Illumination and Shading:


Phong shading

The main job of firing the rays into a scene in the first place is to determine what colour each pixel in our image should be. This is found by looking at what a ray hits when fired into the 3D scene. Put simply, if it hits a red sphere, the pixel is set to red. We can define the material information for every object in the scene in similar fashion to how we know, in the real world, that a matt yellow box reflects light. Technically, the box is yellow because it reflects yellow light, and is matt (not shiny) because it has a microscopically uneven (diffuse) surface that scatters the light more evenly away from the surface. Compare this to light hitting a smooth (specular) surface: most of the light would bounce off in the same direction and appear shiny to our eyes. Clearly, for computer graphics, we are not likely to program a surface material literally in such microscopic detail as to define whether it is rough or smooth, but we can cheat using a popular and effective local illumination model such as Phong, essentially using the ‘normal’ of a surface, the directions of our light source and camera, and some vector maths to put it all together and calculate the colour of the surface based on its material and angle, creating a smoothly shaded object rather than a ‘flat’ colour.
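In code, the Phong model described above boils down to a few dot products; here’s a hedged, minimal C++ sketch (a white specular highlight and a fixed ambient term are assumed for brevity):

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// n = surface normal, l = direction to the light, v = direction to the camera;
// all three must be normalised. Returns the shaded surface colour.
Vec3 phong(const Vec3& n, const Vec3& l, const Vec3& v,
           const Vec3& diffuseColour, float shininess)
{
    const float ambient = 0.1f;                                  // constant fill light
    float diffuse = std::max(dot(n, l), 0.0f);                   // Lambertian term

    // Reflect the light direction about the normal: r = 2(n.l)n - l.
    Vec3 r = { 2.0f * dot(n, l) * n.x - l.x,
               2.0f * dot(n, l) * n.y - l.y,
               2.0f * dot(n, l) * n.z - l.z };
    float specular = std::pow(std::max(dot(r, v), 0.0f), shininess);

    return { diffuseColour.x * (ambient + diffuse) + specular,
             diffuseColour.y * (ambient + diffuse) + specular,
             diffuseColour.z * (ambient + diffuse) + specular };
}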

Intersections, Distance Functions and Ray Marching:

Implicit Functions

So we know why we need to fire the rays, but how do we know a ray has hit a surface? There are a few different ways this can be done, all down to the complexity of the geometry you’re trying to render. Ray intersections with simple shapes such as planes or spheres can be calculated precisely using linear and quadratic equations respectively. Additionally, for complex explicit 3D models made from triangle meshes, linear algebra and vector maths can be used to compute the intersections.
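As an example of the ‘precise’ case, here’s a hedged C++ sketch of the standard quadratic ray/sphere intersection: substitute the ray o + t*d into the sphere equation |p - c|^2 = r^2 and solve for t.

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  operator-(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(const Vec3& a, const Vec3& b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Ray origin o, normalised direction d, sphere centre c, radius r.
// Returns true and the nearest positive hit distance t if the ray hits the sphere.
bool intersectSphere(const Vec3& o, const Vec3& d, const Vec3& c, float r, float& t)
{
    Vec3  oc   = o - c;
    float b    = dot(oc, d);                     // half of the usual 'b' term, since a = 1
    float disc = b * b - (dot(oc, oc) - r * r);  // discriminant of the quadratic

    if (disc < 0.0f)
        return false;                            // the ray misses the sphere entirely

    float sqrtDisc = std::sqrt(disc);
    t = -b - sqrtDisc;                           // try the nearer root first
    if (t < 0.0f)
        t = -b + sqrtDisc;                       // camera might be inside the sphere
    return t >= 0.0f;
}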

Another technique has been gaining popularity in recent years, despite having been around for quite some time in academic circles. Rendering complex implicit geometry using ‘distance functions’ with nothing but a pixel shader on your GPU, as shown on websites like Shadertoy, has popularised a subset of ray tracing called ‘ray marching’, requiring no 3D mesh models, vertices or even textures to produce startlingly realistic real-time 3D renderings. It is, in fact, this very freedom from mesh constraints that is apparent when you observe the complex, organic and smooth ray-marched geometry possible using the technique. Ray marching allows you to do things you simply cannot do using explicit meshes, such as blending surfaces seamlessly together, akin to sticking two lumps of clay together to form a more complicated object. Endless repetition of objects throughout a scene at little extra cost, using simple modulus maths, is another nifty trick allowing for infinite scenes. By manipulating the surface positions along cast rays, you can effectively transform your objects: twist, contort and even animate; it’s all good stuff.
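And the ray marching loop itself, in hedged C++ form (the real thing runs in a pixel shader): step along the ray by whatever distance the scene’s distance function reports is ‘safe’, and stop when you get close enough to a surface. The distance function here also shows the modulus-based repetition trick, repeating a unit sphere every 4 units along x.

#include <cmath>

struct Vec3 { float x, y, z; };

// Distance from point p to the nearest surface: a unit sphere repeated every
// 4 units along the x axis, using modulus to fold space into a single cell.
static float sceneDistance(const Vec3& p)
{
    const float cell = 4.0f;
    float x = std::fmod(std::fmod(p.x, cell) + cell, cell) - cell * 0.5f;
    return std::sqrt(x * x + p.y * p.y + p.z * p.z) - 1.0f;
}

// March from origin o along normalised direction d. Returns true on a hit,
// with tHit set to the distance travelled along the ray.
bool rayMarch(const Vec3& o, const Vec3& d, float& tHit)
{
    float t = 0.0f;
    for (int i = 0; i < 128; ++i)                               // fixed step budget
    {
        Vec3  p    = { o.x + d.x * t, o.y + d.y * t, o.z + d.z * t };
        float dist = sceneDistance(p);

        if (dist < 0.001f) { tHit = t; return true; }           // close enough: call it a hit
        t += dist;                                              // safe step: nothing is nearer than this
        if (t > 100.0f) break;                                  // ray has left the interesting part of the scene
    }
    return false;
}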

The Dissertation Project:

My development project comprised two parts: a prototype phase to create a ray tracer using GPGPU techniques, and a hefty report detailing the theory, implementation and outcomes. For those unfamiliar, general-purpose computing on graphics processing units (GPGPU) is an area of programming aimed at using the specialised hardware found in GPUs to perform arithmetic tasks normally carried out by the CPU, and is widely used in supercomputing. Though an individual CPU core is much more powerful than a single GPU processor, GPUs make up for it in sheer numbers, meaning they excel and outperform CPUs when computing simple, highly parallel tasks. Ray tracing is one such highly parallel candidate that is well suited to GPGPU techniques, and for my dissertation I was tasked with using NVIDIA’s GPGPU framework, CUDA, to create an offline ray tracer from scratch, using no existing graphics API. Offline rendering means not real-time; it is clearly unsuitable for games, yet is commonly used in the 3D graphics industry for big-budget animations like those by Pixar and DreamWorks, with each frame individually rendered to ultra-high quality, sometimes over a period in excess of 24 hours for a single frame.
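To give a flavour of the CUDA side (a hedged sketch rather than my actual kernel), the essence is that one GPU thread computes exactly one pixel, so a single kernel launch traces the whole image across millions of threads. The shading is stubbed out with a gradient just to keep the sketch self-contained.

#include <cuda_runtime.h>

// Stand-in for the real per-pixel work: build the primary ray, intersect the scene, shade.
__device__ float3 tracePixel(int px, int py, int width, int height)
{
    return make_float3(px / (float)width, py / (float)height, 0.2f);
}

// One thread per pixel.
__global__ void traceKernel(float3* frameBuffer, int width, int height)
{
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= width || py >= height)
        return;                                        // threads past the image edge do nothing

    frameBuffer[py * width + px] = tracePixel(px, py, width, height);
}

// Host side, launched with one thread per pixel in 16x16 blocks:
//   dim3 block(16, 16);
//   dim3 grid((width + 15) / 16, (height + 15) / 16);
//   traceKernel<<<grid, block>>>(devFrameBuffer, width, height);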

In the end I produced four different ray tracing prototypes for comparison, incorporating the previously mentioned techniques. Prototype 1 ran purely on a single CPU thread, using simple implicit intersections of spheres and planes. Prototype 2 was the same, but implemented using a single CUDA kernel and running purely on the GPU across millions of threads. Prototype 3 was a CPU ray marcher using distance functions to render more complex implicit geometry. Prototype 4 was the same as 3, but implemented using CUDA. My aim for the project was to assess GPGPU performance and the rendering qualities of the ray marching technique, the findings of which can be found in the report.

I knew when I picked this project that I was not taking on an easy topic by any stretch, and a great thing I can take away from it is the extensive research experience and planning needed to simultaneously implement many difficult concepts I had no prior knowledge of, while still managing to produce a cohesive project and fully working prototypes, achieving an 88% mark for my efforts, which I am very pleased with. As expected, with hindsight there are things I would do differently if I repeated it, but nothing too major, and really, it’s all part of the learning process.

Ray tracing, ray marching, GPGPU, CUDA, distance functions and implicit geometry were all concepts I had to pick up and learn. I bought some books, but in the end, research on the internet in the form of tutorials, blogs, academic papers and lectures proved more beneficial. Sometimes it takes a certain way of presenting the information for your brain to ‘click’ with certain principles, and all of us are different. The Internet is a treasure trove in this regard: if you spend the time, you can usually find some explanation that will suit your grey matter; failing that, re-reading it a million times can sometimes help!

Future Plans:

On the back of this, I will be continuing this subject into my masters degree and will likely be pursuing it further during my masters dissertation. I am already busy at work on a real-time implicit renderer with UI functionality running in DirectX 11 (a couple of early screenshots above). Additionally, I’d love to get a chance to contribute to a research paper on the subject, but we’ll see.

I plan to make some easy-to-follow tutorials on implementing ray tracing and ray marching for this website at some point, when I get the chance. Hopefully they could help out other students or anyone else wanting to learn the aforementioned topics. I know first-hand, and from friends, that it can be frustrating at times: although there is plenty of theory out there, there is comparatively little information on actual implementation details for the subject compared to, say, pipeline-based rendering.