Procedural RPG World Generation


Having now completed my MSc, below is a brief summary of my dissertation project along with galleries and a video of the prototype. There’s also a download of the full report detailing the implementation process along with background on the topic for those interested in procedural content generation or studying something related.

Report:


Video:

Since the days of Rogue and Elite, games have utilised various procedural content generation techniques to create game worlds for players to explore, freeing developers from the hand-crafted approach typically seen in the majority of games. For me, it was the second Elder Scrolls game, Daggerfall, in '96 that inspired me enough to prompt this choice of topic for my MSc dissertation project. Although Daggerfall was most certainly a flawed game, the sheer size of its game world is still unsurpassed even today, being roughly 162,000 square kilometres (about half the size of Great Britain) and featuring over 15,000 towns, villages and dungeons. An amusing rumour is that it's so big you could fit every subsequent Elder Scrolls game world into a single pixel on Daggerfall's world map.

When you have a game world that big, procedural content generation (PCG) is the only feasible way to populate it. Daggerfall's world was generated 'offline' and shipped on the game media, making the world the same every time you played it. Its main storyline areas and characters were hand-crafted, but the rest of its towns, dungeons and wilderness areas were all generated.

Scale comparison of the Elder Scrolls games.

What I wanted to do was tackle a project that aimed to generate an RPG world in real time, so each world would be unique, and ultimately create an explorable 3D RPG world generator. What I actually wanted to do was create a full RPG game to play within these generated worlds (i.e. my dream game), but clearly that would never have been feasible in the time-frame, so I settled for a compromise: removing any game mechanics or AI from the project, effectively stripping out the 'game' aspect. Even with this, the project workload was going to be ridiculous, considering I wanted to use my own DirectX engine to generate the world, complete with dungeons, NPC towns and a day/night cycle.

Unlike most of my previous projects, there wasn't going to be much focus on graphics, which actually fit nicely with my retro vision for a more modern-looking Daggerfall-esque game, complete with sprites…lots of sprites.

My report can be found at the top of this post if you're curious about some of the techniques I used in the prototype. I had little knowledge of how other games have really approached this from a technical point of view, other than what I had uncovered during my research on the topic. The developed prototype is therefore very much my own approach.

Since the detail is all in the above report, I'll just briefly mention some of the techniques the prototype involved:

The world generation itself was created using a procedural noise technique to generate a height-map. Multiple octaves of value noise are combined (fractional Brownian motion) to create a resulting fractal noise suitable for generating realistic terrain formations. The noise implementation I used was specifically Voronoise, a method that combines a value grid-based noise type and a 'jittered' grid version of Voronoi (cellular noise) into a single adjustable function. I introduced a seed value into the noise generation to allow for reproducibility of worlds, given the same seed. The height-map is output in the pixel shader to a render target upon generation, and then used during the tessellation shader stages via patch control-point displacement when rendering the world.
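
For the curious, here's a minimal sketch of that octave summation in C++. The little hash-based value noise below is only a stand-in for the prototype's Voronoise (which I won't reproduce here), but the fBm loop, the lacunarity/gain parameters and the seeding match the idea described above:

    #include <cmath>
    #include <cstdint>

    // Stand-in smooth grid noise in [-1, 1]. The prototype used Voronoise;
    // any seedable value noise behaves the same for the purposes of fBm.
    static float hash2(int xi, int yi, uint32_t seed)
    {
        uint32_t h = seed;
        h ^= uint32_t(xi) * 374761393u;
        h ^= uint32_t(yi) * 668265263u;
        h = (h ^ (h >> 13)) * 1274126177u;
        return float(h & 0xFFFFFFu) / float(0x7FFFFFu) - 1.0f;
    }

    static float valueNoise(float x, float y, uint32_t seed)
    {
        int xi = int(std::floor(x)), yi = int(std::floor(y));
        float tx = x - float(xi), ty = y - float(yi);
        tx = tx * tx * (3.0f - 2.0f * tx);            // smoothstep fade
        ty = ty * ty * (3.0f - 2.0f * ty);
        float a = hash2(xi, yi, seed),     b = hash2(xi + 1, yi, seed);
        float c = hash2(xi, yi + 1, seed), d = hash2(xi + 1, yi + 1, seed);
        float top = a + (b - a) * tx, bot = c + (d - c) * tx;
        return top + (bot - top) * ty;                // bilinear blend
    }

    // fBm: sum octaves of noise, each at double the frequency (lacunarity)
    // and half the amplitude (gain) of the last. The seed offsets per octave,
    // so the same seed always reproduces the same world.
    float fbm(float x, float y, uint32_t seed, int octaves,
              float lacunarity = 2.0f, float gain = 0.5f)
    {
        float sum = 0.0f, amplitude = 1.0f, frequency = 1.0f;
        for (int i = 0; i < octaves; ++i)
        {
            sum += amplitude * valueNoise(x * frequency, y * frequency, seed + i);
            frequency *= lacunarity;
            amplitude *= gain;
        }
        return sum;
    }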


Summation of noise octaves.


A variety of generated worlds.

The prototype's generated world size is not huge like Daggerfall's, but it's a fair size at around 16.8 square kilometres (16,777,216 m²). That's a little under half the size of Skyrim's world, for example, but for a little prototype I'm happy with this, and it still allows plenty of explorable terrain using the appropriate movement speed rather than the super-fast one seen in my video!

Dungeons use a completely different generation method that I implemented off the top of my head after looking into various techniques. It's an agent-based technique that uses 'diggers' to burrow out corridors and rooms, with various rules thrown in to keep them in check and ensure they generate sensible-looking dungeons. The diggers are also responsible for spawning the dungeons' contents, including monsters, treasure chests and the up and down stairs. Here are some ASCII representations of the dungeon layouts generated by the method:

Generated dungeon layouts (ASCII).
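
To give a flavour of the approach, below is a small self-contained C++ sketch of the agent-based idea, not the prototype's actual rule set: diggers burrow corridors, occasionally turn, carve rooms or spawn child diggers, and the result prints as ASCII.

    #include <cstdlib>
    #include <iostream>
    #include <string>
    #include <vector>

    // Toy digger sketch: agents burrow corridors through a solid grid ('#'),
    // occasionally turning, carving small rooms or branching. The prototype's
    // rules (room shapes, content spawning, stairs) are far richer.
    int main()
    {
        const int W = 48, H = 24;
        std::vector<std::string> map(H, std::string(W, '#'));
        struct Digger { int x, y, dx, dy; };
        std::vector<Digger> diggers{ { W / 2, H / 2, 1, 0 } };
        std::srand(12345);                          // seed => reproducible layout

        for (int step = 0; step < 400; ++step)
        {
            std::vector<Digger> spawned;
            for (Digger& d : diggers)
            {
                int nx = d.x + d.dx, ny = d.y + d.dy;
                // Rule: stay off the map border; bounce if we'd leave it.
                if (nx < 1 || nx > W - 2 || ny < 1 || ny > H - 2)
                {
                    d.dx = -d.dx; d.dy = -d.dy;
                    continue;
                }
                d.x = nx; d.y = ny;
                map[d.y][d.x] = '.';
                int r = std::rand() % 100;
                if (r < 20)      { int t = d.dx; d.dx = -d.dy; d.dy = t; } // turn 90 degrees
                else if (r < 25)                                           // carve a 3x3 room
                    for (int y = d.y - 1; y <= d.y + 1; ++y)
                        for (int x = d.x - 1; x <= d.x + 1; ++x)
                            map[y][x] = '.';
                else if (r < 28 && diggers.size() + spawned.size() < 6)    // branch
                    spawned.push_back({ d.x, d.y, -d.dy, d.dx });
            }
            diggers.insert(diggers.end(), spawned.begin(), spawned.end());
        }
        for (const std::string& row : map) std::cout << row << '\n';
    }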

The world is divided up into 32×32 terrain chunks, each responsible for hosting its respective game objects such as flora, fauna, towns and dungeon entrances. For performance purposes frustum culling was a necessity due to the large scale of the terrain, so only chunks visible in the frustum are processed. Each chunk has a chance of creating towns and/or dungeons, with checks such as suitably flat terrain being important factors in determining this. Each building performs a suitability check on the terrain mesh at a chosen spot to see if it's within the gradient threshold, and if so places a random structure. If enough buildings are present in a town, NPCs will spawn within proximity of the town.
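
The gradient test amounts to something like the following sketch, where heightAt() stands in for a sample of the generated height-map and the names and exact test are illustrative rather than the prototype's:

    #include <algorithm>

    float heightAt(float x, float z);   // supplied by the terrain system

    // Flat-ground test for placing a building: sample the terrain height at
    // the candidate footprint's corners and reject the spot if the height
    // variation per metre exceeds a gradient threshold.
    bool isBuildable(float cx, float cz, float halfExtent, float maxGradient)
    {
        const float h[4] = {
            heightAt(cx - halfExtent, cz - halfExtent),
            heightAt(cx + halfExtent, cz - halfExtent),
            heightAt(cx - halfExtent, cz + halfExtent),
            heightAt(cx + halfExtent, cz + halfExtent)
        };
        float lo = h[0], hi = h[0];
        for (float v : h) { lo = std::min(lo, v); hi = std::max(hi, v); }
        return (hi - lo) / (2.0f * halfExtent) <= maxGradient;
    }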

I added a few small graphical enhancements to the game such as faked atmospheric scattering, fog, layered sky domes, water and emission-mapped buildings at night. They are each detailed in the report, but ultimately time was limited and any graphical enhancements were really a secondary concern. Despite this, I really wanted to add them, and I think they do enough to achieve the overall atmosphere I had envisaged, as demonstrated in the below comparison with a Daggerfall screenshot:


Aesthetic comparison between Daggerfall (left) and prototype (right).

The prototype initially starts in the table view, where a map of the generated world is shown that can be rotated and zoomed in/out for examination. At a key press the camera moves into first-person mode and plonks the player into the world. Worlds can be generated in first-person mode, but it's much more intuitive to do it in the table view. By tweaking the various settings in the UI (noise values, town frequency, tree density etc.), worlds can be tailored to whatever style you want, although currently you have to understand each of the noise settings and their influence on the generation process to create something you have in mind. Failing that, there's trial and error. Ultimately I'll add predefined terrain settings that can be selected to simplify this process, since it's really not intuitive to know how 'lacunarity', 'gain' or 'frequency', for instance, will affect the world; academically though, it's quite useful to have them directly tweakable. A seed value can be entered directly into the UI, with every unique value resulting in a unique world.

I hope at some point to continue with the project, though there will be a hiatus for the foreseeable future while I work on other things. There is near-infinite scope for the project, with so many things to add, so it's likely something I can keep coming back to.

I also produced a nifty tool for visualising noise, which could have various uses for demoing. I'll probably get this uploaded, along with a download of the prototype itself, at some point.

As detailed in the report, the prototype uses various art assets (models/textures) sourced online under Creative Commons licences. The project is for non-commercial use, and many of the art assets are effectively placeholders used to finish the prototype during my studies.


Update and MSc Results

It's been 8 months since my last post, and since then I have become a dad, completed my master's, relocated and become a programmer in the games industry! So there's been much to talk about but little time to do it. It's been a crazy year.

Work is keeping me extremely busy, as is family life, so with what little time I do get I tend to try and keep my hand in with gaming. Having said this, I have a large backlog of games to play through, including Fallout 4, which I've yet to even install. Due to all the above, this blog's been a little abandoned, though it's served a very useful purpose in helping to display my portfolio and get me a job, something I'd strongly recommend to any aspiring game developer. I'll endeavour to post more now I have my weekends back, and hopefully useful things, not just…stuff? Hopefully I'll be getting back into some hobby programming projects I'm wanting to do, such as some WebGL ray tracing for this site, and I'm sure I can put together some good tutorials that will benefit all.

So, university then. It's over. Done. 4 years of very hard toil, and the question is: was it worth it? A resounding YES is the answer, of course. I'm lucky that I'm now in the position I was hoping for 4 years ago, building invaluable experience working in industry.

The University of Hull provided an excellent place of learning over the course of my BSc and MSc and, importantly, opened the doors needed to get me into what is a highly competitive industry. I'd like to thank all of the lecturers, supervisors and staff I've worked with over the years who made it a very positive experience. A good university is of course important in determining how much you take away from your time studying, but I will say that THE most important thing is your determination and self-motivation. You can coast through a CompSci degree and take very little from it. Hopefully my grades demonstrate that I put my all into it, and at times, particularly in the MSc, the workload was intense. Intense like driving home at dawn from the lab having done 16 hours of Red Bull and vending-machine fuelled programming, knowing you need to be back in the lab in a few hours to do it all over again.

MSc Results:

Here are the results as per the University’s module results site:

With an overall average of 89.9%, I should be comfortably in the distinction category for my master's, which I am thrilled about.

The big module for the MSc is the dissertation project, and having done a pure graphics project in CUDA ray tracing for my BSc, I decided to suggest my own topic this time around, settling on procedural content generation in RPGs, a subject I have long been fascinated by. The scope of the project was massive, including the design and implementation of my own 3D DirectX 11 engine and the creation of an explorable procedurally generated world with procedurally generated dungeons. Needless to say, the process nearly killed me, and the report writing was also tough going since I was moving house with a newborn and working full-time! Considering all this, I achieved a great deal of what I had set out to do, as well as surprising my two supervisors quite considerably when they saw just how much I managed to get done!

I’ll be aiming to make a separate post regarding the dissertation project soon, as well as putting together some sort of video of it to complete my degree portfolio.

The project I'm currently working on at work is really exciting and I wish I could talk about it, but unfortunately I can't…yet. I can say that since starting work I've done some business-orientated Objective-C, worked with Unity, and on my current project I'm working on a very large mixed code-base of mainly C with bits of C++. Let's just say I'm glad I paid attention to all the hex, bit-masking and bit-wise operations you can easily overlook at uni, given they're largely absent from more modern managed languages and coding styles.


Hybrid Rendered Dragon Scene (Ray Marching, Forward Rendering)


This is a quick rundown of my Advanced Rendering coursework submission. It uses my own renderer written in C++ and DirectX 11. Below I'll basically post the report contents that I submitted with the code, detailing how each effect was implemented.

Effect Descriptions

Effect 1: Chamber Room Environment

The chamber walls and ceiling were ray traced by ray marching implicit geometry defined by distance functions.

The walls and ceiling are done inside the pixel shader on a screen-sized quad. I then perform a second ray tracing pass for the interior pillar geometry. I did this in a separate pass in order to blend the geometry in the correct order, i.e. the pillars needed to sit on top of the forward-rendered floor, which meant rendering first the walls, then the floor and finally the pillars. The hybrid ray tracing and forward rendering passes were combined in the scene using blending.

The structure comprises 4 large-radius spheres for efficiency. The texture and bump-mapping effect is done by performing a texture lookup during ray marching and modifying the distance function to adjust the intersection point on the ray based on the texture sample.
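
The core of the technique is the classic sphere-tracing loop, which in the shader runs per pixel against the scene's combined distance function. Here it is sketched in C++, with a single sphere standing in for the chamber geometry:

    #include <cmath>

    struct Vec3 { float x, y, z; };
    static Vec3  add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    static Vec3  mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
    static float len(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

    // Signed distance from point p to a sphere centred at c.
    static float sdSphere(Vec3 p, Vec3 c, float radius)
    {
        return len({ p.x - c.x, p.y - c.y, p.z - c.z }) - radius;
    }

    // Sphere tracing: advance along the ray by the distance to the nearest
    // surface until we're close enough to call it a hit. In the shader the
    // scene function is a min() over all the chamber's distance functions.
    bool rayMarch(Vec3 origin, Vec3 dir, float& tHit)
    {
        float t = 0.0f;
        for (int i = 0; i < 128; ++i)
        {
            Vec3 p = add(origin, mul(dir, t));
            float d = sdSphere(p, { 0.0f, 0.0f, 5.0f }, 1.0f); // stand-in scene SDF
            if (d < 0.001f) { tHit = t; return true; }         // hit the surface
            t += d;                                            // largest safe step
            if (t > 100.0f) break;                             // escaped the scene
        }
        return false;
    }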

All lighting in the program is done based on the ‘Blinn-Phong’ reflection model.
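For reference, the specular half of Blinn-Phong boils down to the half-vector calculation below (reusing the Vec3 helpers from the previous sketch); the diffuse term is the usual N·L:

    // Blinn-Phong specular: driven by the angle between the surface normal
    // and the half-vector between the light and view directions.
    // All inputs are assumed to be unit vectors.
    float blinnPhongSpecular(Vec3 n, Vec3 toLight, Vec3 toEye, float shininess)
    {
        Vec3 h = add(toLight, toEye);
        h = mul(h, 1.0f / len(h));                        // normalise
        float nDotH = n.x * h.x + n.y * h.y + n.z * h.z;
        return std::pow(nDotH > 0.0f ? nDotH : 0.0f, shininess);
    }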

Effect 2: Animated Dragon

The dragon is a forward-rendered basic mesh model with texture mapping and shading. It is animated via the vertex shader, which performs multiple different motions of local body parts: the tail sways up and down, while the neck and head move gently but independently of each other. Breathing was also emulated on the dragon's torso and throat.

The animation aims to give the impression of a living, breathing creature guarding its treasure hoard. The animations themselves were performed by passing a timer value into the vertex shader and using 'smooth step' functions of time, sine and cosine.

Normal bump-mapping is also implemented using a separate normal map texture.

Effect 3: Four Bumpy Stone Pillars

Similar to the walls and ceiling, a separate ray tracing pass was done for the stone pillars. Four capped cylinders were defined using distance functions. The parallax bump-mapping was done in the same way as before.

Effect 4: Geometry Shader-based Particle Systems


Both fire and smoke particle system effects were created on the GPU using the geometry shader. The systems are created from a base mesh model of a cone (procedurally generated). Each cone vertex is input individually into the geometry shader, which then creates an additional 3 vertices to form a quad, effectively transforming the cone into a quad array. The resultant quad is billboarded to ensure it is always facing the camera.
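
The per-vertex expansion amounts to the following corner arithmetic, sketched in C++ with the Vec3 helpers above; in the geometry shader the same maths uses the camera's right and up axes pulled from the view matrix:

    // What the geometry shader does per particle: expand a single point
    // into a camera-facing quad using the camera's world-space axes.
    void buildBillboard(Vec3 centre, Vec3 camRight, Vec3 camUp,
                        float halfSize, Vec3 corners[4])
    {
        Vec3 r = mul(camRight, halfSize);
        Vec3 u = mul(camUp, halfSize);
        corners[0] = add(add(centre, mul(r, -1.0f)), mul(u, -1.0f)); // bottom-left
        corners[1] = add(add(centre, r), mul(u, -1.0f));             // bottom-right
        corners[2] = add(add(centre, mul(r, -1.0f)), u);             // top-left
        corners[3] = add(add(centre, r), u);                         // top-right (strip order)
    }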

The particle systems are animated using functions of time, sine and cosine inside the vertex shader. The fire system uses additive blending; the smoke particles use an alpha fade to make them appear transparent.

The centre fire can be toggled to show the original preserved cone shape using the 'FireShape' UI variable.

A mesh model of a wall torch was used to contain the fire and smoke particle systems for each pillar. The torch is forward rendered and features normal bump-mapping. An additional central fire inside a torus brazier was also added.

Effect 5: A Procedural Bumpy Floor

The floor is made from a single quad primitive input into the tessellation stage of the shader pipeline (hull and domain shaders). The quad is tessellated in a triangle domain using a variety of partitioning methods changeable via the UI. The domain shader also perturbs the height of the floor using a 'smooth step' function based on the coordinates of the tessellated triangle patch, sine and cosine. The normals are then recalculated by processing two adjacent positions with the same function, calculating a slope for each and normalising them.

View dependent tessellation is implemented inside the hull shader based on the camera distance from the floor plane. The closer the camera is, the more triangles are tessellated.
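
In C++ form, the height perturbation and the finite-difference normal recalculation look roughly like this (constants illustrative rather than the coursework's actual values; Vec3 helpers as in the earlier sketch):

    // A smooth-stepped sine/cosine height and a normal recovered from
    // adjacent samples per axis (finite differences).
    float floorHeight(float x, float z)
    {
        float s = 0.25f * (std::sin(x * 4.0f) + std::cos(z * 4.0f)) + 0.5f; // [0,1]
        return 0.2f * s * s * (3.0f - 2.0f * s);                            // smooth step
    }

    Vec3 floorNormal(float x, float z)
    {
        const float e = 0.01f;                    // neighbour sample offset
        float dhdx = (floorHeight(x + e, z) - floorHeight(x - e, z)) / (2.0f * e);
        float dhdz = (floorHeight(x, z + e) - floorHeight(x, z - e)) / (2.0f * e);
        Vec3 n = { -dhdx, 1.0f, -dhdz };          // cross of the two tangents
        return mul(n, 1.0f / len(n));             // normalise
    }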

Effect 6: Ellipsoid and Torus using Tessellation Shaders

Both the dragon egg and brazier are made from single points that are input into the pipeline and converted inside the domain shader using parametric representations of an ellipsoid and a torus. This is done by ‘wrapping’ the patch UV coordinate space around the respective shape.
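
The wrapping itself is just the textbook parametric equations, sketched here with illustrative radii (again using the Vec3 type from above):

    // Domain-shader mapping: the patch's UV square becomes angles around
    // the parametric surfaces.
    const float kPi = 3.14159265f;

    Vec3 ellipsoidPoint(float u, float v, float rx, float ry, float rz)
    {
        float theta = u * 2.0f * kPi;             // longitude
        float phi   = v * kPi;                    // latitude, pole to pole
        return { rx * std::sin(phi) * std::cos(theta),
                 ry * std::cos(phi),
                 rz * std::sin(phi) * std::sin(theta) };
    }

    Vec3 torusPoint(float u, float v, float ringRadius, float tubeRadius)
    {
        float theta = u * 2.0f * kPi;             // around the ring
        float phi   = v * 2.0f * kPi;             // around the tube
        float r = ringRadius + tubeRadius * std::cos(phi);
        return { r * std::cos(theta),
                 tubeRadius * std::sin(phi),
                 r * std::sin(theta) };
    }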

Effect 7: Dragon Tail Spikes


The dragon tail spikes were created inside the geometry shader by calculating a single new centroid vertex and utilising the existing vertices to form three new triangle faces. The effect was localised to just the tail using the world position of the vertices.

Extra Features:

Extra features include a strong wooden door made by texturing and bump-mapping a quad. I also added some precious gemstones to the floor, made the same way as the egg (a parametric ellipsoid) but tessellated much less to make them look more geometric.


This coursework took me in the region of 2-3 weeks, including research and learning the more advanced shader pipeline stages such as hardware tessellation and geometry shaders. Blending the scene components together was quite a headache, and there are some noticeable blocky bits where the particle systems overlap, caused by some issues I had blending everything together. Despite this, it was a great learning opportunity for some of the more advanced forward rendering techniques, and luckily my past experience with ray tracing helped a great deal. In the end I received a mark of 96% for it.

3D Pinball Game – Development Project


This is a 3D pinball game developed as part of my MSc in Computer Science. The module was a group project, and we were tasked with developing a 3D pinball game without using an existing proprietary game engine (such as Unity or Unreal).

I developed an easy to use DX11 renderer for use by the group and we incorporated the Bullet physics and FMOD libraries to put the game together.

The time constraints on the project were intense, and so this was put together in around 10 days (some crazy hours ensued). Many cans of energy drink and cups of coffee later, this was the result. It's not exactly Pinball FX, but factoring in the time-frame and tool constraints I'm pleased with how it turned out. I wouldn't expect a public release any time soon though!

Gallery:

Bullet physics is pretty fiddly to get up and running and took a bit of research to get to grips with. As with most open-source libraries, there are many conflicting sources of documentation and versions floating around, which often serve only to confuse, but for a free physics library you can hardly complain.

I worked on quite a bit of the project, putting together the renderer and framework that the group used for production. I programmed the graphics, did any required artwork (base textures were sourced online) and worked a lot on the important physics such as the flippers and launch mechanism. With more time we could have improved quite a bit: as it stands, the physics isn't on a fixed time step, nor does it run on an independent thread, so bad things happen if the frame rate gets low. For this reason it's designed to run more or less perfectly on the system we developed it on and were marked on, but it would need a fair bit of improvement to run nicely on any system, and I doubt I'll have time for that any time soon.
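
For anyone curious, the standard remedy we'd have applied with more time is the fixed-timestep accumulator loop sketched below. This is not the project's code; running(), stepPhysics() and render() are hypothetical engine hooks:

    #include <chrono>

    bool running();                  // hypothetical engine hooks
    void stepPhysics(double dt);
    void render();

    // Accumulate real elapsed time and step the physics in fixed increments,
    // so simulation behaviour no longer depends on the render frame rate.
    void gameLoop()
    {
        using clock = std::chrono::steady_clock;
        const double dt = 1.0 / 120.0;           // fixed physics step
        double accumulator = 0.0;
        auto previous = clock::now();

        while (running())
        {
            auto now = clock::now();
            accumulator += std::chrono::duration<double>(now - previous).count();
            previous = now;

            while (accumulator >= dt)            // catch up in fixed steps
            {
                stepPhysics(dt);
                accumulator -= dt;
            }
            render();                            // as often as the GPU allows
        }
    }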

The project was probably my first real taste of game-dev crunch, or the 'death march'. Really it was worse, with 16+ hour days, often leaving the lab after sunrise. In the end I think it was worth it though, and I had actually always quite fancied trying my hand at developing a pinball game!

PS. Thanks to the guys (and gal) for being such a hard-working group.

Falling Object Simulator – Simulation & Concurrency


As part of my MSc Computer Science degree, the Simulation & Concurrency module was probably the most intense module on the course, tasking us to produce a physics engine from scratch, with robust networking and multi-threading integration, in order to implement a simulation of balls falling into a box with removable trays and a cloth-simulated net.

Having done only a little physics programming previously, for 'The Column', I set about researching the topic, since implementing a solid and robust physics engine is no trivial task even without a networking element. Although I found several good sources for specific elements, Ian Millington's 'Game Physics Engine Development' was an excellent book that covered many aspects of getting a basic physics engine up and running. I promptly devoured about a third of the book during this project, though it lacks any real depth on collision detection and doesn't really cover cloth simulation, as I recall.

In the end I received 87% for the ACW, which I'm pleased with. With more time I would have implemented rigid body motion, but this second semester of the MSc has been pretty insane in terms of workload, mainly due to the fact that the UK carries out MSc degrees in a single year rather than the two common elsewhere in the world! Additionally, the University of Hull's MSc is extremely practical, which, although I find it preferable to more theoretical degrees (how better to learn than via implementation?), does result in a heavy workload. The good side is that if you put the work in, you get an extensive portfolio at the end of the degree.

Project Description:

The result of the project was a multi-threaded interactive falling-object simulator developed from scratch using C++ and DirectX 11. The physics engine is a mass-aggregate system using particles, i.e. no rigid body motion. It features simple sphere- and plane-based collision detection and interpenetration resolution.

Each tray features different friction and elasticity attributes as per the specification.

An advanced feature is the cloth simulation for the net, made using a lattice of spring constraints (Hooke's law) with four anchored corners.
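
Each spring constraint boils down to a force of the following form; this is a generic Hooke's-law sketch with damping, not lifted from my engine:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Hooke's law between two cloth particles: a restoring force proportional
    // to the stretch beyond the rest length, plus damping along the spring
    // axis. Every net vertex is linked to its neighbours by such constraints.
    Vec3 springForce(Vec3 pa, Vec3 pb, Vec3 va, Vec3 vb,
                     float restLength, float stiffness, float damping)
    {
        Vec3 d = { pb.x - pa.x, pb.y - pa.y, pb.z - pa.z };
        float dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        Vec3 dir = { d.x / dist, d.y / dist, d.z / dist };
        float stretch = dist - restLength;
        float relVel = (vb.x - va.x) * dir.x + (vb.y - va.y) * dir.y
                     + (vb.z - va.z) * dir.z;      // closing speed along the axis
        float magnitude = stiffness * stretch + damping * relVel;
        return { dir.x * magnitude, dir.y * magnitude, dir.z * magnitude }; // force on a
    }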

Net collision detection is done using small spheres mapped to the vertices of the net. This, however, meant I had to make the springs quite rigid to stop balls from forcing their way in between the vertices, hence the cloth is not very fluid or fluttery.

Without rigid body dynamics, to get the cube rotations I used rod constraints connected to each vertex of the cube. This is a simple way to get rotations using just particles.

Rendering and physics integration are performed on separate threads, with an additional 3 threads for handling networking. The rendering frame rate is hence independent of the simulation, and both can be set to run at specific target rates.

Not shown in the video, but a significant part of the project, is the peer-to-peer networking. The program can be run on 2 peers; each peer communicates and synchronises the simulation using linear interpolation of the scene's physics data. Each peer can be interacted with, i.e. the camera can be moved independently (think multiplayer), and commands such as open/close tray and spawn ball are communicated across the network to the other peer. Network coding was done using Winsock: UDP broadcasting was used purely for peer detection, and TCP for data transmission. Packet-loss and latency resilience was also implemented.

Sandy Snow Globe – Deferred Shading


For the Real-Time Graphics module as part of my MSc in Computer Science, we were tasked with developing a real-time graphics application representing a snow globe, but with a few added twists. Instead of a wintery landscape, the theme would be desert, with specific requirements including a day/night cycle, seasonal effects, shadow mapping and particle systems. Additional marks would be awarded for various advanced features, the highest being deferred shading. Having always wanted to try my hand at implementing it, I went about researching the topic.

I implemented the project using my own engine, which I have been developing during my MSc, written in C++ and utilising DirectX 11. The snow globe features deferred shading, particle systems, blending, PCF-filtered shadow mapping, normal bump mapping, height mapping and environment mapping. It has a simple day/night cycle via two orbiting directional lights (Sun and Moon) and alternating summer/winter seasons: summer nights bring fireflies, winter nights bring snow. Each firefly has a point light, and using deferred shading, significant numbers of lights can be processed while maintaining good performance.

Deferred shading, particularly for non-experts in 3D programming, can be a rather tricky concept to grasp fully, so please find below my own attempt at describing what deferred shading is and why it's a really cool technique.

Deferred Shading: Overview

'Deferred shading' is a multi-pass rendering technique with the distinct advantage of deferring the scene lighting to a second pass, meaning, put simply, that the calculation becomes one of a 2D domain rather than 3D. Usually with standard forward rendering, lighting is calculated in the pixel shader for every interpolated fragment after processing in the vertex shader. This means that every geometric object in your scene will be required to perform the lighting calculations, which in 'Big O' notation looks like O(lights × meshes). The wonderful thing about deferred shading is that by using just one extra pass we can reduce that to O(lights + meshes); or, to look at it another way in terms of fragments, we can reduce it from O(lights × geometryFragments) down to O(lights × screenFragments).

Deferred Shading – Sandy Snow Globe

This has massive implications for performance. With forward rendering, more than half a dozen or so light sources is enough to seriously impact performance, though modern games generally get away with this number by limiting how many are visible at a time. Deferred shading, however, as demonstrated in the above video, can handle many times that number of lights simultaneously with little performance impact. For the coursework I demonstrated a scene with 100 point lights which, although it pushed the GPU a little, still ran comfortably at over 30 FPS.

There are multiple deferred rendering techniques, with 'deferred shading' being a 2-pass solution, unlike 'deferred lighting' which introduces a third pass. Basically, for systems with lower GPU memory such as old-gen console hardware, 'deferred lighting' is preferable since the extra pass allows the size of the 'G-buffer' to be smaller. Deferred shading is a simpler and more elegant solution, but it does require a larger G-buffer and hence is better suited to beefier GPU hardware.


How it Works

Deferred shading works, as described, using two separate rendering passes. The first pass is called the 'geometry pass' and works similarly to a normal pixel shader pass in forward rendering, except instead of outputting to the back buffer, we output to a selection of render targets, collectively referred to as the 'G-buffer'. Each render target stores specific scene information so that, once fed into the second 'lighting pass', the correct lighting calculations can be performed. Exactly what information you store in the G-buffer is fairly flexible, although at a minimum you will require 3 buffers for colour data, normal data and preferably depth information. I say preferably because you could instead choose to store the 3D world position, but this stores superfluous information: using just the depth we can reconstruct the 3D world position of each screen pixel later, at a much cheaper memory cost (1 vs 3 floats per pixel).
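
That reconstruction is essentially a single matrix multiply in the lighting pass. Here's the idea sketched in C++ (names illustrative, D3D-style normalised device coordinates assumed):

    struct Vec3 { float x, y, z; };
    struct Matrix4 { float m[4][4]; };          // row-major 4x4

    // uv is the screen coordinate in [0,1], depth the value sampled from the
    // depth buffer, ivp the inverse of the camera's view-projection matrix.
    Vec3 worldFromDepth(float u, float v, float depth, const Matrix4& ivp)
    {
        // Screen UV back to NDC (x,y in [-1,1] with y flipped; z already [0,1]).
        float ndc[4] = { u * 2.0f - 1.0f, (1.0f - v) * 2.0f - 1.0f, depth, 1.0f };
        float w[4];
        for (int r = 0; r < 4; ++r)
            w[r] = ivp.m[r][0] * ndc[0] + ivp.m[r][1] * ndc[1]
                 + ivp.m[r][2] * ndc[2] + ivp.m[r][3] * ndc[3];
        float invW = 1.0f / w[3];               // undo the perspective divide
        return { w[0] * invW, w[1] * invW, w[2] * invW };
    }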

As a bonus, when it comes to the lighting pass you can further enhance performance by computing lighting on only the pixels affected by a particular light, by representing the light as a basic primitive based on its type: a full-screen quad for a directional light, a sphere for a point light and a cone for a spotlight.


What’s the Catch?

Blending:

This brings us to the added complexity of deferred rendering. Because we effectively flatten the scene into 2D inside our buffers, we lose the ordering information from the scene, meaning that when it comes to blending operations such as those used for transparency, it's hard to know in which order the scene should be arranged. There are, however, a few solutions to this, including manually depth-sorting your geometry and rendering in 'painter's algorithm' fashion, or, even simpler, rendering your transparent objects in a separate forward rendering pass and blending, which is how I achieved the transparent snow globe.

Materials:

Because the whole scene is encoded inside the G-buffer, any information about it that isn't in there the lighting pass simply will not know about. This presents a problem for per-object material properties, because normally these would be passed into the shaders on a 'per object' basis via constant buffers (DirectX); but because our lighting pass runs 'per light' and not 'per object', we have no way of assigning the required material to each object. One simple solution is to use a material ID value, throw it into one of the existing buffers (the colour buffer, say), and then define a material array inside the lighting shader, using the ID as an index.
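
A sketch of that material-ID idea, with illustrative values; in the real shaders the encode/decode happens in HLSL, but the logic is the same:

    // The geometry pass writes a small integer ID into a spare G-buffer
    // channel (e.g. the colour target's alpha); the lighting pass then
    // indexes a material table with it.
    struct Material { float specularIntensity; float specularPower; };

    static const Material g_materials[4] = {
        { 0.0f,  1.0f },   // 0: matte (terrain)
        { 0.4f, 16.0f },   // 1: standard props
        { 1.0f, 64.0f },   // 2: shiny (the glass globe)
        { 0.2f,  8.0f },   // 3: cloth/misc
    };

    Material materialFromGBuffer(float encodedId)
    {
        int id = int(encodedId * 255.0f + 0.5f);   // decode a [0,1] channel
        if (id < 0) id = 0;
        if (id > 3) id = 3;
        return g_materials[id];
    }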

Overall, I'd implement deferred shading for any future project where time is not a concern: it does slightly complicate things, but the benefits more than make up for it. If your game or 3D program doesn't need more than a few lights, then it's not something that is strictly necessary; however, many modern games are already using deferred rendering techniques to enhance scene lighting. I'd also say that if you can successfully implement deferred shading and understand the technique, then you have got to grips with one of the more advanced multi-pass rendering techniques, and that brings with it an enhanced understanding of the graphics pipeline.


Cross-Platform Game Engine


In the first semester of my MSc Computer Science degree as part of the Games Development Architectures module we were tasked to design and implement a cross-platform game engine. A game would also be made using the engine.

The chosen platforms were a Windows PC and a Windows Phone 8 device. Considering Microsoft had developed a Universal Application framework for targeting both of these, I decided to utilise it. This was good in that it simplified cross-platform compatibility, but it introduced a few limitations (namely having to work with the Windows RT platform and the resultant consequences for dealing with inputs via 'ref classes' etc.). Coming from experience with Win32 desktop programs, Windows RT feels very different to program for and much less flexible, but then again Win32 really does need some modernisation.


Project Details:

  • Engine coded in C++ (Visual Studio).
  • DirectX11 rendering engine component coded from scratch.
  • HLSL shaders.
  • The Universal App framework used to contain the code solution and deploy to both platforms.

We were given a design specification for a simple game called 'Tunnel Terror'. It involved the player having to control a vehicle/object through a tunnel, avoiding various obstacles. The speed would gradually increase the longer the player survived, and any collision with an obstacle would result in death. Score was determined by length of survival. I decided to add various extras, including power-ups such as coins and a randomised speed-up/slow-down. The game would need to play on both a PC and a Windows Phone 8 device, allowing for the differing input controls: I decided the PC would utilise the keyboard, whereas the phone would rely on the accelerometer (tilt) sensor to manoeuvre the player through the tunnel. The PC also required a 2-player mode. Main menu, high-score table and game-over screens would be needed, as well as multiple camera modes such as first-person, third-person and death fly-by cameras.


Although marks were given for the game implementation and extra features, much of the module was graded on the engine design, implementation and accompanying report. My report justified the design based on four principles of games architecture: 'Simplicity', 'Reusability', 'Abstractness' and 'Modularity'. Below is an example of the UML design used for my engine's platform-independent rendering component, with examples of how behaviour could be derived for both DirectX and OpenGL.

Platform-independent renderer UML diagram.

In the report we also had to research how we would have implemented the game on next-generation architecture such as the PlayStation 4 and how the engine would deal with the addition of different kinds of input devices.

There were some marks awarded for graphics quality, and since the target platforms were both Microsoft, DirectX 11 was used for the graphics. I implemented normal bump mapping to give the tunnel a nice look when flying down it. I also randomly changed the textures of each tunnel section, and reset sections to the end of the sequence once they passed behind the frustum, to give the impression of an endless tunnel with non-repeating sections.

Annoyingly, because the game is a Windows Store application there is no runnable executable, so without actually publishing it to the Store and getting past all the certification requirements, I cannot put it up anywhere to play! What's worse is that I currently know of no screen-capture software that can record footage of the game running at a decent FPS: both Fraps and Bandicam fail to capture it since it's not a desktop application, and while Bandicam does have desktop-capture support, this also didn't seem able to see the game and isn't suitable for high frame-rate applications. So, as it stands, I can't make a video of the game running without hardware recording. Hopefully this won't always be the case.

I was very pleased with the final engine and received a 92% grade for the module. I have since improved upon it and reused design elements for subsequent modules such as Real-time Graphics. I think a lot of what I coded for this project will be extremely useful going forward.


Bit’s Blitz – Puzzle Game

Bit's Blitz – Puzzle Game

In the third year of my Computer Science BSc (2013), as part of the Commercial Games Development module, we were placed into groups and tasked with producing a computer-themed game designed for children. Each of the group members had to produce a game design document, one of which would be chosen for the group to develop. My group consisted of me, Aaron Ridge, Michael Killingbeck, Andrew Woodrow, Joshua Twigg and Alex Lynch.

The group decided to go with my game design, which was inspired by the classic puzzle game Chip's Challenge, the idea being to reimagine it and modernise the graphics.

Game synopsis:
 “‘Bit’s Blitz’ is a fun 2D puzzle game following the escapades of its protagonist ‘Bit’. The game takes place across a series of levels increasing gradually in difficulty, gradually introducing new game-play elements. The player controls ‘Bit’ around a grid, constrained by a series of maze-like blocks and hazards. ‘Bit’ must successfully collect all the computer components that are scattered around the level and then repair his computer to proceed to the next level.”

Details:
Developed using C# and the XNA framework for the PC platform (Windows XP+).


The nice thing about this game design was that we could focus on the puzzle aspect of the game, time and imagination permitting, due to the low technical implementation overhead. The tile-based game engine was written from scratch using XNA, utilising XML data structures to store level data and a custom-made loader. A cool and free little program called Tiled was used to 'paint' the level layouts and export them into our XML format. I'd strongly recommend it to anyone considering a 2D tile-based game; having said that, it's a nice programming exercise to develop your own level editor if you get the chance.

All gameplay aspects, including animations and particle systems, were programmed for the game using no libraries other than XNA. I designed the game framework based on the State design pattern, which worked out really well, and I continue to use it for game development.

With the use of XML and Tiled we could churn out level designs at an alarming rate, and the final product has over 20 levels! Not bad considering the 2-week development time. When giving the presentation of the game, we literally only had time to demonstrate about 5 of the best levels; odd, considering level variety tends to be in short supply for prototypes.

Sound effects were added (free assets), however I've removed these from the video and added music since, honestly, they weren't brilliant! The above gameplay video demonstrates various levels (played by me). I could barely remember most of the levels, so it's pretty much a blind play-through with some genuine mistakes.

For the project we all chipped in and the group worked well together. The game was never released or published anywhere, though if anyone is interested I could stick the executable on here for download.

Halloween Pumpkins – GLSL Programming


For the Advanced Graphics module as part of my BSc in Computer Science, we were tasked to create a 3D scene with a theme of a ‘Halloween Pumpkin Party’. The scene was produced using RenderMonkey and programmed via GLSL vertex and fragment shaders.

The scene displays a variety of shader effects including: cube mapping, displacement mapping, height bump-mapping, parallax bump-mapping, fragment-based lighting, particle systems, texture billboarding, smooth-step vertex transformations and stencil masks.

Below is a brief description of each component of the scene and how it was implemented.

Environment

Cube Mapped Skybox

I created a new cube map from several textures, building a DDS file with the 'DirectX Texture Tool'. The cube map was then applied to a cube model in RenderMonkey.

Terrain Displacement Map and Height Map

Terrain Displacement Map

The terrain features texture displacement mapping, a height bump map and fragment lighting. It was made using a single tessellated plane with a terrain texture. In the vertex shader I displaced each vertex along its normal using the texture colour values, applying a uniform coefficient to control the scaling.

A separate texture is used for bump mapping to create a grass effect. The height mapping was done by transforming the view direction and light direction into tangent space via a matrix. In the fragment shader I retrieved the height map data, calculated the difference between two pixel samples and determined the normal for each fragment. All other objects in the scene that use height bump maps are done the same way.

Dispersed Fog Particle System

Fog Particles

The fog is implemented using a particle system and quad array. A time coefficient is first calculated, and then another coefficient is used to progressively spread the particles apart from each other. Each quad in the system is billboarded to always face the view, which is achieved using the inverse view matrix. The fog colour transitions across the texture by decrementing its coordinate using the timer, resulting in multi-hued particles. A smooth fade is added around the edge of each quad to help it blend better. By increasing the size of the particles, lowering the speed and extending the particle system range, I created the above effect.

Fireworks Particle System

Firework Particle System

The fireworks use the same principles as the fog but a different algorithm. All particles start on top of each other, ascend into the air, then spread apart and slowly drift back down. This is achieved by setting an initial velocity and checking whether each particle is below the explosion threshold: while below it, a particle rises with positive velocity; once past it, the velocity turns negative and the particles spread apart over time, slowly falling back down.

Pumpkins

Pumpkin 1

Cube Mapped Pumpkin

Features:

  1. Cube mapped.

Each fragment is coloured using a reflection vector to access the texture data from the cube map. The shape is a 3D model.

Pumpkin 2

Parallax Bump-mapped Pumpkin

Features:

  1. Parallax bump mapping (normal/height map).
  2. Non-uniform vertex transformation (light flickering).
  3. Flame billboard.
  4. Fragment lighting.
  5. 3D model used.

The parallax bump-mapping gives a nice bumpy surface using a simple brick texture. The effect is achieved in the fragment shader by retrieving the normal and height texture data and then correcting the texture coordinate.
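
The coordinate correction itself is only a couple of lines. Here is the fragment-shader logic in C++ form, with scale and bias as tuning values and the names illustrative:

    // Parallax offset: shift the texture coordinate along the tangent-space
    // view direction by an amount driven by the sampled height.
    void parallaxOffset(float u, float v,
                        float viewTSx, float viewTSy,   // tangent-space view dir (xy)
                        float sampledHeight,            // from the height map
                        float scale, float bias,
                        float& outU, float& outV)
    {
        float amount = sampledHeight * scale + bias;
        outU = u + viewTSx * amount;
        outV = v + viewTSy * amount;
    }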

I created a nice lighting effect to simulate flickering flame light. It works by displacing the normal slightly based on a sine function. This is done on all flame pumpkins.

Flame

Flame billboard

The pumpkin flame is created using 3 different textures: a shape, a colour and a noise layer. The vertex shader billboards the quad, and in the fragment shader the shape layers are animated and transformed.

Pumpkin 3

Stencil-masked Spherical Pumpkin

Features:

  1. Stencil masked cut-out holes.
  2. Smooth step transformation from a sphere. Top is removed.
  3. Height Bump Mapping.
  4. Non-uniform vertex transformation (breathing, veins swelling, light flickering).
  5. Flame billboard.
  6. Fragment lighting.

The face is made using holes cut out with a simple face texture as a stencil mask, discarding the masked fragments. The pumpkin shape is made from a basic sphere that has been stretched, with the top removed in the shader.

A breathing effect has been added where the veins on the texture swell as the pumpkin exhales; this is achieved by applying a sine function to the bump normal. The breathing itself is done using a 'smooth step' sine function on the lower vertices.

Pumpkin 4

Glowing Pumpkin

Features:

  1. Glowing eye and mouth holes via blended billboard.
  2. Glowing aura via billboard texture.
  3. Non-uniform vertex transformation (light flickering).
  4. Fragment lighting.
  5. 3D model used.

The glowing eyes and mouth are made using separate passes: a texture is billboarded and blended over the holes. A direction is calculated so that the glow only appears when the pumpkin is facing the camera.

Pumpkin 5

Transformed and displaced pumpkin from teapot model

Features:

  1. Smooth step transformation from a teapot. Handle and spout translated inside.
  2. Wings extruded via smooth step and animated.
  3. Displacement mapped spikes.
  4. Hovering animation.
  5. Height Bump mapped fur.
  6. Fragment lighting.

The shape is made by translating the spout and handle vertices inside the pot. The wings are extruded via smooth step to make them curved. The spikes are made by deforming the vertices along their normals based on a texture. The hovering is done by applying sine and cosine functions to the vertices' x and z components; the wings are animated in a similar way.

Gravestones

Simple 3D models featuring bump-mapping and fragment lighting.

Summary

The project was challenging and very fun to work on, allowing me to learn many different shader rendering techniques and effects that are a staple of modern graphics and games programming. Using RenderMonkey allowed the focus to be squarely on shader programming rather than the OpenGL framework, i.e. handling model loading, vertex buffers etc., which made sense considering the limited time allocated for the coursework. I was also very pleased to receive a mark of 90%!

Final Scene

OpenGL Cross-platform PC/PSP Game Coursework

Last semester, as part of the Advanced Graphics module of my CS degree at Hull University, we were given a group project to produce a cross-platform OpenGL mini-game for the PC and Sony PSP based on a specification. The game premise was to move around a 3D 'maze' consisting of four rooms and connecting corridors, avoiding a patrolling AI that would shoot you if you were within its line of sight. The objective was to collect 3 keys to activate a portal, escape and beat the game.

The groups were selected completely at random, with 4 members each. As per usual, group coursework assignments are particularly difficult due to the extra concerns of motivating members and assigning work, and by year 3 of university you get a good idea of the best way of operating within them to secure good grades. I went in with the mindset of doing as much work as possible after we assigned tasks: hopefully each member would carry out their allocated work, and if not, I'd just go ahead and do it, no fuss. Luckily one chap in my group was a friend, and he did an excellent job coding the AI, mini-map and sound while I worked on coding the geometry, camera, lighting and player functionality.


Mini-maze model

Static environment lighting

Cross-Platform Limitations:

Having worked with OpenGL and shaders last year for my 3D 'The Column' project, it was somewhat limiting to realise that the PSP didn't support them and that fragment-based lighting was a no-go. With one requirement of the game being a torchlight effect that illuminated the geometry, PSP compatibility meant that vertex-based lighting would need to be implemented, and that meant tessellation of primitives to prevent the lighting looking very blocky and…well, very 90s. Luckily the PSP did at least have support for VBOs (Vertex Buffer Objects), which meant each tessellated model could be loaded onto the graphics card only once to improve performance.

Unified Code

An interesting aspect of this project was the required consideration for a consolidated code-base that, where possible, shared functionality between the PC and PSP platforms, i.e. limiting how much platform-specific code was used. This was essential since the game would be a single C++ solution for both platforms.

I designed the code structure around principles described by Darren McKie (the course lecturer), and produced the following class diagram reflecting the final structure:

Unified Cross-platform Class Diagram

The majority of game code resides in 'Common Code' classes that are instantiated by each platform's 'Game' object. Certain code, such as API rendering calls, was kept platform-specific but made use of the common classes where necessary. A particularly nice way of ensuring the correct platform-specific object was instantiated was to use '#ifdef'/'#ifndef' preprocessor statements, handled by a 'ResourceManager' class.
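
A sketch of that pattern with hypothetical class names; the shared game code only ever sees the abstract Renderer:

    // The platform branch picks the concrete type at compile time.
    class Renderer {
    public:
        virtual ~Renderer() {}
        virtual void drawFrame() = 0;
    };

    class GLRenderer  : public Renderer { public: void drawFrame() override { /* GXBase/OpenGL calls */ } };
    class PSPRenderer : public Renderer { public: void drawFrame() override { /* PSP SDK calls */ } };

    struct ResourceManager
    {
        static Renderer* createRenderer()
        {
    #ifdef PLATFORM_PSP
            return new PSPRenderer();   // compiled in for the PSP build
    #else
            return new GLRenderer();    // PC build default
    #endif
        }
    };

The PC and PSP builds then differ only in which preprocessor symbol is defined in their project configurations.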

As mentioned earlier, per-vertex lighting had to be implemented for PSP compatibility. A primitive with a low number of vertices would thus result in very blocky lighting. To prevent this I created a tessellation function that subdivided each primitive's vertices into many more triangles. I played around with the tessellation depth to find how many iterations of subdivision could be achieved before inducing lag, and was very happy with the lighting result considering there is no fragment shader; a given in today's modern pipeline-based rendering.
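
The subdivision function was along these lines: recursive midpoint splitting, with the depth chosen by experiment as described (a generic sketch, not the project's exact code):

    #include <vector>

    struct Vec3 { float x, y, z; };
    static Vec3 mid(Vec3 a, Vec3 b)
    {
        return { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
    }

    // Split a triangle into four smaller ones until 'depth' hits zero. More
    // vertices give per-vertex (Gouraud) lighting more samples to interpolate,
    // hiding the blockiness.
    void subdivide(Vec3 a, Vec3 b, Vec3 c, int depth, std::vector<Vec3>& out)
    {
        if (depth == 0)
        {
            out.push_back(a); out.push_back(b); out.push_back(c);
            return;
        }
        Vec3 ab = mid(a, b), bc = mid(b, c), ca = mid(c, a);
        subdivide(a, ab, ca, depth - 1, out);   // corner triangles
        subdivide(ab, b, bc, depth - 1, out);
        subdivide(ca, bc, c, depth - 1, out);
        subdivide(ab, bc, ca, depth - 1, out);  // centre triangle
    }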

Active Portal

The PSP implementation proved trickier due to getting to grips with the PSP SDK and having access to very little documentation; however, the game was successfully implemented on a PSP device and ran with decent performance after compressing the textures down and removing geometry tessellation to accommodate the PSP's limited memory capacity.

The game was written in C++ and the following libraries and software were used:

  • GXBase OpenGL API
  • Sony PSP SDK
  • OpenAL
  • Visual Studio 2012
  • Paint.NET