Procedural RPG World Generation

Featured

Having now completed my MSc, I've put together below a brief summary of my dissertation project along with galleries and a video of the prototype. There's also a download of the full report, which details the implementation process along with background on the topic, for anyone interested in procedural content generation or studying something related.

Report:


Video:

Since the days of Rogue and Elite, games have utilised various procedural content generation techniques to create game worlds for players to explore, freeing developers from the hand-crafted approach typically seen in the majority of games. For me, it was the second Elder Scrolls game, Daggerfall, back in '96 that inspired me enough to prompt this topic choice for my MSc dissertation project. Although Daggerfall was most certainly a flawed game, the sheer size of its game world is still unsurpassed even today, at roughly 162,000 square kilometres (about three-quarters the size of Great Britain) and featuring over 15,000 towns, villages and dungeons. An amusing rumour is that it's so big you could fit every subsequent Elder Scrolls game world into a single pixel on Daggerfall's world map.

When you have a game world that big, procedural content generation (PCG) is the only feasible way to populate it. Daggerfall's world was generated 'offline' and shipped on the game media, making the world the same every time you played it. Its main story-line areas and characters were hand-crafted, but the rest of its towns, dungeons and wilderness areas were all generated.

Scale comparison of the Elder Scrolls games.

What I wanted to do was tackle a project aimed at generating an RPG world in real-time, so that each world would be unique, and ultimately create an explorable 3D RPG world generator. What I actually wanted to do was create a full RPG game to play within these generated worlds (i.e. my dream game), but clearly that would never have been feasible in the time-frame, so I settled for a compromise by removing any game mechanics or AI from the project, effectively stripping out the 'game' aspect. Even so, the project workload was going to be ridiculous, considering I wanted to use my own DirectX engine to generate the world, complete with dungeons, NPC towns and a day/night cycle.

Unlike most of my previous projects, there wasn't going to be much focus on graphics, which actually fit nicely with my retro vision for a more modern-looking, Daggerfall-esque game, complete with sprites… lots of sprites.

My report can be found at the top of this post if you're curious about some of the techniques I used in the prototype. I had little knowledge of how other games have approached this from a technical point of view, other than what I had uncovered during my research on the topic, so the developed prototype is very much my own approach.

Since the detail is all in the above report, I'll just briefly mention some of the techniques the prototype involved:

The world generation itself was created using a procedural noise technique to generate a height-map. Multiple octaves of value noise are combined (fractional Brownian motion) to create a resulting fractal noise suitable for generating realistic terrain formations. The noise implementation I used was specifically Voronoise, a method that combines a grid-based value noise and a 'jittered' grid version of Voronoi (cellular noise) into a single adjustable function. I introduced a seed value into the noise generation so that the same seed always reproduces the same world. The height-map is output by the pixel shader to a render target upon generation, and then used during the tessellation shader stages via patch control-point displacement when rendering the world.
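For readers unfamiliar with fBm, here is a rough CPU-side sketch of the octave summation described above, using a simple hash-based value noise. The prototype itself evaluates Voronoise in the pixel shader, so treat this purely as an illustration of how frequency (scaled by lacunarity) and amplitude (scaled by gain) evolve per octave; the helper names are my own.

#include <cmath>
#include <cstdint>

// Lattice "random" value in [0,1], mixed with a seed for reproducible worlds.
static float hash(int x, int y, uint32_t seed)
{
    uint32_t h = seed;
    h ^= static_cast<uint32_t>(x) * 374761393u;
    h ^= static_cast<uint32_t>(y) * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return static_cast<float>(h & 0xFFFFFF) / static_cast<float>(0xFFFFFF);
}

// Bilinearly interpolated value noise with smoothstep weighting.
static float valueNoise(float x, float y, uint32_t seed)
{
    const int xi = static_cast<int>(std::floor(x));
    const int yi = static_cast<int>(std::floor(y));
    float tx = x - xi, ty = y - yi;
    tx = tx * tx * (3.0f - 2.0f * tx);
    ty = ty * ty * (3.0f - 2.0f * ty);
    const float a = hash(xi, yi, seed),     b = hash(xi + 1, yi, seed);
    const float c = hash(xi, yi + 1, seed), d = hash(xi + 1, yi + 1, seed);
    const float top = a + (b - a) * tx, bottom = c + (d - c) * tx;
    return top + (bottom - top) * ty;
}

// fBm: sum octaves whose frequency grows by 'lacunarity' and amplitude shrinks by 'gain'.
float fbm(float x, float y, uint32_t seed, int octaves = 6,
          float frequency = 0.005f, float lacunarity = 2.0f, float gain = 0.5f)
{
    float amplitude = 1.0f, sum = 0.0f, norm = 0.0f;
    for (int i = 0; i < octaves; ++i)
    {
        sum  += amplitude * valueNoise(x * frequency, y * frequency, seed + i);
        norm += amplitude;
        frequency *= lacunarity;
        amplitude *= gain;
    }
    return sum / norm;   // normalised height in [0,1]
}

Sampling fbm across a grid of (x, z) positions gives the kind of octave summation illustrated below.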

Summation of noise octaves.

A variety of generated worlds.

The prototype's generated world size is not huge like Daggerfall's, but it's a fair size at around 16.8 square km (16,777,216 square metres). That's a little under half the size of Skyrim's world, for example, but for a little prototype I'm happy with this, and it still allows plenty of explorable terrain using the appropriate movement speed and not the super-fast one seen in my video!

Dungeons use a completely different generation method that I implemented off the top of my head after looking into various techniques. It's an agent-based technique that uses diggers to burrow out corridors and rooms, with various rules thrown in to keep them in check and ensure they generate sensible-looking dungeons. The diggers are also responsible for spawning the dungeons' contents, which include monsters, treasure chests and the up and down stairs. Here are some ASCII representations of the dungeon layouts generated by the method:

Generated dungeon layouts (ASCII view).
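To give a flavour of the agent-based approach, here is a simplified sketch with made-up rules, not the prototype's actual digger logic: a single digger carves corridors through solid rock, occasionally turning or opening a small room, and is steered back whenever it would leave the map.

#include <cstdlib>
#include <string>
#include <vector>

struct Digger { int x, y, dx, dy; };

std::vector<std::string> generateDungeon(int w, int h, int steps, unsigned seed)
{
    std::srand(seed);
    std::vector<std::string> map(h, std::string(w, '#'));   // '#' = rock, '.' = floor
    Digger d{ w / 2, h / 2, 1, 0 };

    for (int i = 0; i < steps; ++i)
    {
        map[d.y][d.x] = '.';                                 // carve corridor
        if (std::rand() % 100 < 20)                          // rule: sometimes turn 90 degrees
        {
            const int t = d.dx; d.dx = -d.dy; d.dy = t;
        }
        if (std::rand() % 100 < 5)                           // rule: occasionally open a small room
            for (int ry = -2; ry <= 2; ++ry)
                for (int rx = -2; rx <= 2; ++rx)
                {
                    const int nx = d.x + rx, ny = d.y + ry;
                    if (nx > 0 && nx < w - 1 && ny > 0 && ny < h - 1)
                        map[ny][nx] = '.';
                }
        // rule: keep the digger inside the map border
        const int nx = d.x + d.dx, ny = d.y + d.dy;
        if (nx <= 0 || nx >= w - 1 || ny <= 0 || ny >= h - 1) { d.dx = -d.dx; d.dy = -d.dy; }
        d.x += d.dx; d.y += d.dy;
    }
    return map;   // a real implementation would also place stairs, monsters and chests
}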

The world is divided into 32×32 terrain chunks, each responsible for hosting its respective game objects such as flora, fauna, towns and dungeon entrances. For performance purposes, frustum culling was a necessity due to the large scale of the terrain, so only chunks visible in the frustum are processed. Each chunk has a chance of creating towns and/or dungeons, and checks such as suitably flat terrain are important factors in determining this. Each building performs a suitability check on the terrain mesh at a chosen spot to see if it's within the gradient threshold and, if so, places a random structure. If enough buildings are present in a town, NPCs will spawn within proximity of the town.
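As a rough illustration of that building suitability check (assumed interface and helper names; the real prototype samples its own terrain mesh), the idea is simply to compare heights across a candidate footprint against a gradient threshold:

#include <algorithm>
#include <functional>

// Returns true if the terrain under the footprint is flat enough to build on.
bool isSuitableBuildingSpot(const std::function<float(float, float)>& sampleHeight,
                            float x, float z, float footprint, float maxGradient)
{
    const float h00 = sampleHeight(x, z);
    const float h10 = sampleHeight(x + footprint, z);
    const float h01 = sampleHeight(x, z + footprint);
    const float h11 = sampleHeight(x + footprint, z + footprint);

    const float lowest  = std::min({ h00, h10, h01, h11 });
    const float highest = std::max({ h00, h10, h01, h11 });

    // Height difference per unit distance across the footprint; flat ground gives ~0.
    return (highest - lowest) / footprint <= maxGradient;
}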

I added a few small graphical enhancements to the game, such as faked atmospheric scattering, fog, layered sky domes, water and emission-mapped buildings at night. They are each detailed in the report, but ultimately time was limited and any graphical enhancements were really a secondary concern. Despite this, I really wanted to add them, and I think they do enough to achieve the overall atmosphere I had envisaged, as demonstrated in the below comparison with a Daggerfall screenshot:

Aesthetic comparison between Daggerfall (left) and prototype (right).

The prototype initially starts in the table view, where a map of the generated world is shown that can be rotated and zoomed in/out for examination. At a key press, the camera moves into first-person mode and plonks the player into the world. Worlds can be generated from first-person mode, but it's much more intuitive to do it in the table view. By tweaking the various settings in the UI (noise values, town frequency, tree density and so on), worlds can be tailored to whatever style you want, although currently you have to understand each of the noise settings and their influence on the generation process to create something you have in mind. Failing that, there's trial and error. Ultimately I'll add predefined terrain settings that can be selected to simplify this process, since it's really not intuitive to know how 'lacunarity', 'gain' or 'frequency', for instance, will affect the world; academically, though, it's quite useful to have them directly tweakable. A seed value can be entered directly into the UI, with every unique value resulting in a unique world.

I hope at some point to continue with the project, though there will be a hiatus for the foreseeable future while I work on other things. There is near-infinite scope for the project, with so many things to add, so it's likely something I can keep coming back to.

I also produced a nifty tool for visualising noise which could have various uses for demoing. I’ll probably get this uploaded with a download of the prototype itself at some point.

As detailed in the report, the prototype uses various art assets (models/textures) sourced online via Creative Commons license. The project is for non-commercial use and many art assets are effectively placeholders used to finish the prototype during my studies.

 

 

The MSc Departmental Prize


Several weeks ago I was very pleasantly surprised by a letter from the British Computer Society stating I had won the ‘Departmental Masters’ Prize in Computer Science’ for my MSc degree!

Along with the nice certificate, I also received a cheque for £150 and a year's membership to the BCS, making it most certainly the best bit of mail I've received through the letterbox for a good while.

BCS certificate

From speaking with the department, I believe I was awarded this for attaining the highest average score, so it's nice to know that my effort, along with the distinction, was doubly worthwhile. I'm already looking back fondly at my time at Hull uni, and choosing to go on and do the MSc was definitely worth it.

On other fronts, I recently posted a video of my MSc dissertation project on my YouTube profile for anyone interested in the subject of procedural content generation. Making the video was a bit of a pain, and anyone who has recorded footage showcasing academic work will know where I'm coming from here, in the sense of making it both informative AND interesting. In the end I edited some raw footage of me exploring a generated world and showcased the various features as best I could.

When I get the chance, I'll be adding it to this site to complete my uni portfolio and uploading my dissertation report (basically because a) it took ages to write, and b) someone might find it useful and maybe even interesting!).

 

 

Sandy Snow Globe – Deferred Shading

Featured

 

For the Real-Time Graphics module, as part of my MSc in Computer Science, we were tasked with developing a real-time graphics application representing a snow globe, but with a few added twists. Instead of a wintery landscape, the theme would be desert, with specific requirements including a day/night cycle, seasonal effects, shadow mapping and particle systems. Additional marks would be awarded for various advanced features, the highest being deferred shading. Having always wanted to try my hand at implementing it, I went about researching the topic.

I implemented the project using my own engine, written in C++ and utilising DirectX 11, which I have been developing during my MSc. The snow globe features deferred shading, particle systems, blending, PCF-filtered shadow mapping, normal bump mapping, height mapping and environment mapping. It has a simple day/night cycle via two orbiting directional lights (Sun and Moon) and alternating summer/winter seasons: summer nights = fireflies, winter nights = snow. Each firefly has a point light, and using deferred shading, significant numbers of lights can be processed while maintaining good performance.
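As a small illustration of the day/night cycle idea (an assumed parametrisation, not the coursework's actual code), the Sun and Moon can be driven as two directional lights sitting on opposite sides of a single orbit:

#include <cmath>

struct Vec3 { float x, y, z; };

// timeOfDay in [0,1): 0 = midnight, 0.5 = midday.
void updateCelestialLights(float timeOfDay, Vec3& sunDir, Vec3& moonDir)
{
    // The 0.25 offset puts the Sun directly overhead at timeOfDay = 0.5.
    const float angle = (timeOfDay - 0.25f) * 2.0f * 3.14159265f;
    sunDir  = { std::cos(angle), -std::sin(angle), 0.2f };   // 0.2f: slight tilt off the orbital plane
    moonDir = { -sunDir.x, -sunDir.y, sunDir.z };            // Moon sits opposite the Sun (normalise before use)
}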

‘Deferred shading’, particularly for those who aren't 3D programming experts, can be a rather tricky concept to grasp fully, so please find below my own attempt at describing what deferred shading is and why it's a really cool technique.

Deferred Shading: Overview

‘Deferred shading’ is a multi-pass rendering technique with the distinct advantage of deferring the scene lighting to a second pass, meaning, put simply, that the lighting calculation becomes a 2D problem rather than a 3D one. With standard forward rendering, lighting is calculated in the pixel shader for every interpolated fragment after processing in the vertex shader. This means every geometric object in your scene has to perform the lighting calculations, which in ‘Big O’ notation looks like O(lights * meshes). The wonderful thing about deferred shading is that, by using just one extra pass, we can reduce that to O(lights + meshes); or, to look at it another way in terms of fragments, we reduce it from O(lights * geometryFragments) down to O(lights * screenFragments).
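To make the difference concrete, here is a minimal sketch (with hypothetical stand-in types and functions, not my engine's API) contrasting the two loop structures:

#include <vector>

struct Mesh {}; struct Light {}; struct GBuffer {};
void shadeForward(const Mesh&, const Light&) {}        // per-object lighting
void writeGBuffer(GBuffer&, const Mesh&) {}            // geometry pass
void accumulateLighting(const GBuffer&, const Light&) {} // lighting pass

// Forward rendering: every mesh is lit by every light -> O(lights * meshes).
void renderForward(const std::vector<Mesh>& meshes, const std::vector<Light>& lights)
{
    for (const Mesh& m : meshes)
        for (const Light& l : lights)
            shadeForward(m, l);
}

// Deferred shading: one geometry pass, then one lighting pass over the G-buffer
// -> O(lights + meshes); per-pixel cost is bounded by screen fragments.
void renderDeferred(const std::vector<Mesh>& meshes, const std::vector<Light>& lights)
{
    GBuffer gbuffer;
    for (const Mesh& m : meshes)
        writeGBuffer(gbuffer, m);           // geometry pass
    for (const Light& l : lights)
        accumulateLighting(gbuffer, l);     // lighting pass (full-screen or light volume)
}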

Deferred Shading – Sandy Snow Globe

 

This has massive implications for performance. With forward rendering, more than a half-dozen or so light sources can be enough to seriously impact performance, though modern games generally get away with this by limiting how many lights are visible at a time. Deferred shading, however, as demonstrated in the above video, can handle many times that number of lights simultaneously with little performance impact. For the coursework I demonstrated a scene with 100 point lights which, although it pushed the GPU a little, still ran comfortably at over 30 FPS.

There are multiple deferred rendering techniques, with ‘deferred shading’ being a two-pass solution, unlike ‘deferred lighting’, which introduces a third pass. Essentially, for systems with lower GPU memory such as old-gen console hardware, ‘deferred lighting’ is preferable since the extra pass allows the ‘G-buffer’ to be smaller. Deferred shading is a simpler and more elegant solution, but it does require a larger ‘G-buffer’ and so is better suited to ‘beefier’ GPU hardware.


How Does it Work?

Deferred shading works, as described, using two separate rendering passes. The first pass is called the ‘geometry pass’ and works much like a normal pixel shader in forward rendering, except that instead of outputting to the back buffer, we output to a selection of render targets, collectively referred to as the ‘G-buffer’. Each render target stores specific scene information so that, once fed into the second ‘lighting pass’, the correct lighting calculations can be performed. Exactly what information you store in the ‘G-buffer’ is fairly flexible, although at a minimum you will require three buffers: colour data, normal data and, preferably, depth information. I say preferably because you could instead store the 3D world position, but that is superfluous, since from the depth alone we can reconstruct the 3D world position of each screen pixel later at a much cheaper memory cost (1 vs 3 floats per pixel).
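As an example of what creating one such render target can look like in DirectX 11 (a sketch with an assumed layout, not my exact G-buffer code), each target is given both render-target and shader-resource bindings so it can be written during the geometry pass and read back during the lighting pass:

#include <d3d11.h>

// Creates a single G-buffer target; call once each for colour, normals and depth-as-colour,
// choosing a format per target (e.g. R8G8B8A8_UNORM for albedo, R16G16B16A16_FLOAT for
// normals, R32_FLOAT for linear depth).
HRESULT CreateGBufferTarget(ID3D11Device* device, UINT width, UINT height, DXGI_FORMAT format,
                            ID3D11Texture2D** tex, ID3D11RenderTargetView** rtv,
                            ID3D11ShaderResourceView** srv)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = width;                 // matches the back-buffer resolution
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = format;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

    HRESULT hr = device->CreateTexture2D(&desc, nullptr, tex);
    if (FAILED(hr)) return hr;
    hr = device->CreateRenderTargetView(*tex, nullptr, rtv);
    if (FAILED(hr)) return hr;
    return device->CreateShaderResourceView(*tex, nullptr, srv);
}

// Geometry pass: bind all the RTVs at once via OMSetRenderTargets (multiple render targets);
// lighting pass: unbind them as targets and bind the same textures as shader resources.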

As a bonus, when it comes to the lighting pass, you can further enhance performance by computing lighting only for the pixels actually affected by a particular light, by representing each light as a basic primitive based on its type: a full-screen quad for a directional light, a sphere for a point light and a cone for a spotlight.


What’s the Catch?

Blending:

This brings us to the added complexity of deferred rendering. Because we effectively flatten the scene into 2D inside our buffers, we lose the scene's depth ordering, meaning that when it comes to blending operations such as those used for transparency, it's hard to know in which order surfaces should be composited. There are, however, a few solutions to this, including manually depth-sorting your geometry and rendering it back-to-front in ‘painter's algorithm’ fashion, or, even simpler, rendering your transparent objects in a separate forward pass and blending the result, which is how I achieved the transparent snow globe.
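A sketch of the resulting frame ordering (hypothetical helper names, not the coursework's actual functions) looks something like this:

#include <algorithm>
#include <vector>

struct Renderable { float distanceToCamera; bool transparent; };

void geometryPass(const Renderable&) {}        // writes the G-buffer
void lightingPass() {}                         // shades opaque pixels from the G-buffer once
void drawForwardBlended(const Renderable&) {}  // forward pass with alpha blending enabled

void renderFrame(std::vector<Renderable> objects)
{
    for (const Renderable& r : objects)
        if (!r.transparent) geometryPass(r);   // 1. opaque geometry into the G-buffer
    lightingPass();                            // 2. deferred lighting pass

    // 3. Transparent objects: sort back to front (painter's style) and blend in a
    //    forward pass, reusing the scene depth so they still test against opaques.
    std::sort(objects.begin(), objects.end(),
              [](const Renderable& a, const Renderable& b)
              { return a.distanceToCamera > b.distanceToCamera; });
    for (const Renderable& r : objects)
        if (r.transparent) drawForwardBlended(r);   // e.g. the glass globe shell
}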

Materials:

Because the scene is encoded inside the ‘G-buffer’, any information about it that isn't stored there is simply unknown to the lighting pass. This presents a problem for per-object material properties, because normally these would be passed into the shaders on a ‘per object’ basis via constant buffers (in DirectX), but since our lighting pass runs ‘per light’ and not ‘per object’, we have no way of assigning the required material to each object. One simple solution is to store a material ID value in one of the existing buffers, such as the colour buffer, and then define a material array inside the lighting shader, using the ID as an index.
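For illustration, a CPU-side material table mirroring the shader-side array might look like the following (an assumed layout, not my exact buffers):

#include <cstdint>

// Mirrors a material array declared in the lighting shader; each object only
// carries an index into it.
struct SurfaceMaterial
{
    float specularIntensity;
    float specularPower;
    float padding[2];          // keep 16-byte stride for constant-buffer upload
};

static const SurfaceMaterial kMaterials[] = {
    { 0.1f,  8.0f, { 0.0f, 0.0f } },   // 0: matte terrain
    { 1.0f, 64.0f, { 0.0f, 0.0f } },   // 1: shiny glass dome
    { 0.5f, 16.0f, { 0.0f, 0.0f } },   // 2: buildings
};

// The geometry pass writes each object's ID into a spare G-buffer channel
// (e.g. the colour target's alpha); the lighting pass reads it back and indexes
// the same array uploaded as a constant buffer.
inline float encodeMaterialId(uint8_t id) { return id / 255.0f; }   // fits an 8-bit alpha channel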

Overall, I'd implement deferred shading for any future project where time is not a concern; it does slightly complicate things, but the benefits more than make up for this. If your game or 3D program doesn't need more than a few lights then it's not strictly necessary, but many modern games already use deferred rendering techniques to enhance scene lighting. I'd also say that if you can successfully implement deferred shading and understand the technique, then you have got to grips with one of the more advanced multi-pass rendering techniques, and that brings with it an enhanced understanding of the graphics pipeline.

 

Meshless Real-time Ray Tracing Demo Video

Featured

Meshless Real-time Ray Tracer

I was recently asked to put together a video showcasing my ray tracing project for the University of Hull to show some of the new Computer Science students starting this September. As detailed in my last post, ray tracing was the subject of my third-year dissertation project, and I have since been extending the project into real-time using DirectX 11, hopefully continuing it as part of my MSc by creating a rendering program that can be used to design and produce complex implicit ray-marched geometry through a simple UI.

The video unfortunately had to be recorded at 640×480 resolution to maintain good FPS due to my aging laptop GPU (around 4 years old now!). As a result, I recommend not viewing it in full-screen to avoid scaling ‘fuzziness’.

 

 

Scene Loading:

Recently I have been working on a scene-loading system in preparation for implementing a UI with the ability to save and load created scenes. I developed a scene scripting format that allows simple definition of the various distance functions that make up a scene, along with material types and lighting properties. The scene loader parses a scene file and then procedurally generates the HLSL distance-field code that is executed in the pixel shader to render the scene. The format looks similar to POV-Ray's scene files.

Below is an example of one of my scene files, showing a simple scene with a single sphere, a plane and a single light:

#Scene Test
 
light
{
     position <-1.5, 3, -4.5>
}
 
sphere
{
     radius 1
     position <-2,1,0>
}
material
{
     diffuse <1,0,0,0.25>
     specular <1,1,1,25> 
}
 
plane
{
     normal <0,1,0>
}
material
{
     diffuse <0.5,1,0.5,0.5>
     specular <1,1,1,99> 
}

More complex operations such as blending can be represented in the scene file as follows:

blend
{
    threshold 1
    sphere
    {
        radius 1
        position <-2,1,0> 
    }    
    torus
    {
        radius <1, 0.44>
        position <2,1,0> 
    }
}
 

Because the parsing is implemented recursively, I can also nest blending operations, as in the following series of blended spheres, which results in a single complex surface (a simplified parsing sketch follows the listing):
 

blend
{
     threshold 1
     blend
     {
          threshold 1
          blend
          {
               threshold 1
               sphere
               {
                    radius 1
                    position <-2,1,0>
               }
               sphere
               {
                    radius 1
                    position <2,1,0>
               }
          }
          sphere
          {
               radius 1
               position <0,2,0>
          }
     }
     sphere
     {
          radius 1
          position <0,1,-2>
     }
}
material
{
     diffuse <1,0,1,0.25>
     specular <1,1,1,25> 
}

For a more complex scene featuring blending, twisting and domain repetition, an example scene file looks like this:

#Scene Test
 
light
{
     position <-1.5, 3, -4.5>
}
 
repeatBegin
{
     frequency <8.1,0,8.1>
}
 
twistY
{
     magnitude 0.04
     box
     {
          dimensions <1,4,1>
          position <0,3,0>
     }
}
material
{
     diffuse <1,0.5,0,0.1>
     specular <1,1,1,5> 
}
 
sphere
{
     radius 2
     position <0,9,0>
}
material
{
     diffuse <0,0.5,1,0.5>
     specular <1,1,1,30> 
}
 
repeatEnd
 
plane
{
     normal <0,1,0>
}
material
{
     diffuse <0.2,0.2,0.2,0.5>
     specular <1,1,1,99> 
}

Currently my scene files support spheres, cubes, tori and also a ‘Blob’ shape, which takes any number of component spheres as parameters and blends them together. They also support custom blending of the above shapes, plus domain twisting and repetition operations. Materials can be specified with both diffuse and specular components, with the 4th diffuse component representing reflectivity and the 4th specular component representing shininess.
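To illustrate the kind of maths the generated HLSL ultimately evaluates per pixel, here are CPU-side versions of two distance functions together with a polynomial smooth-minimum blend, which is one common way a ‘threshold’-style blend can be implemented (not necessarily my exact formulation):

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float length3(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Signed distance to a sphere: negative inside, zero on the surface, positive outside.
float sdSphere(Vec3 p, Vec3 centre, float radius)
{
    return length3({ p.x - centre.x, p.y - centre.y, p.z - centre.z }) - radius;
}

// Signed distance to a torus lying in the XZ plane.
float sdTorus(Vec3 p, Vec3 centre, float majorR, float minorR)
{
    const Vec3 q = { p.x - centre.x, p.y - centre.y, p.z - centre.z };
    const float ring = std::sqrt(q.x * q.x + q.z * q.z) - majorR;   // distance to the ring
    return std::sqrt(ring * ring + q.y * q.y) - minorR;
}

// Polynomial smooth minimum: merges two distance fields into one surface;
// 'threshold' controls how wide the blend region is.
float smoothBlend(float d1, float d2, float threshold)
{
    const float h = std::clamp(0.5f + 0.5f * (d2 - d1) / threshold, 0.0f, 1.0f);
    return d2 + (d1 - d2) * h - threshold * h * (1.0f - h);
}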

 

As the project develops, I'll need to implement a way of creating custom distance functions that aren't just template primitive shapes, but are defined more generally to allow users to create surfaces using anchor points. This will likely be a main focus for my master's dissertation if I take this topic.