Meshless Real-time Ray Tracing Demo Video



Meshless Real-time Ray Tracer

I was recently asked to put together a video showcasing my ray tracing project for the University of Hull, to show some of the new Computer Science students starting this September. As detailed in my last post, ray tracing was the subject of my third-year dissertation project, and I have since been extending it into real-time using DirectX 11. I hope to continue the work as part of my MSc by creating a rendering program that can be used to design and produce complex implicit ray-marched geometry through a simple UI.

The video unfortunately had to be recorded at 640×480 resolution to maintain good FPS due to my aging laptop GPU (around 4 years old now!). As a result, I recommend not viewing it in full-screen to avoid scaling ‘fuzziness’.

 

 

Scene Loading:

Recently I have been working on a scene loading system in preparation for implementing a UI with the ability to save and load created scenes. I developed a scene scripting format that allows simple definition of the various distance functions that make up a scene, along with material types and lighting properties. The scene loader parses a scene file and then procedurally generates the HLSL distance field code that is executed in the pixel shader to render the scene. The format looks similar to POVRay's scene files.

Below is an example of one of my scene files, showing a simple scene with a single sphere and plane, plus a single light:

#Scene Test
 
light
{
     position <-1.5, 3, -4.5>
}
 
sphere
{
     radius 1
     position <-2,1,0>
}
material
{
     diffuse <1,0,0,0.25>
     specular <1,1,1,25> 
}
 
plane
{
     normal <0,1,0>
}
material
{
     diffuse <0.5,1,0.5,0.5>
     specular <1,1,1,99> 
}

More complex operations such as blending can be represented in the scene file as follows:

blend
{
    threshold 1
    sphere
    {
        radius 1
        position <-2,1,0> 
    }    
    torus
    {
        radius <1, 0.44>
        position <2,1,0> 
    }
}
 

Due to the recursive nature in which I have implemented the parsing, I can also nest blending operations, as in the following series of blended spheres, resulting in a single complex surface:
 

blend
{
     threshold 1
     blend
     {
          threshold 1
          blend
          {
               threshold 1
               sphere
               {
                    radius 1
                    position <-2,1,0>
               }
               sphere
               {
                    radius 1
                    position <2,1,0>
               }
          }
          sphere
          {
               radius 1
               position <0,2,0>
          }
     }
     sphere
     {
          radius 1
          position <0,1,-2>
     }
}
material
{
     diffuse <1,0,1,0.25>
     specular <1,1,1,25> 
}
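Under the hood, each blend block maps to a smooth-minimum combination of its children's distance functions, which is what merges them into a single surface. Below is a minimal C++ sketch of the idea; the helper names and the way the scene file's threshold feeds the smoothing factor are illustrative assumptions, not my generator's actual HLSL output.

#include <cmath>
#include <cstdio>
#include <algorithm>

struct Vec3 { float x, y, z; };

static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float length(const Vec3& v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

// Signed distance from point p to a sphere of the given radius and position.
static float sdSphere(const Vec3& p, const Vec3& position, float radius)
{
    return length(sub(p, position)) - radius;
}

// Polynomial smooth minimum: blends two distance fields into one smooth surface.
// 'k' controls the width of the blend region (taken here from the blend threshold).
static float smoothMin(float a, float b, float k)
{
    float h = std::max(k - std::fabs(a - b), 0.0f) / k;
    return std::min(a, b) - h * h * k * 0.25f;
}

// The nested blend blocks above compose naturally as nested smoothMin calls.
static float sceneDistance(const Vec3& p)
{
    float inner = smoothMin(sdSphere(p, {-2, 1, 0}, 1.0f), sdSphere(p, {2, 1, 0}, 1.0f), 1.0f);
    float mid   = smoothMin(inner, sdSphere(p, {0, 2, 0}, 1.0f), 1.0f);
    return smoothMin(mid, sdSphere(p, {0, 1, -2}, 1.0f), 1.0f);
}

int main()
{
    std::printf("distance at origin: %.3f\n", sceneDistance({0, 0, 0}));
    return 0;
}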

For a more complex scene featuring blending, twisting and domain repetition, an example scene file looks like this:

#Scene Test
 
light
{
     position <-1.5, 3, -4.5>
}
 
repeatBegin
{
     frequency <8.1,0,8.1>
}
 
twistY
{
     magnitude 0.04
     box
     {
          dimensions <1,4,1>
          position <0,3,0>
     }
}
material
{
     diffuse <1,0.5,0,0.1>
     specular <1,1,1,5> 
}
 
sphere
{
     radius 2
     position <0,9,0>
}
material
{
     diffuse <0,0.5,1,0.5>
     specular <1,1,1,30> 
}
 
repeatEnd
 
plane
{
     normal <0,1,0>
}
material
{
     diffuse <0.2,0.2,0.2,0.5>
     specular <1,1,1,99> 
}

Currently my scene files support spheres, cubes, tori and also a 'Blob' shape, which takes any number of component spheres as parameters and blends them together. They also support custom blending of the above shapes, along with domain twisting and repetition operations. Materials can be specified with both diffuse and specular components, with the fourth diffuse component representing reflectivity and the fourth specular component representing shininess.
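The repeatBegin/repeatEnd and twistY blocks correspond to standard domain operations applied to the sample point before any distance function is evaluated. The C++ sketch below shows the general idea; the exact operator definitions emitted by my HLSL generator may differ, so treat the maths here as illustrative.

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Infinite repetition: wrap the point into a cell of the given period on each axis
// (a zero period leaves that axis unrepeated).
static Vec3 repeatDomain(const Vec3& p, const Vec3& period)
{
    auto wrap = [](float v, float c) {
        if (c <= 0.0f) return v;
        float m = std::fmod(v + 0.5f * c, c);
        if (m < 0.0f) m += c;              // fmod keeps the sign of v, so re-wrap negatives
        return m - 0.5f * c;
    };
    return { wrap(p.x, period.x), wrap(p.y, period.y), wrap(p.z, period.z) };
}

// Twist about the Y axis: rotate the XZ components by an angle proportional to height.
static Vec3 twistY(const Vec3& p, float magnitude)
{
    float angle = magnitude * p.y;
    float c = std::cos(angle), s = std::sin(angle);
    return { c * p.x - s * p.z, p.y, s * p.x + c * p.z };
}

int main()
{
    // Repeat the domain, then twist it; a primitive's distance function would be evaluated at q.
    Vec3 q = twistY(repeatDomain({10.0f, 3.0f, 2.0f}, {8.1f, 0.0f, 8.1f}), 0.04f);
    std::printf("warped sample point: (%.2f, %.2f, %.2f)\n", q.x, q.y, q.z);
    return 0;
}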

 

As the project develops, I'll need to implement a way of creating custom distance functions that aren't just template primitive shapes, but are defined more generally to allow users to create surfaces using anchor points. This will likely be a main focus of my master's dissertation if I take this topic.

 

CUDA Ray Tracer – Dissertation Project


After on-and-off work for a year, and many thousands of words later, my final-year BSc dissertation project and report are complete. Can a ray tracer ever be truly 'complete'? This post is a brief description and summary of the project.

A download of my full dissertation report can be found below, as well as a few renderings from my prototypes:

Report:


Prototype Renderings:

The project was an important one for me personally. It was a period where I gained a heightened interest in graphics programming, an understanding of the principles of computer graphics and the mathematics involved, and the creative satisfaction that comes from it. When creating realistic virtual graphics from essentially nothing but code, maths and a display, it's very easy to gloss over the 'magic' of it all, especially when you understand the complexity of how we actually perceive the Universe and the shortcuts that must be taken for computers to convincingly mimic our brain's visual perception.

 

CUDA Ray Tracer – Dissertation Project

 

A Bit of Biology and Philosophy:

The modern computer, when you think of it, is really just a primitive extension of our own bodies: simple enough that we can manipulate, manage and understand it, with much greater control and predictability than our biology. Computers allow us to achieve things we could not otherwise do, and many of the components inside one carry out roles very similar to organs found within us. Of course we can think of the CPU as a brain, but what else? Going into more detail, the GPU could be seen as a specialised part of the brain engineered to handle visual computation, just as our brain has its own visual cortex. A virtual camera in a rendering program replicates the capabilities of part of our eye, defining an aperture or lens through which to calculate rays of light, and likewise an 'image plane' positioned in front of the camera carries out essentially the same functionality as our retina, but using pixels to make up the visual image of what we see.

When you understand the detailed steps required to render something in 3D, you realise that we are essentially trying to recreate our own little simplified universe. It's a pretty profound concept that, taken much further, manifests itself in popular science fiction such as The Matrix. After all, is mathematics not simply the 'code' of our Universe? It's perhaps not as silly as it may sound when you consider game developers creating virtual worlds, with graphics programming as an essential component, and just how real and immersive those worlds are starting to become.

So What Is Ray Tracing?

Ray Tracing

Of all the popular rendering techniques, it's ray tracing that perhaps stands out the most in respect to my comments above. We all know roughly how and why we see: light rays shine from a light source such as our Sun, travel millions of miles to reach us, and out of an effectively infinite number of rays, the tiniest percentage finds its way directly into our eye. This could be from looking directly at the Sun (not recommended!), or from scattered or reflected light that has hit a surface and ended up on a collision course with our eye.
This is fundamentally close to how ray tracing works, but with important differences. If a computer had to calculate the trajectory of every possible ray being fired out from a light source, the task would be impossible on modern hardware: there are just too many potential rays, of which only an infinitesimally small fraction would ever find their way into the camera (eye) of the scene, and it's only these rays we are interested in anyway. Instead, in what is referred to as 'backwards ray tracing', light is fired from the camera (eye) into the scene and then traced backwards as it is reflected, refracted or simply absorbed by whatever material it hits. We then only have to fire a ray from the camera for each pixel in the image, which is still potentially a considerable number of rays (1920×1080 = 2,073,600 primary rays), and that's without counting all the secondary rays as light scatters throughout the scene, but at least this reduced number is quite feasible.
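To make the 'one primary ray per pixel' idea concrete, here is a minimal C++ sketch of generating camera rays; the pinhole camera setup and the traceRay stub are assumptions for illustration rather than code from my prototypes.

#include <vector>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, direction; };

static Vec3 normalize(const Vec3& v)
{
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Placeholder: a real tracer intersects the scene here and shades the hit point,
// spawning secondary rays for reflection and refraction as needed.
static Vec3 traceRay(const Ray&) { return { 0.0f, 0.0f, 0.0f }; }

int main()
{
    const int   width = 1920, height = 1080;
    const float fov    = 60.0f * 3.14159265f / 180.0f;
    const float aspect = float(width) / float(height);
    std::vector<Vec3> image(width * height);

    // One primary ray per pixel, fired from the camera (eye) through the image plane.
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            float px = (2.0f * (x + 0.5f) / width - 1.0f) * std::tan(fov * 0.5f) * aspect;
            float py = (1.0f - 2.0f * (y + 0.5f) / height) * std::tan(fov * 0.5f);
            Ray primary{ { 0.0f, 0.0f, 0.0f }, normalize({ px, py, 1.0f }) };
            image[y * width + x] = traceRay(primary);
        }
    return 0;
}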

Still, it is ray tracing's close resemblance to how light interacts with us in the real world that makes it such an elegant and simple algorithm for rendering images. It allows for what is known as 'physically based rendering', where light is simulated to create realistic-looking scenes with mathematically accurate shadows, caustics and more advanced features such as 'global illumination', something that faster and more common rendering techniques like rasterization (pipeline-based rendering) cannot do.

Illumination and Shading:

Phong shading

The main job of firing rays into a scene in the first place is to determine what colour each pixel in our image should be. This is found by looking at what a ray hits when fired into the 3D scene: put simply, if it hits a red sphere, the pixel is set to red. We can define the material information for every object in the scene in similar fashion to how we know in the real world that a matt yellow box reflects light. Technically, the box is yellow because it reflects yellow light, and matt (not shiny) because it has a microscopically uneven (diffuse) surface that scatters the light more evenly away from it. Compare this to light hitting a smooth (specular) surface, where most of the light bounces off in the same direction and appears shiny to our eyes. Clearly, for computer graphics, we are not likely to model a surface material in such microscopic detail as to define whether it is rough or smooth, but we can cheat using a popular and effective local illumination model such as Phong: essentially using the surface 'normal', the directions of our light source and camera, and some vector maths to calculate the colour of the surface based on its material and angle, creating a smoothly shaded object rather than a 'flat' colour.
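For reference, here is a minimal C++ sketch of that Phong-style calculation; the vector helpers and the ambient/diffuse/specular weighting are illustrative assumptions rather than the exact model used in my prototypes.

#include <cmath>
#include <cstdio>
#include <algorithm>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  scale(const Vec3& v, float s)     { return { v.x*s, v.y*s, v.z*s }; }
static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Phong local illumination for a single light; all direction vectors are unit length.
// n = surface normal, l = direction to the light, v = direction to the camera.
static float phong(const Vec3& n, const Vec3& l, const Vec3& v, float shininess)
{
    float ambient  = 0.1f;
    float diffuse  = std::max(dot(n, l), 0.0f);
    Vec3  r        = sub(scale(n, 2.0f * dot(n, l)), l);     // light direction reflected about the normal
    float specular = std::pow(std::max(dot(r, v), 0.0f), shininess);
    return ambient + diffuse + specular;   // scale by light and material colours in a real shader
}

int main()
{
    Vec3 n{0, 1, 0}, l{0.707f, 0.707f, 0}, v{0, 1, 0};
    std::printf("intensity: %.3f\n", phong(n, l, v, 25.0f));
    return 0;
}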

Intersections, Distance Functions and Ray Marching:

Implicit Functions

So we know why we need to fire the rays, but how do we know when a ray has hit a surface? There are a few different ways this can be done, depending on the complexity of the geometry you're trying to render. Ray intersections with simple shapes such as planes or spheres can be calculated precisely using linear and quadratic equations respectively. Additionally, for complex explicit 3D models made from triangle meshes, linear algebra and vector maths can be used to compute the intersections.
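As an example of the quadratic case mentioned above, here is a minimal C++ sketch of a ray-sphere intersection test; the structure and helper names are illustrative, not taken from my prototypes.

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and the nearest positive hit distance t if the ray o + t*d
// (d normalised) intersects a sphere of radius r centred at c.
static bool intersectSphere(const Vec3& o, const Vec3& d, const Vec3& c, float r, float& t)
{
    Vec3  oc   = sub(o, c);
    float b    = dot(oc, d);                       // half of the usual 'b' term, since a = 1
    float disc = b * b - (dot(oc, oc) - r * r);
    if (disc < 0.0f) return false;                 // the ray misses the sphere
    float s = std::sqrt(disc);
    t = -b - s;                                    // nearer root first
    if (t < 0.0f) t = -b + s;                      // ray origin is inside the sphere
    return t >= 0.0f;
}

int main()
{
    float t;
    if (intersectSphere({0, 0, -5}, {0, 0, 1}, {0, 0, 0}, 1.0f, t))
        std::printf("hit at t = %.2f\n", t);       // expected: 4.00
    return 0;
}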

Another technique has been gaining popularity in recent years, despite having been around for quite some time in academic circles. Rendering complex implicit geometry using 'distance functions', with nothing but a pixel shader on your GPU, as shown on websites like Shadertoy, has popularised a subset of ray tracing called 'ray marching', requiring no 3D mesh models, vertices or even textures to produce startlingly realistic real-time 3D renderings. It is, in fact, the very freedom from mesh constraints that becomes apparent when you observe the complex, organic and smooth ray-marched geometry the technique makes possible. Ray marching allows you to do things you simply cannot do using explicit meshes, such as blending surfaces seamlessly together, akin to sticking two lumps of clay together to form a more complicated object. Endless repetition of objects throughout a scene, at little extra cost, using simple modulus maths is another nifty trick, allowing for infinite scenes. By manipulating the surface positions along cast rays you can effectively transform your objects: twist, contort and even animate; it's all good stuff.
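The core of the technique is the 'sphere tracing' loop: step along the ray by the distance returned by the scene's distance function until you are close enough to a surface to call it a hit. A minimal C++ sketch is below, using a single-sphere scene for brevity; the step limits and epsilon are typical choices rather than values from my renderer.

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Distance to the nearest surface in the scene: here just a unit sphere at the origin.
static float sceneDistance(const Vec3& p)
{
    return std::sqrt(p.x*p.x + p.y*p.y + p.z*p.z) - 1.0f;
}

// March along origin + t*dir (dir normalised); returns true if a surface was hit.
static bool rayMarch(const Vec3& origin, const Vec3& dir, float& t)
{
    t = 0.0f;
    for (int i = 0; i < 128; ++i)                  // cap the number of steps
    {
        Vec3 p{ origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t };
        float d = sceneDistance(p);
        if (d < 0.001f) return true;               // close enough: treat as a hit
        t += d;                                    // always safe to step by the distance value
        if (t > 100.0f) break;                     // give up beyond the far limit
    }
    return false;
}

int main()
{
    float t;
    if (rayMarch({0, 0, -5}, {0, 0, 1}, t))
        std::printf("surface hit at t = %.3f\n", t);   // roughly 4.0
    return 0;
}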

The Dissertation Project:

My dissertation project comprised two parts: a prototype phase to create a ray tracer using GPGPU techniques, and a hefty report detailing the theory, implementation and outcomes. For those unfamiliar, general-purpose computing on graphics processing units (GPGPU) is an area of programming aimed at using the specialised hardware found in GPUs to perform arithmetic tasks normally carried out by the CPU, and is widely used in supercomputing. Though a CPU core is individually much more powerful than a GPU's processors, GPUs make up for it in sheer numbers, meaning they excel at and outperform CPUs on simple, highly parallel tasks. Ray tracing is one such highly parallel candidate well suited to GPGPU techniques, and for my dissertation I was tasked with using NVIDIA's GPGPU framework, CUDA, to create an offline ray tracer from scratch, using no existing graphics API. Offline rendering means not real-time; it is clearly unsuitable for games, yet it is commonly used in the 3D graphics industry for big-budget animations like those by Pixar and DreamWorks, with each frame individually rendered to ultra-high quality, sometimes taking in excess of 24 hours per frame.

In the end I produced four different ray tracing prototypes for comparison, incorporating the previously mentioned techniques. Prototype 1 ran purely on a single CPU thread, using simple implicit intersections of spheres and planes. Prototype 2 was the same, but implemented as a single CUDA kernel running purely on the GPU across millions of threads. Prototype 3 was a CPU ray marcher using distance functions to render more complex implicit geometry. Prototype 4 was the same as 3, but implemented using CUDA. My aim for the project was to assess GPGPU performance and the rendering qualities of the ray marching technique, the findings of which can be found in the report.

I knew when I picked this project that I was not taking on an easy topic by any stretch, and a great thing I can take away from it is the extensive research and planning needed to simultaneously implement many difficult concepts I had no prior knowledge of, while still managing to produce a cohesive project and fully working prototypes, achieving an 88% mark for my efforts, which I am very pleased with. As expected, with hindsight there are things I would do differently if I repeated it, but nothing too major, and really, it's all part of the learning process.

Ray tracing, ray marching, GPGPU, CUDA, distance functions and implicit geometry were all concepts I had to pick up and learn. I bought some books, but in the end, research on the internet in the form of tutorials, blogs, academic papers and lectures proved more beneficial. Sometimes it takes a certain way of presenting the information for your brain to 'click' with a principle, and all of us are different. The internet is a treasure trove in this regard: if you spend the time, you can usually find an explanation that suits your grey matter, and failing that, re-reading it a million times can sometimes help!

Future Plans:

On the back of this, I will be continuing the subject into my master's degree and will likely pursue it further in my master's dissertation. I am already busy at work on a real-time implicit renderer with UI functionality running in DirectX 11 (a couple of early screenshots above). Additionally, I'd love to get the chance to contribute to a research paper on the subject, but we'll see.

I plan to make some easy-to-follow tutorials on implementing ray tracing and ray marching for this website at some point, when I get the chance. Hopefully they will help out other students or anyone else wanting to learn the aforementioned topics. I know first-hand, and from friends, that it can be frustrating at times: although there is plenty of theory out there, there is comparatively little information on actual implementation details, compared to, say, pipeline-based rendering.

My BSc in Computer Science – Results Summary

The past three years at the University of Hull have flown by incredibly fast; a good sign that I have thoroughly enjoyed my time there studying for my BSc in Computer Science with Games Development. In fact, it was probably one of the best decisions I ever made, despite how hard it was to take up the challenge as a 27-year-old with commitments and nearly 10 years since my last academic study.

My plan now is to continue at Hull University to study a postgraduate MSc in Computer Science. Relocation and seeking employment will be on the cards afterwards, but having 'put my all' into the past several years, I am proud of the results I have achieved and I certainly never expected to do as well as I did, achieving a First Class Honours degree. Below is a summary of my results from the past three years:

Year 1

Module Mark Credit
Computer Systems 73 20
IT and Professional Skills 80 20
Programming 1 92 20
Programming 2 96 20
Quantitative Methods for Computing 87 20
Software Engineering and HCI 77 20
Year 1 average

Year 2

Module Mark Credit
2D Graphics and User Interface Design 89 20
Advanced Programming 83 20
Artificial Intelligence 78 20
Networking and Games Architecture 88 20
Simulation and 3D Graphics 94 20
Systems Analysis, Design and Process 83 20
Year 2 average

Year 3

Module Mark Credit
Commercial Games Development 81 20
Games Programming & Advanced Graphics 94 20
Mobile Devices and Applications 83 20
Visualization 86 20
Development Project 88 40
Year 3 average

 

A ‘Mature’ Reflection:

To anyone out there reading this who falls into the mature student category, being a little older and thinking of studying for a degree, I would say this: if you are passionate about the subject you want to study, have proven your interest in it through personal projects, and can cope with the lower standard of living while you study, then go for it and don't look back. It's not just about career development, but also a time of personal achievement and self-discovery, where you can find out much about your own abilities that perhaps you never knew you had. I think many people can muddle on in life not knowing if they would be any good at 'this' or 'that'. A formal degree can help answer this, giving you confidence in that discipline, which can be its own reward. Generally speaking, unless you're lucky enough to be the next Einstein, people achieve great things not through raw intellect or genius, but through hard work and effort. In this regard, mature students probably have a motivational advantage, since they have more to lose, less time to dawdle and life experience to help them focus.

 

Halloween Pumpkins – GLSL Programming

 

For the Advanced Graphics module as part of my BSc in Computer Science, we were tasked to create a 3D scene with a theme of a ‘Halloween Pumpkin Party’. The scene was produced using RenderMonkey and programmed via GLSL vertex and fragment shaders.

The scene displays a variety of shader effects including: cube mapping, displacement mapping, height bump-mapping, parallax bump-mapping, fragment-based lighting, particle systems, texture billboarding, smooth-step vertex transformations and stencil masks.

Below is a brief description of each component of the scene and how it was implemented.

Environment

Cube Mapped Skybox

I created a new cube map using several textures by creating a DDS file using the ‘DirectX Texture Tool’. The cube map was then applied onto a cube model in RenderMonkey.

Terrain Displacement Map and Height Map

Terrain Displacement Map

The terrain features texture displacement mapping, a height bump map and fragment lighting. It was made using a single tessellated plane with a terrain texture. In the vertex shader I displaced each vertex along its normal using the texture colour values. I applied a uniform coefficient to control scaling.

A separate texture is used for bump mapping to create a grass effect. The bump mapping was done by transforming the view direction and light direction into tangent space via a matrix; in the fragment shader I retrieved the height map data, calculated the difference between two pixel samples and determined the normal for each fragment. All other objects in the scene that use height bump maps are done the same way.

Dispersed Fog Particle System

Fog Particles

The fog is implemented using a particle system and a quad array. A time coefficient is first calculated, and then another coefficient is used to progressively spread the particles apart from each other. Each quad in the system is 'billboarded' to always face the view, which is achieved using the inverse view matrix. The fog colour transitions across the texture by decrementing its coordinate using the timer, resulting in multi-hued particles. A smooth fade is added around the edge of each quad to help it blend better. By increasing the size of the particles, lowering the speed and extending the particle system's range, I created the above effect.

Fireworks Particle System

Firework Particle System

The fireworks use the same principles as the fog, but with a different algorithm. All particles start on top of each other, ascend into the air and then spread apart, slowly drifting down. This is achieved by setting an initial velocity and checking whether each particle is below the explosion threshold: if it is, the particle is moved upwards with positive velocity; if not, it is moved with negative velocity and spread apart from the others over time, so the particles slowly fall back down.

Pumpkins

Pumpkin 1

Cube Mapped Pumpkin

Features:

  1. Cube mapped.

Each fragment is coloured using a reflection vector to access the texture data from the cube. The shape is a 3D model.

Pumpkin 2

Parallax Bump-mapped Pumpkin

Features:

  1. Parallax bump mapping (normal/height map).
  2. Non-uniform vertex transformation (light flickering).
  3. Flame billboard.
  4. Fragment lighting.
  5. 3D model used.

The parallax bump-mapping gives a nice bumpy surface using a simple brick texture. The effect is achieved in the fragment shader by retrieving the normal and height texture data and then correcting the texture coordinate.

I created a nice lighting effect to simulate flickering flame light. It works by displacing the normal slightly based on a sine function. This is done on all flame pumpkins.

Flame

Flame billboard

The pumpkin flame is created using three different textures: a shape layer, a colour layer and a noise layer. The vertex shader billboards the quad, and in the fragment shader the layers are animated and transformed.

Pumpkin 3

Stencil-masked Spherical Pumpkin

Features:

  1. Stencil masked cut-out holes.
  2. Smooth step transformation from a sphere. Top is removed.
  3. Height Bump Mapping.
  4. Non-uniform vertex transformation (breathing, veins swelling, light flickering).
  5. Flame billboard.
  6. Fragment lighting.

The face is made using holes that are cut out using a simple face texture as a stencil mask and then discarding fragments. The pumpkin shape is made from a basic sphere that has been stretched and the top removed in the shader.

A breathing effect has been added where the veins on the texture swell when the pumpkin exhales; this is achieved by applying a sine function to the bump normal. The breathing itself is done using a 'smooth step' sine function on the lower vertices.

Pumpkin 4:

Glowing Pumpkin

Features:

  1. Glowing eye and mouth holes via blended billboard.
  2. Glowing aura via billboard texture.
  3. Non-uniform vertex transformation (light flickering).
  4. Fragment lighting.
  5. 3D model used.

The glowing eyes and mouth are made using separate passes, by billboarding a texture and blending it over the holes. A direction is calculated so that the glow only appears when the pumpkin is facing the camera.

Pumpkin 5

Transformed and displaced pumpkin from teapot model

Features:

  1. Smooth step transformation from a teapot. Handle and spout translated inside.
  2. Wings extruded via smooth step and animated.
  3. Displacement mapped spikes.
  4. Hovering animation.
  5. Height Bump mapped fur.
  6. Fragment lighting.

The shape is made by translating the spout and handle vertices inside the pot. The wings are extruded via smooth step to make them curved. The spikes are made by deforming the vertices along their normals based on a texture. The hovering is done by applying sine and cosine functions to the vertices' x and z components, and the wings are animated in a similar way.

Gravestones

Simple 3D models featuring bump-mapping and fragment lighting.

Summary

The project was challenging and very fun to work on, allowing me to learn many different shader rendering techniques and effects that are a staple of modern graphics and games programming. Using RenderMonkey allowed the focus to be purely on shader programming rather than on the OpenGL framework (handling model loading, vertex buffers and so on), which made sense considering the limited time allocated for the coursework. I was also very pleased to receive a mark of 90%!

Final Scene

Dungeon Master – An Iconic RPG

Box Art

Aged probably no more than six, I looked on in excitement and fear at the Amiga monitor. My parents were playing Dungeon Master again: its labyrinthine dungeons, fiendish puzzles, stunning graphics (for the time), and always death, waiting around the next corner.

Dungeon Master was a pivotal game of my childhood; it taught me how real and immersive games could actually get, despite the computer limitations of the era. Using a 2D perspective trick, it could render a seemingly 3D environment as if seen from the eyes of the player. This of course was an illusion, but it was done so effectively that it stood out back then with hugely impressive visuals. It wasn't just nice to look at though; featuring groundbreaking level design and puzzle concepts, and being brutally difficult but still rewarding, there was something about it that left a lasting impression on you. It was a little like the Dark Souls of its day.

Although there had been other well-known 'dungeon crawler' games (as they came to be known), like Bard's Tale and Wizardry, it was DM that really brought together the best attributes of the genre, distilling them into what is, in my opinion, the best of the lot, even to this day. It's no coincidence that Almost Human's 'Legend of Grimrock' in 2012 cited Dungeon Master as a large inspiration, something that is clearly evident having finished Grimrock and noticed the many tips of the hat to DM's puzzles, mechanics and creatures. All those puzzles of putting an item on a pressure plate to close a pit, or placing a torch in a wall sconce to open a secret door, hearken back to this era.

DM was in fact the best-selling title of all time for the Atari ST, whose version differed only mildly from the Amiga's, with the latter featuring improved 3D sound effects; most noticeably, you can hear creatures moving around to unnerving effect.

Using the free Amiga emulator WinUAE over the past week, I have finally finished Dungeon Master after all these years. I loved every second of it, scarily so, because I kept telling myself throughout, "why am I playing a 27-year-old game in this day and age?". Irrespective of the answer, I had more fun playing it than most state-of-the-art games I have played recently! Why? Well, many reasons; the challenge and immersion are two, but ultimately I guess I'm a pretty hardcore gamer and there's just something about playing old-school classic RPGs, a charm or ambiance if you like, akin to rolling the dice in a pen-and-paper D&D game. I'm sure many can empathize with that.

A pack of skeletons.

Dungeon Master does have a story and plot, though it is sparse and not the driving force behind the progression of the game. It revolves around having to descend into the depths of the mysterious dungeon and find an artifact known as the 'Firestaff', as tasked by your master, 'Lord Order'. Ultimately, if your party survives the horrors long enough, you come across writings detailing the evils that will occur should you complete this quest, and instead come to realise that you must descend to the deepest depths of the dungeon, combine the staff with the 'power gem' and defeat 'Lord Chaos' (think Sauron), restoring 'Balance' to the world.

You start the journey in the 'Hall of Champions', a place at the start of the dungeon where you can look upon windows in the walls and see magically suspended heroes, whom you can either 'resurrect' or 'reincarnate' to join your party, up to a total of four members. Resurrection ensures a character maintains their identity, combat skills and experience, whereas reincarnation allows you to rename the character, forfeiting their skill set but gifting them enhanced physical attributes so as to enhance learning, letting you shape the character as you see fit. Ultimately, the tried-and-tested composition of two fighters at the front with a priest and a wizard at the back worked wonders for my play-through, though four 'jacks-of-all-trades' is viable too. As in 2012's Grimrock, the party moves through the dungeon in first-person view in a 2-by-2 formation, meaning that only the two members at the front can reach enemies with melee weapons, with the back two having to rely on ranged and throwing weapons and spell casting. Consequently, only the front two will take damage from the front, and if a 'baddie' creeps up behind you, your squishy casters won't be very happy. Part of the game's meta-strategy involves being able to rearrange your formation at any time, e.g. if your front fighters get wounded, you can swap them out with the back.

Character Inventory.

The predominant theme of the game is undoubtedly 'survival'. Staying alive is really, really not easy unless you have learned the tricks and techniques generally gained after many horrible deaths, whether to the jaws of giant worms, starvation, or plummeting down a pit to arrive several levels lower than you could possibly hope to deal with. The only items at your disposal are those you find along the way, and that way is strewn with illusory walls, guarded chests, locked doors and secret passages that, without consulting a guide or a printed map, you have little chance of ever finding yourself (hand-holding, pfff, who needs it?). Even basic concepts we all take for granted in games today, such as being able to SEE, are premeditated game mechanics in Dungeon Master, where the dungeons are pitch black without a light source and torches are scarce, making a wizard or someone else with the skill to cast 'light' spells essential.

One of the most memorable mechanics, and one that is still pretty innovative today, is the magic system. Down the right side of the screen are a set of runic symbols, and the boxed game's manual documented an alphabet of these symbols describing their purpose. To cast a spell you first choose a rune representing its 'power': will you cast a short-duration spell or a potent offensive one? Then, sequentially, you choose the spell's 'elemental influence', 'form' and 'alignment'. It all sounds rather complicated, but when you know off by heart that a weak fireball is LO FUL IR and a potent healing potion is PAL VI, it starts to become second nature, especially when you realise you can drop the power level if your priest is low on mana and make a weaker healing potion like LO VI. In combat you are expected to click these runes in the correct order in the heat of the moment; you soon realise that if you don't 'get gud' and memorise them, you simply get 'dead'. In a funny kind of way, it really did feel like you were learning magic and having to go through the motions to learn and cast the spells your party depended on, and I love that.

These runes are used to cast all the game's many Wizard and Priest spells.

Other mechanics, such as the food and drink meters for each character, hang a constant dread over your party for the entirety of the game, since you never know when your next meal is coming or when you'll next see a fountain to refill your water skins. Realising you're lost deep somewhere with no water left and down to your last couple of hunks of meat is pretty terrifying. Luckily though, some of the critters are edible if you can kill them; 'Screamer Slice', 'Worm Round' or 'Dragon Steak', anyone? Yum!

A water fountain, always a welcome sight!

Having finally finished the game after all these years, I felt an immense sense of accomplishment, because it's a game I grew up thinking was simply too tough for me to contend with, and to be fair, being less than 10 at the time, it probably was! The end showdown with Lord Chaos is no simple matter. Once you have collected all the 'Ra keys', broken into the vault of the Firestaff, defeated its Stone Golem guardians and retrieved it, you then have to descend to the last level, defeat a wingless dragon and free the power gem with a spell you'd better have learned along the way or you're buggered! (*cough* Google). You must then combine the staff with the gem, creating an ultimate weapon, go back up a level and find Lord Chaos. Using the staff's power you must surround him with 'Fluxcages' and finally 'Fuse' him to restore 'Lord Balance' and beat the game. If that sounds straightforward, it really isn't, especially considering that even if you do figure all this out on your own from a subtle hint in a very well hidden scroll, you have to do it all while being attacked by demons and black flame elementals, with Chaos himself flinging fireballs at your face!

Defeating Lord Chaos and restoring Balance.

I'm currently playing through its sequel, 'Chaos Strikes Back' (yes... it really IS called that), and it is unbelievably hard; Dark Souls has nothing, not a bean, on this in terms of difficulty. CSB will eat you alive and then spit out your regurgitated remains for a second helping. Firstly, the sequel starts at the same difficulty level that DM ends at. You can import your characters, which you may think will help, and sure enough it does a little, but little prepares you for the first 10 seconds of the game, which pretty much go like this:

“Ok, let’s go, hmm it’s pitch black…where am I? My party is naked with no weapons…I can hear things moving around me…let me cast a light spell. That’s better! SHIT there’s four armoured worms in here with me and no exits…no wait, SIX worms…EIGHT….I’m surrounded…can’t move….DEAD.”

That's your first taste of Chaos Strikes Back: shoved into an infested pit of worms with no weapons and no obvious way out. But, like Dark Souls, I still love it.

As reported in 2012 in a Rock, Paper, Shotgun article, one chap amazingly spent six months, eight hours a day of his own time, programming 120,000 lines of code to port the Atari ST version, creating a C++ executable that runs today on any modern PC. It can be found here free: Chaos Strikes Back for Windows (and Linux, MacOS X, Pocket PC)

For those used to emulators, by getting hold of the Amiga .adf disk image (basically an image of the game disk), you can run it in WinUAE (my personal preference, for the better sound), but ultimately only ex-Amiga junkies would likely do this over the ported PC version :D.

Dungeon Master is a truly iconic game that has undoubtedly influenced many great games, not just across the dungeon-crawler and RPG genres, like the classic 'Eye of the Beholder' series or more recently 'Legend of Grimrock', but also modern popular AAA titles such as The Elder Scrolls. It's a testament to its influence that the game still has its own updated encyclopedia site: http://dmweb.free.fr/ and even an online message forum with an active and thriving community: http://www.dungeon-master.com/forum/.

I would encourage anyone who is curious about classic RPGs, or interested in why modern games are the way they are, to check out old titles like Dungeon Master, because although the graphics leave much to be desired by today's standards, the gameplay is still truly as good as it ever was. There is clearly still much to wonder and marvel at, in both the game design and execution of this old gem.

My party in the hall of fame after beating the game!

 

OpenGL Cross-platform PC/PSP Game Coursework

Last semester, as part of the Advanced Graphics module of my CS degree at Hull University, we were tasked with a group project to produce a cross-platform OpenGL mini-game for the PC and Sony PSP based on a specification. The game premise was to move around a 3D 'maze' consisting of four rooms and connecting corridors, avoiding a patrolling AI that would shoot you if you were within its line of sight. The objective was to collect three keys to activate a portal to escape and beat the game.

The groups of four members were selected completely at random. As usual, group coursework assignments are particularly difficult due to the extra concerns of motivating members and assigning work, and by year 3 of university you get a good idea of the best way of operating within them to secure good grades. I went in with the mindset of doing as much work as possible after we assigned tasks: hopefully each member would carry out their allocated work, and if not, I'd just go ahead and do it, no fuss. Luckily one chap in my group was a friend and he did an excellent job coding the AI, mini-map and sound, while I worked on coding the geometry, camera, lighting and player functionality.


Mini-maze model

Static environment lighting

Cross-Platform Limitations:

Having worked with OpenGL and shaders last year for my 3D 'The Column' project, it was somewhat limiting to realise that the PSP didn't support them and that fragment-based lighting was a no-go. With one requirement of the game being a torchlight effect that illuminated the geometry, PSP compatibility meant vertex-based lighting would need to be implemented, and that meant tessellating primitives to prevent the lighting from looking very blocky and... well, very 90s. Luckily the PSP did at least have support for VBOs (Vertex Buffer Objects), which meant each tessellated model could be loaded onto the graphics card only once to improve performance.

Unified Code

An interesting aspect of this project was the required consideration for a consolidated code-base that, where possible, allowed shared functionality across both the PC and PSP platforms, i.e. limiting how much platform-specific code was used. This was essential since the game would be a single C++ solution for both platforms.

I designed the code structure based around principles Darren McKie (the course lecturer) described, and produced the following class diagram that reflects the final structure:

Unified Cross-platform Class Diagram

The majority of the game code resides in 'Common Code' classes that are instantiated by each platform's 'Game' object. Certain code, such as API rendering calls, was kept platform-specific but made use of the common classes where necessary. A particularly nice way of ensuring the correct platform-specific object was instantiated was to use '#ifdef' / '#ifndef' preprocessor directives, handled by a 'ResourceManager' class.
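To illustrate, here is a generic C++ sketch of that preprocessor-based selection; the class names and the PLATFORM_PSP define are illustrative assumptions rather than the actual coursework code.

#include <memory>
#include <cstdio>

struct IRenderer
{
    virtual ~IRenderer() = default;
    virtual void Draw() = 0;
};

#ifdef PLATFORM_PSP
struct PspRenderer : IRenderer
{
    void Draw() override { std::printf("PSP render path\n"); }
};
#else
struct PcRenderer : IRenderer
{
    void Draw() override { std::printf("PC OpenGL render path\n"); }
};
#endif

// A ResourceManager-style factory: common code asks for a renderer and receives
// whichever concrete type was compiled in for the current platform.
static std::unique_ptr<IRenderer> CreateRenderer()
{
#ifdef PLATFORM_PSP
    return std::make_unique<PspRenderer>();
#else
    return std::make_unique<PcRenderer>();
#endif
}

int main()
{
    auto renderer = CreateRenderer();
    renderer->Draw();
    return 0;
}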

As mentioned earlier, per-vertex lighting had to be implemented for PSP compatibility, and a primitive with a low number of vertices would result in very blocky lighting. To prevent this I created a tessellation function that subdivided each primitive's triangles into many more. I played around with the tessellation depth to find how many iterations of subdivision could be applied before inducing lag, and was very happy with the lighting result considering there is no fragment shader, a given in today's modern pipeline-based rendering.
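As an illustration of that kind of subdivision, here is a minimal C++ sketch that splits each triangle into four by inserting edge midpoints; the data layout is an assumption and ignores normals, texture coordinates and index buffers for brevity.

#include <vector>
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 a, b, c; };

static Vec3 midpoint(const Vec3& p, const Vec3& q)
{
    return { (p.x + q.x) * 0.5f, (p.y + q.y) * 0.5f, (p.z + q.z) * 0.5f };
}

// One subdivision pass: each triangle becomes four smaller ones,
// giving per-vertex lighting more vertices to interpolate between.
static std::vector<Triangle> subdivide(const std::vector<Triangle>& in)
{
    std::vector<Triangle> out;
    out.reserve(in.size() * 4);
    for (const Triangle& t : in)
    {
        Vec3 ab = midpoint(t.a, t.b), bc = midpoint(t.b, t.c), ca = midpoint(t.c, t.a);
        out.push_back({ t.a, ab, ca });
        out.push_back({ ab, t.b, bc });
        out.push_back({ ca, bc, t.c });
        out.push_back({ ab, bc, ca });
    }
    return out;
}

int main()
{
    std::vector<Triangle> mesh = { { {0, 0, 0}, {1, 0, 0}, {0, 1, 0} } };
    for (int depth = 0; depth < 3; ++depth)   // tessellation depth found by experiment
        mesh = subdivide(mesh);
    std::printf("triangles after subdivision: %zu\n", mesh.size());  // 64
    return 0;
}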

Active Portal

The PSP implementation proved trickier due to getting to grips with the PSP SDK with access to very little documentation; however, the game was successfully implemented on a PSP device and ran with decent performance after compressing the textures and removing the geometry tessellation to fit within the PSP's limited memory capacity.

The game was written in C++ and the following libraries and software were used:

  • GXBase OpenGL API
  • Sony PSP SDK
  • OpenAL
  • Visual Studio 2012
  • Paint.NET

Solar System Orrery – HTML5 Canvas


Orrery Zoom

For quite a while I've been meaning to arrange some web hosting and put my solar system Orrery online for people to access, and I'm pleased to say I've finally got around to doing it.

(Click here to go to the Interactive Orrery)

The project was part of the 2D Graphics module coursework for my Computer Science degree. It's written in JavaScript and utilises the powerful HTML5 canvas for rendering.

It's not an accurate scientific representation; however, the planets' distances are to scale in relation to each other (though not in relation to the Sun), and the frequency with which each planet completes a full orbit (its year) is also accurate to real life. There are two orbit modes, 'circular' and 'elliptical', and also two simulation modes where acceleration and velocity are calculated from the mass of each object and thus the force of gravity. One simulation mode keeps the Sun centred while the planets orbit around it; the second allows the Sun to be affected by its orbiting bodies.
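Both simulation modes boil down to Newtonian gravity plus a simple integration step each frame. Below is a minimal sketch of the idea in C++ (the Orrery itself is JavaScript); the gravitational constant, masses and time step are placeholder values, not those used on the page.

#include <vector>
#include <cmath>
#include <cstdio>

struct Body { double x, y, vx, vy, mass; };

// Advance all bodies by one time step using pairwise Newtonian gravity
// and a simple Euler integration (G is set to 1 for illustration).
static void step(std::vector<Body>& bodies, double dt)
{
    const double G = 1.0;
    for (Body& a : bodies)
    {
        double ax = 0.0, ay = 0.0;
        for (const Body& b : bodies)
        {
            if (&a == &b) continue;
            double dx = b.x - a.x, dy = b.y - a.y;
            double distSq  = dx * dx + dy * dy + 1e-9;     // avoid division by zero
            double invDist = 1.0 / std::sqrt(distSq);
            double accel   = G * b.mass / distSq;          // F/m_a = G*m_b/r^2
            ax += accel * dx * invDist;                    // direction towards b
            ay += accel * dy * invDist;
        }
        a.vx += ax * dt;
        a.vy += ay * dt;
    }
    for (Body& a : bodies) { a.x += a.vx * dt; a.y += a.vy * dt; }
}

int main()
{
    // A 'sun' and one small planet given a tangential starting velocity.
    std::vector<Body> bodies = { { 0, 0, 0, 0, 1000.0 }, { 100, 0, 0, 3.0, 1.0 } };
    for (int i = 0; i < 1000; ++i) step(bodies, 0.01);
    std::printf("planet position: (%.2f, %.2f)\n", bodies[1].x, bodies[1].y);
    return 0;
}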

elliptical

It's really a bit of fun: you can create new planets of enormous size by simply holding down your mouse on the simulation until you're happy with the size, then letting go and watching how all the orbiting bodies are affected. You can also flick the planet as you release it to set its starting velocity (this seems to work much better in Chrome than in IE). I also highly recommend running it in full-screen mode by pressing 'W' if you have a reasonably specced system.

Another cool thing is the zoom feature: if you pause the program via 'P' you can scroll around with the cursor keys and take a look at some of the relatively hi-res images I used for each planet. The Earth and its orbiting Moon are pretty cool to zoom right into, as pictured above.

Detailed instructions are available on the page. Please check it out here and have a play around: www.alexrodgers.co.uk/orrery

simulation

Exchange Reports Project Overview

During this summer, in between semesters, I was fortunate enough to get a software development job with a local company just 10 minutes' walk from my door. The project was to produce an 'Exchange Reports' system that would provide email messaging statistics exactly to the customer's specification. The system would be automated, so that once reports were designed they would be generated programmatically by a service and emailed to any recipients set up to receive each report. The solution comprised three distinct programs that would need to be developed, along with configuration tools to set up the non-GUI processes in the solution (namely the services).

I have produced the following diagram to illustrate the solution's processes (click to enlarge):

The design was in place when I started and an existing code-base was also present, but the vast majority of the functionality still needed to be added. It was the first time I had worked professionally as a software engineer, and therefore also the first time getting to grips with existing code written by developers no longer around, and with understanding the solution's technical proposal well enough to deliver exactly what the customer and my employer wanted. I think working in IT professionally for a lot of years certainly helped me get into a comfortable stride after the initial information overload of solely taking on what was a surprisingly large but beneficial technical project compared to what I had envisioned. Being thrown in at the deep end is probably the fastest way to improve, and I feel that, above all, I have taken a lot from this experience which will prove valuable in the future. I'm very pleased with the outcome and successfully got all the core functionality finished in the time frame that was assigned. I would wholeheartedly encourage students thinking of getting professional experience to go for it, ideally with an established company from which you can learn a great deal; having experienced developers around to run things by is a great way to improve.

Now onto the technical details. The project was coded in C# and used WinForms, initially for testing the processes and later for the configuration programs. I used a set of third-party .NET development tools from DevExpress that proved to be fantastic and a massive boon for anyone wanting to create quick, great-looking UIs with reporting functionality. SQL Server provided the relational database functionality, an experience I found very positive; I very much enjoyed the power of SQL when it came to manipulating data via .NET data tables, data adapters, table joins or just simple direct commands.

Using the diagram as a reference, I'll briefly go through each process in the solution, for (a) those interested in such things and (b) future reference for myself while it's still fresh in my mind, because I'll likely forget much of how the system works after a few months of 3D graphics programming and uni coursework :P.

Exchange Message Logs: 

In Exchange 2010, message tracking logs can be enabled quite simply and provide a wealth of information that can be used for analysis and reporting. They come in the form of comma-delimited log files that can be opened with a text editor. They have been around for a lot of years, and in the past, during IT support work, I have found myself looking at them from time to time to diagnose various issues. This time I'd be using them as the source of data for a whole reporting system. The customer was a large international company, and to give an example, just one of their Exchange systems was producing 40 MB of these messaging logs each day. With these being effectively just text files, that's an awful lot of email data to deal with.

Processing Service: 

The first of the three core components of the solution, the Processing Service, as the name suggests, is an installable Windows Service that resides on a server with access to the Exchange messaging log files. The service is coded to run daily at a specified time and its purpose comprises five stages:

1. Connect to the Exchange server and retrieve a list of users from the Global Address List (GAL). This is done using a third-party Outlook library called 'Redemption' that enables this information to be extracted and then checked for any changes to existing users and/or any new users. The users are placed in a table on the SQL database server and are used later to provide full name and department information for each email message we store.

2. Next, each Exchange message log is individually parsed, and useful messaging information is extracted and stored in various tables on the database server. Parsed log file names are kept track of in the database to prevent reading logs more than once.

3. Any message forwards or replies are identified and tallied up.

4. A separate Summary table on the database is populated with data processed from the previously mentioned message tables. This table is what the reports look at to generate their data. Various calculations are made, the time difference between an email being received and then forwarded or replied to (to gauge estimated response times) being just one example; a whole plethora of fields are populated in this table, far more than could comfortably fit on a single report. Because of this large amount of potentially desirable data, we later allow the user to select which fields they want from the Summary table in the 'Report Manager' if they wish to create a custom report; alternatively, and more typically, they use predefined database 'Views' that have been created for them based on the customer's specification, which gives them access to only the data they need. Database views are a really neat feature.

5. The database's messaging tables are scoured for old records beyond a threshold period, which are deleted. This maintenance is essential to prevent the table sizes growing too large. The associated Summary data that has been generated is still kept, however, and I added functionality to archive it by serialising it out and deleting it from the database if required.

Report Manager:

Initially we had thought to utilise DevExpress's 'Data Grid' controls in a custom Forms application, but we decided that the appearance of the reports generated this way was not satisfactory. This turned out to be a good design decision, since we later discovered DevExpress's remarkable reporting controls, which allow very powerful design and presentation features that completely overshadow the Data Grids. After migrating some code from the old 'Report Manager' program and spending a day or two researching and familiarising myself with the DevExpress API, I had a great-looking new application that the customer will be using to design and manage the reports.

Report Manager program

The Report Manager allows you to design every aspect of a report through an intuitive drag-and-drop interface. Images and various graphics can also be added to beautify the design, though that wasn't something I did, nor had the time to attempt! The data objects can be arranged as desired, and the report's 'data source' information is saved along with its design layout via a neat serialisation function inherent to the 'XtraReport' object in the DevExpress library, which is then stored in a reports table on the database server for later loading or building. You can also generate the report on the fly and export it into various formats such as PDF, or simply print it. Another neat built-in feature is the ability to issue SQL query commands through a user-friendly filter designer aimed at non-developers, with the filter stored along with the layout. The user designing the report therefore has absolute control over the data: a quick filter on Department being 'Customer Services', for example, returns only that department's message data without me needing to code a method to do it manually, as was the case when using the Data Grids.

In the top left you'll see specific icons that provide the necessary plumbing to the database server. 'Save', 'Save As' and 'Load' respectively write the serialised report layout to the database, create a new record with said layout, or load an existing saved report from the database into the designer. Loading is achieved by retrieving the list of report records stored in the reports table and placing it into a Data Grid control on a form, where you can select a report to load or delete. The 'Recipients' button brings up the interface for managing users who want to receive the report by email; this retrieves the user data imported by the Processing Service and populates a control that allows you to search for and select a user, or manually type a name and email address to add a custom recipient. Additionally, upon adding a recipient to the report you must select whether they wish to receive it on a daily, weekly or monthly basis. This information is stored in the aptly named recipients table and related to the reports via a reportID field.

Report Service:

Nearly there (if you've made it this far, well done): the last piece in the solution is another Windows Service called the 'Report Service'. This program sits and waits to run as per a schedule that can be set by a configuration app I'll mention shortly. Like the Processing Service, part of its logic is to check whether it's the right time of day to execute, so the service polls itself every few minutes to see if this is the case. Upon running, it looks to see if it's the right day for daily reports, day of the week for weekly reports, or day of the month for (you guessed it) monthly reports. If it is, it grabs the joined data from the reports and recipients tables, builds each report and fires them out as PDF email attachments to the associated recipients. It makes a final note of the last time it ran to prevent it repeatedly running on the same valid day.

Configuration Tools:

Two configuration apps were made, one for the Processing Service and one for the Report Service. These two services have no interfaces, since they run silently in the background, so I provided an XML settings file and the two apps to store a variety of important data such as SQL connection strings and (encrypted) server authentication details, to expose certain manual debugging options that may need to be executed, and to provide an interface for setting both services' run times and the report delivery schedule.

Screens below (click to enlarge):

So that's the solution, start to finish. Depending on time, I'm told it could possibly be turned into a product at some point, which would be great, since other customers could potentially benefit from it too.

The great thing about a creative industry like programming, whether business or games, is that you're ultimately creating a product for someone to use. It's nice to know people somewhere will get use and function out of something you have made, which is just one reason I've thoroughly enjoyed working on the project. I've learned a lot from my colleagues while working on it and hope to work with them again. You also get a taste of real-life professional development and how it differs from academic teaching, which, although very logical and sensible, is also idealistic (and rightly so). In the real world, when time is money and you need to turn projects around to sustain the ebb and flow of business, you have to work in a realistic fashion, which might mean cutting some corners when it comes to programming or software design disciplines. I always try my best to write code as cleanly as possible and this was no exception, but ultimately you need to get the project done first and foremost, and it's interesting how that can alter the way software development pans out, with niceties like extensive documentation, 'Use Case' diagrams and robust unit testing potentially falling by the wayside in favour of a speedier short-term turnaround. Larger businesses can no doubt afford to manage these extra processes to great effect, but for small teams of developers it's not always realistic, which I can now understand.

The Column: 3D Graphics Simulation


As the single fully weighted piece of work for the 3D Graphics module during the second year of my Computer Science degree at Hull University, I had to create an OpenGL graphics simulation. Despite having had little prior experience of using 3D graphics frameworks, I am very pleased with the outcome and look forward to spending a lot more time with both the OpenGL and DirectX APIs; in particular, my final year project looks to be a ray tracing renderer (potentially CUDA), which should give me additional exposure to what is becoming a more and more promising technology for gaming.

I wrote a report to accompany the finished program, parts of which I'll include below to explain the project and how the simulation works.

The Column

The Column is a 3D graphics simulation designed around a series of stacked boxes containing cylinders. Balls are emitted at the top of the stack and interact with both the geometry and each other by way of collisions and responses. In addition, the simulation features a 'Sphere of Doom', a large sphere near the bottom of the stack that absorbs balls, shrinking their size and mass. A portal at the bottom of the stack transports any balls that enter it back to the top of the column. The entire simulation is made using OpenTK (OpenGL) in C#. All geometry is constructed and all physics computed mathematically.

The specification required that one emitter emit balls with approximately the density of aluminium, the second copper, and the third gold.

The program simulates a dynamic system through various means. The balls use Euler integration with a gravitational constant which, combined with each ball’s calculated velocity, mass and density, simulates their motion as they fall down the column.
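
As a rough sketch of that integration step (not the project’s actual code), something along these lines works in C# with OpenTK’s Vector3; the Ball fields and the way gravity is passed in are illustrative.

// Minimal sketch of explicit Euler integration for a falling ball.
// Field names and constants are illustrative, not the project's actual code.
using OpenTK;

class Ball
{
    public Vector3 Position;
    public Vector3 Velocity;
    public float Radius;
    public float Density;   // e.g. approximately aluminium, copper or gold, per the spec

    // Mass follows from density and the volume of a sphere.
    public float Mass => Density * (4f / 3f) * MathHelper.Pi * Radius * Radius * Radius;

    public void Integrate(float dt, Vector3 gravity)
    {
        // Explicit Euler: accumulate acceleration into velocity, then velocity into position.
        Velocity += gravity * dt;
        Position += Velocity * dt;
    }
}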

Ball-to-ball collision response is handled via “elastic collisions” based on the masses of the balls and their velocity components along the collision normal, so a heavier ball will knock a lighter ball out of the way. Additionally, the angle of impact affects the amount of force transferred.
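
A rough sketch of that kind of mass-weighted response along the collision normal is shown below; it assumes the Ball class from the previous sketch and is illustrative rather than the project’s exact implementation.

// Sketch of an elastic response along the collision normal between two balls.
// Assumes the Ball class from the previous sketch; illustrative only.
using OpenTK;

static class BallCollisions
{
    public static void ResolveElasticCollision(Ball a, Ball b)
    {
        // Unit vector pointing from ball a to ball b: the line of impact.
        Vector3 normal = Vector3.Normalize(b.Position - a.Position);

        // Relative speed along that line; only this component exchanges momentum.
        float approachSpeed = Vector3.Dot(a.Velocity - b.Velocity, normal);
        if (approachSpeed <= 0f)
            return; // the balls are already separating

        float ma = a.Mass, mb = b.Mass;
        float impulse = (2f * approachSpeed) / (ma + mb);

        // Heavier balls change velocity less, so a heavy ball knocks a light one aside.
        a.Velocity -= normal * (impulse * mb);
        b.Velocity += normal * (impulse * ma);
    }
}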

Rendering is performed via OpenGL 3.1 using Vertex Buffer Objects. All primitive 3D models have been constructed manually or mathematically. I use GLSL vertex and fragment shaders for “Phong Shading” based ambient, diffuse and specular lighting calculations that provide interpolated lighting of geometry between vertices. My scene uses 3 point light sources and has built-in support for both directional and spot lights if desired.
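
For reference, the per-light Phong terms being evaluated look roughly like the following. This is a CPU-side C# sketch for clarity rather than the actual GLSL shader code, and the parameter names are illustrative.

// CPU-side sketch of the ambient + diffuse + specular Phong terms the shaders evaluate.
// Illustrative only; the real calculation lives in the GLSL fragment shader.
using System;
using OpenTK;

static class Phong
{
    public static Vector3 Shade(Vector3 normal, Vector3 toLight, Vector3 toEye,
                                Vector3 ambient, Vector3 diffuse, Vector3 specular,
                                float shininess)
    {
        Vector3 n = Vector3.Normalize(normal);
        Vector3 l = Vector3.Normalize(toLight);
        Vector3 v = Vector3.Normalize(toEye);

        // Diffuse term: strongest when the surface faces the light directly.
        float nDotL = Math.Max(Vector3.Dot(n, l), 0f);

        // Specular term: reflect the light direction about the normal, compare with the view.
        Vector3 r = n * (2f * nDotL) - l;
        float spec = nDotL > 0f
            ? (float)Math.Pow(Math.Max(Vector3.Dot(r, v), 0f), shininess)
            : 0f;

        return ambient + diffuse * nDotL + specular * spec;
    }
}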

I have implemented a particle system object that emits particles of a given shape. For performance I have used simple quad planes in the simulation, rotating them for added effect in combination with the lighting. The particles are highly customisable in lifetime, movement, scale and quantity and can be added for any desired event. I use them specifically for collisions with the Sphere of Doom and when balls spawn from the emitters.
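
The per-particle bookkeeping is simple; a minimal sketch (with illustrative fields, not the project’s exact implementation) might look like this.

// Minimal sketch of per-particle state and update; fields are illustrative.
using OpenTK;

struct Particle
{
    public Vector3 Position;
    public Vector3 Velocity;
    public float Scale;
    public float Life;   // remaining lifetime in seconds

    // Returns false once the particle has expired so the emitter can recycle it.
    public bool Update(float dt)
    {
        Life -= dt;
        if (Life <= 0f)
            return false;

        Position += Velocity * dt;
        Scale *= 0.98f;  // shrink a little each step for a simple fade-out
        return true;
    }
}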

My portals use a Frame Buffer Object which renders the scene from the desired camera position to a texture. I then switch back to the display frame buffer and render the entrance and exit portals with the respective textures, giving the effect of seeing through each portal to its destination, which in turn is updated in real-time.
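
Setting up that render-to-texture target with OpenTK looks roughly like the sketch below; the omission of a depth attachment and error checking is a simplification, and this is illustrative rather than the project’s actual code.

// Sketch of a render-to-texture target for a portal using OpenTK's OpenGL bindings.
// Depth attachment and error checking omitted for brevity; illustrative only.
using System;
using OpenTK.Graphics.OpenGL;

class PortalTarget
{
    public int Fbo { get; private set; }
    public int ColourTexture { get; private set; }

    public PortalTarget(int width, int height)
    {
        // Colour texture that the portal's view of the scene is rendered into.
        GL.GenTextures(1, out int tex);
        ColourTexture = tex;
        GL.BindTexture(TextureTarget.Texture2D, ColourTexture);
        GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba,
                      width, height, 0, PixelFormat.Rgba, PixelType.UnsignedByte, IntPtr.Zero);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);

        // Frame Buffer Object with the texture attached as its colour buffer.
        GL.GenFramebuffers(1, out int fbo);
        Fbo = fbo;
        GL.BindFramebuffer(FramebufferTarget.Framebuffer, Fbo);
        GL.FramebufferTexture2D(FramebufferTarget.Framebuffer, FramebufferAttachment.ColorAttachment0,
                                TextureTarget.Texture2D, ColourTexture, 0);

        // Back to the display frame buffer for normal rendering.
        GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
    }
}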

Bottom-Up

I have spent considerable time optimising the simulation to maximise the overall frame rate. Much of this has been achieved by streamlining the shader structure to avoid dynamic branching, specifically by avoiding “IF” statements in favour of step functions, and by moving as many calculations as possible to the vertex shader. The fragment lighting calculations are easily the most intensive part of the simulation, and limiting the lights to a maximum of 3 per fragment has also helped greatly.

With a simulation such as this, there is always something that could be improved, tweaked, optimised or added. Suffice to say, however, I am very satisfied with the quality of the finished product, which has more than surpassed my initial expectations, and I feel I have learned very useful and contemporary skills that will be essential for the future. Perhaps most importantly, I have thoroughly enjoyed the assignment.

I’ll get a video of it in motion uploaded at some point. I’m currently looking at improving my portals a little, potentially by using an asymmetric frustum.

Sphere of Doom

The “dumbing down” of the games industry

Technology has moved on in the games industry, that’s for certain. Hardware, programming languages and business processes have all improved, I’m sure many would agree, but does the N-fold increase in technology also translate one-to-one into game play and design?

I’ve been thinking a lot about that question, and I’d first like to set some context by going back to a time before PC gaming was conceived, or even before the first 90’s-era consoles were around to change the demographic of the average games consumer forever. The days of the Commodore Amiga, in fact, are what I want to go back to: an era that few under the age of 25 will have experienced during its peak. The Amiga, I’m confident in saying, was massively ahead of its time in terms of hardware and gaming innovation, and not just by a little. Built on top of the great success of its precursor, the Commodore 64, it’s perhaps unsurprising that the system has such a mythical “stuff of dreams” status now; did it really happen or was it just my imagination?

A Past Era:

Launched in 1985 (Amiga 1000), specs-wise it featured an 8-bit, 4-channel stereo sound chip, CPU co-processors (unheard of at the time) and graphics capable of up to 4096 colours at a maximum resolution of 640×512. These specs were incredible, and it took other systems such as the NES or PC DOS gaming over 7 years to get on par with the Amiga. Now, it’s all well and good listing specs, but let’s put that into perspective by comparing with another system of the day:

Shadow of the Beast – Amiga – 1989

Ninja Gaiden 2 – NES – 1990

For reasons like the comparison above, it’s startling to me that so few gamers today have even heard of the Amiga, and strange how the NES and Sega Master System shook the world of gaming forever when they arrived despite being hugely inferior. As a kid in the early 90’s, I looked at the NES and thought… what’s the big deal? I’d been playing better looking and sounding games than that for years! I shrugged my shoulders and went back to playing my dad’s Amiga 500. Looking back, I guess I was lucky to have access to an Amiga and be part of the game hobbyist scene back in the day, when your average person just didn’t play computer games.

Ultimately hardware isn’t everything, and the reason the consoles made such an impact boils down to price and the fact that children could have one in their bedroom (myself included). Gaming wasn’t just for powerful multimedia systems anymore; consoles were relatively cheap systems that every family could afford, and thus they marked the final death knell of the Amiga platform by the mid 90’s. Commodore had squandered a huge technological advantage for years, and its failure to react to rising competition brought it to its knees. It’s also worth noting that as a games platform the Amiga was massively successful in the UK and across Europe, but less successful in the US, primarily due to a larger interest there in Japanese arcade gaming culture rather than home computing. Thus the majority of Amiga games (of which there are literally thousands) were made in Europe, and in fact the UK pioneered much of the games programming advances of the age that led to some greatly successful games. British studios like Sensible Software and the Bitmap Brothers, and publishers like Psygnosis, are legendary, and we owe them a lot for what they achieved back in the day, much of which is taken for granted now and forgotten as the fast-moving games industry moves ever on like an enraged bull, never stopping to look back at lessons already learned decades ago.

Chaos Engine – Amiga – Bitmap Brothers – Subtle complexities to a simple game

The Stifling of Innovation and Creativity:

To the topic at hand and the question I started the article with: has game play and design regressed since those days, and if so, why? Bluntly and unequivocally, yes, in my opinion, but the why of it will take some explanation. To understand why, you have to look into the past of gaming, hence my context on the Amiga above; it’s unavoidable and not simply nostalgic musing. It’s the logical thing to do when analyzing something that has been great in the past and has become less great over time. As admitted, graphically things have improved, but at the root of the problem is something that has caused a stifling of innovation, leading to regurgitation of the same copy-cat game over and over with different artwork for years on end. The end of the 90’s was perhaps the last truly great period of games innovation and creative freedom for professional games developers. You only have to look at the quality titles released on the PC between ’95 and ’99 to realise this.

I’ve researched various articles and read interviews featuring leading people who worked in the industry’s earlier days, and you see similarities in how they view the industry and how it has changed for developers. The core of it seems to be the refusal of the increasingly powerful publishers to fund games that are not a 100% safe bet (Call of Duty, Halo etc.), and this has led to a massive drop in innovation that is only now perhaps being turned around by the injection of new creative blood from the Indie developer scene. Fueling the increasingly tight and controlling grip of publishers are the increasingly vast sums of money that the industry now generates. Many people ARE aware of the lack of innovation but perhaps feel that there are just no ideas left? Well, there are plenty of ideas around; the problem is that no large publisher will touch them unless they’re proven, and that’s the crux of it.

Populous 2 – Amiga – Bullfrog

John Hare, a founder of Sensible Software (one of the biggest and most successful games companies of the 80’s and early 90’s), gave a frank and interesting interview on YouTube in which he discusses how, during those days, publishers were happy to have talented people on board and pretty much left you to make what you were passionate about, encouraging you to push your creativity. It’s not surprising, then, that if you were ever motivated to go back and play Amiga games now and get past the aging visuals, you’d find a myriad of game genres, some still undefinable today, such was the creative freedom back then. This issue of publishers forcing developers to copy existing games, adding just a new paint job, is central to what is holding back the games industry in my opinion. Yes, there’s Kickstarter and Steam Greenlight, and they are all well and good, but I feel that the large publishers need a dramatic culture change if we are ever truly going to return to a golden age of innovation in game play concepts, design and execution. Perhaps the Indie scene will be the catalyst that pushes the publishers to change and allow more freedom to professional studios?

While the Amiga had its day, it’s fair to say that it was a very 2D-orientated platform, and with the coming of 3D and its dominance in professional studios, it’s not surprising that small teams of maybe 3 or 4 people can no longer produce the graphical standard expected of modern AAA titles, whose publishers require dozens of developers and artists and millions invested to produce some of the photo-realistic wizardry modern shelf titles feature. But are the incredible graphics and animation a fair trade for the disadvantages they bring?

Level design is something that has most certainly suffered from the introduction of the vastly detailed environments now expected in any FPS game. It’s a simple matter of complexity: the more you introduce into a scene, the longer it takes to produce, and the longer it takes, the less time you have to craft complicated and intelligent level design. Thus many “on rails” shooters are just that: a monorail ride with the occasional dead end to “confuse” the player, following satellite-navigation waypoints that show up on your automap, even if the game is set in a medieval fantasy universe *cough* Skyrim.

Personally speaking, photo-real graphics are not a fair trade, and ultimately it’s the game play that keeps you playing a game long after you’ve become desensitized to the pretty visuals. Many hugely successful Indie titles have shown this; surely it’s time for the big AAA studios and publishers to say “let’s strip down the cluttered visual complexity, take a risk and focus on game play”. Wouldn’t that be something? That, and actually playing games rather than spending 30% of your time watching dialogue cut scenes. At times I think games have forgotten their roots in the arcade and have borrowed far too heavily from Hollywood.

A change in audience & social gaming:

Another key factor in the evolution of the games industry is tied to the evolution of its audience. Back in the hobbyist days of gaming, a period I’d broadly class as 1980–1999, most people who sat indoors playing video games were looked at a bit strangely. They were geeks, nerds, predominantly male, and it most certainly wasn’t a cool thing to do. They were probably above average at school, and I’d be so bold as to say statistically more intelligent, or at least intrigued by things they didn’t understand. This would manifest itself in such a way that if you presented a challenging game to a geek, they would be much more likely to try to figure it out and spend time overcoming its complexities, like a piece of homework or a maths question. A less motivated individual with less intrigue would put the game down, upset about it being too hard, and never play it again. In a nutshell, the audience back then was more mature and forgiving about games, and that allowed developers a degree of freedom to really go to town on sophisticated game play elements that would take time to master and learn, but ultimately paid off long-term over simple repetitive games.

Now, as most are aware, games nowadays are on the whole streamlined and simplified for the new average audience demographic, who is not a geek, a nerd or in fact *shock* even male. Social gaming has brought women into the gaming consumer audience, and rightly so: women should be part of it. Men too have lapped up the new social gaming phenomenon, but irrespective of gender, which is irrelevant, the key point is that the “nerd gamer” is no longer the average demographic, and thus games are now effectively being aimed at less patient, casually orientated “non-gamers”. Social games are not games in their truest, purest sense; they are not escapism, or adrenaline-pumping, or a visual feast, or inspiring. They are simply a feedback-response stimulus loop that passes time for the bored individual: engineered game play featuring staggeringly simple repetitive tasks with a carrot-style reward at the end. Real games ARE more than that, aren’t they? I think so.

Conclusion:

The whole evolution of the industry is a double-edged sword. It’s not all bad, certainly: there’s never been an easier time to get into the games industry, and there are certainly a lot more jobs around with better pay than there used to be. However, along with the vast sums of money has come the bureaucracy that is rife within what is essentially a creative industry, and there are startling parallels with the movie industry. As with games, the increasingly powerful few have begun to control too much of what directors make, and the many unneeded remake movies are effectively synonymous with the copy-cat games made today. But I won’t lay the blame solely on publishers. John Hare mentioned something regarding the fact that the industry is saturated with content, most of it not good or of a high enough quality. This waters down the expectation of what a good game actually is, and with more and more game developers coming into the mix this could spiral further. His solution? Fewer developers and designers, held to a higher standard. Is that the answer? I’m not sure, but poor games will in turn inspire more poor games; it’s a vicious circle that we must break, and ultimately, in my opinion, it should start from the top AAA studios and work its way down, not from the bottom up.

It’s a topic I feel passionate about and there are no easy answers, but that’s my take on it: an opinion from someone who has played far too many games over the past 29 years and hopes to influence the games industry in some way (even if just a nano) by making games myself. I hope that in time, developer creativity will flow however and wherever it wants, and only our imagination will limit where games can take us.

Lemmings – Amiga – DMA Design