CUDA Ray Tracer – Dissertation Project

Featured

After on-and-off work for a year, and many thousand words later, my final year BSc dissertation project and report were completed. Can a ray tracer ever be truly 'complete'? This post is a brief description and summary of my project.

A download of my full dissertation report can be found below, as well as a few renderings from my prototypes:

Report:


Prototype Renderings:

From a personal point of view, the project was an important one. It was a period where I gained a heightened interest in graphics programming, an understanding of the principles of computer graphics and the mathematics involved, and also the creative satisfaction that comes from it. When creating realistic virtual graphics from essentially nothing but code, maths and a display, it's very easy to gloss over the 'magic' of it all, especially once you appreciate the complexity of how we actually perceive the Universe and the shortcuts computers must take to convincingly mimic our brain's visual perception of natural phenomena.

 


A Bit of Biology and Philosophy:

The modern computer, when you think of it, is really just a primitive extension of our own bodies: simple enough that we can manipulate, manage and understand it, with much greater control and predictability than our biology. Computers allow us to achieve things we could not otherwise do, and many of the components inside one carry out very similar roles to organs found within us. Of course we can think of the CPU as a brain, but what else? Going into more detail, the GPU could be seen as a specialised part of the brain engineered to handle visual computation, just as our brain has its own visual cortex. A virtual camera in a rendering program replicates part of our eye, defining an aperture or lens through which to calculate rays of light, and likewise an 'image plane' positioned in front of the camera carries out essentially the same function as our retina, but using pixels to make up the visual image of what we see.

When you understand the detailed steps required to render something in 3D, you realise that we are essentially trying to recreate our own little simplified universe. It's a pretty profound concept that, taken much further, manifests itself in popular science fiction such as The Matrix. After all, is mathematics not simply the 'code' of our Universe? It's perhaps not as silly as it sounds when you consider game developers creating virtual worlds, with graphics programming as an essential component, and just how real and immersive those worlds are starting to become.

So What Is Ray Tracing?

Ray Tracing

Of all the popular rendering techniques, it's ray tracing that perhaps stands out the most in respect to my comments above. We all know roughly how and why we see: light rays shine from a light source such as our Sun, travel millions of miles to reach us, and out of the infinite number of rays, the tiniest percentage may find its way directly into our eye. This could be from looking directly at the Sun (not recommended!), or from scattered or reflected light that has hit a surface and found itself on a collision course with our eye.
This is fundamentally close to how ray tracing works, but with important differences. If a computer had to calculate the trajectory of every possible ray fired out from a light source, it would be impossible with modern hardware; there are just too many potential rays, of which only an infinitesimally small number would ever find their way into the camera (eye) of the scene, and it's only these rays we are interested in anyway. Instead, in what is referred to as 'backwards ray tracing', rays are fired from the camera (eye) into the scene and traced as they are reflected, refracted or simply absorbed by whatever material they hit. We then only have to fire a ray from the camera for each pixel in the image, which is still potentially a considerable number (1920×1080 = 2,073,600 primary rays), and that's without counting all the secondary rays as light scatters throughout the scene, but at least this reduced number is feasible.
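To make the per-pixel mapping concrete, here is a minimal C++ sketch of generating a primary ray for one pixel from a simple pinhole camera. The types and names (Vec3, Ray, the fixed field of view) are illustrative rather than taken from my prototype code:

```cpp
// Minimal sketch: one primary ray per pixel from a simple pinhole camera.
// Names (Vec3, Ray, fovDegrees etc.) are illustrative, not from the report.
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

struct Ray { Vec3 origin, dir; };

// Fire one ray through the centre of pixel (px, py) on a width x height image plane.
Ray primaryRay(int px, int py, int width, int height, float fovDegrees)
{
    float aspect = float(width) / float(height);
    float scale  = std::tan(fovDegrees * 0.5f * 3.14159265f / 180.0f);

    // Map pixel coordinates to [-1, 1] on the image plane.
    float x = (2.0f * (px + 0.5f) / width - 1.0f) * aspect * scale;
    float y = (1.0f - 2.0f * (py + 0.5f) / height) * scale;

    // Camera sits at the origin looking down -z; the image plane is at z = -1.
    return { {0.0f, 0.0f, 0.0f}, normalize({x, y, -1.0f}) };
}
```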

Still, it is ray tracing's close resemblance to how light interacts with us in the real world that makes it a very elegant and simple algorithm for rendering images, allowing for what is known as 'physically based rendering', where light is simulated to create realistic-looking scenes with mathematically accurate shadows, caustics and more advanced features such as 'global illumination'; something that other, faster and more common rendering techniques like rasterization (pipeline-based rendering) cannot do.

Illumination and Shading:

Phong shading

The main job of firing rays into a scene in the first place is to determine what colour each pixel in our image should be. This is found by looking at what a ray hits when fired into the 3D scene. Put simply, if it hits a red sphere, the pixel is set to red. We can define the material information for every object in the scene in a similar fashion to how we know in the real world that a matt yellow box reflects light. Technically, the box is yellow because it reflects yellow light, and it is matt (not shiny) because it has a microscopically uneven (diffuse) surface that scatters the light evenly away from it. Compare this to light hitting a smooth (specular) surface: most of the light bounces off in the same direction and the surface appears shiny to our eyes. Clearly, for computer graphics, we are not going to model a surface material in such microscopic detail just to define whether it is rough or smooth, but we can cheat using a popular and effective local illumination model such as Phong. Essentially, we take the 'normal' of the surface, the directions of our light source and camera, and some vector maths to put it all together and calculate the colour of the surface based on its material and angle, creating a smoothly shaded object rather than a 'flat' colour.
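As a rough illustration of the idea (not the exact code from my prototypes), a Phong-style shade for a single point light combines an ambient term, a diffuse term from the surface normal and light direction, and a specular term from the reflected light and view direction:

```cpp
// Sketch of the Phong local illumination model for a single point light.
// Vectors are assumed normalized; material/light names are illustrative.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  operator*(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
Vec3  operator-(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3  operator+(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

// n: surface normal, l: direction to light, v: direction to camera.
Vec3 phong(Vec3 n, Vec3 l, Vec3 v, Vec3 diffuseColour, Vec3 specularColour, float shininess)
{
    float nDotL = std::max(dot(n, l), 0.0f);          // diffuse (Lambert) term
    Vec3  r     = n * (2.0f * nDotL) - l;             // reflection of the light about the normal
    float spec  = std::pow(std::max(dot(r, v), 0.0f), shininess);

    Vec3 ambient = diffuseColour * 0.1f;              // small constant ambient term
    return ambient + diffuseColour * nDotL + specularColour * spec;
}
```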

Intersections, Distance Functions and Ray Marching:

Implicit Functions

So we know why we need to fire the rays, but how do we know a ray has hit a surface? There are a few different ways this can be done, depending on the complexity of the geometry you're trying to render. Ray intersections with simple shapes such as planes or spheres can be calculated precisely using linear and quadratic equations respectively. Additionally, for complex explicit 3D models made from triangle meshes, linear algebra and vector maths can be used to compute the intersections.
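A ray-sphere test, for example, boils down to solving a quadratic for the distance along the ray. The sketch below assumes a normalized ray direction and uses illustrative names rather than my actual prototype code:

```cpp
// Sketch: analytic ray-sphere intersection by solving |o + t*d - c|^2 = r^2
// for t. Returns the nearest positive hit distance, or false on a miss.
#include <cmath>

struct Vec3 { float x, y, z; };
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

bool intersectSphere(Vec3 o, Vec3 d, Vec3 centre, float radius, float& tHit)
{
    Vec3  oc = o - centre;
    float b  = 2.0f * dot(oc, d);
    float c  = dot(oc, oc) - radius * radius;
    float discriminant = b * b - 4.0f * c;            // a == 1 because d is normalized
    if (discriminant < 0.0f) return false;            // ray misses the sphere

    float sqrtD = std::sqrt(discriminant);
    float t0 = (-b - sqrtD) * 0.5f;                   // nearer root first
    float t1 = (-b + sqrtD) * 0.5f;
    tHit = (t0 > 0.0f) ? t0 : t1;                     // prefer the closest hit in front of the ray
    return tHit > 0.0f;
}
```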

Another technique has been gaining popularity in recent years, despite having been around for quite some time in academic circles. Rendering complex implicit geometry using 'distance functions', with nothing but a pixel shader on your GPU, as shown on websites like Shadertoy, has popularised a subset of ray tracing called 'ray marching', requiring no 3D mesh models, vertices or even textures to produce startlingly realistic real-time 3D renderings. It is, in fact, this very freedom from mesh constraints that stands out when you observe the complex, organic and smooth geometry the technique makes possible. Ray marching allows you to do things you simply cannot do with an explicit mesh, such as blending surfaces seamlessly together, akin to sticking two lumps of clay together to form a more complicated object. Endless repetition of objects throughout a scene, at little extra cost, using simple modulus maths is another nifty trick, allowing for infinite scenes. By manipulating the surface positions along cast rays, you can effectively transform your objects: twist, contort and even animate; it's all good stuff.
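The core loop is surprisingly small. The hedged sketch below shows the basic 'sphere tracing' idea with a toy distance function (a unit sphere repeated with a modulus trick); the constants and names are illustrative, not lifted from the dissertation prototypes:

```cpp
// Sketch of sphere tracing (ray marching) against a signed distance function.
// The scene is a single sphere repeated infinitely in x/z via a modulus trick.
#include <cmath>

struct Vec3 { float x, y, z; };
float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }
Vec3  operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3  operator*(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }

// Distance from point p to the nearest surface: a unit sphere repeated every 4 units.
float sceneSDF(Vec3 p)
{
    float cell = 4.0f;
    Vec3 q = { std::fmod(std::fabs(p.x), cell) - cell * 0.5f,
               p.y,
               std::fmod(std::fabs(p.z), cell) - cell * 0.5f };
    return length(q) - 1.0f;
}

// March along the ray, stepping by the distance to the nearest surface each time.
bool rayMarch(Vec3 origin, Vec3 dir, float& tHit)
{
    float t = 0.0f;
    for (int i = 0; i < 128; ++i)
    {
        float d = sceneSDF(origin + dir * t);
        if (d < 0.001f) { tHit = t; return true; }    // close enough: call it a hit
        t += d;                                       // safe step: nothing is closer than d
        if (t > 100.0f) break;                        // give up past the far limit
    }
    return false;
}
```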

The Dissertation Project:

My development project comprised two parts: a prototype phase to create a ray tracer using GPGPU techniques, and a hefty report detailing the theory, implementation and outcomes. For those unfamiliar, general-purpose computing on graphics processing units (GPGPU) is an area of programming aimed at using the specialised hardware found in GPUs to perform arithmetic tasks normally carried out by the CPU, and it is widely used in supercomputing. Though a single CPU core is much more powerful than any individual processor on a GPU, GPUs make up for it in sheer numbers, meaning they excel and outperform CPUs when computing simple, highly parallel tasks. Ray tracing is one such highly parallel candidate that is well suited to GPGPU techniques, and for my dissertation I was tasked with using NVIDIA's GPGPU framework, CUDA, to create an offline ray tracer from scratch, using no existing graphics API. Offline rendering means not real-time; it is clearly unsuitable for games, yet it is commonly used in the 3D graphics industry for big-budget animations like those by Pixar and DreamWorks, with each frame individually rendered to ultra-high quality, sometimes taking in excess of 24 hours per frame.
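The mapping onto CUDA is conceptually simple: one thread per pixel. The snippet below is a minimal sketch of that idea, not the actual prototype kernel; traceRay() is a placeholder for whichever intersection or marching routine is in use, and the launch dimensions are just an example:

```cpp
// Minimal CUDA sketch: one thread traces one primary ray / pixel.
#include <cuda_runtime.h>

// Placeholder for the actual per-pixel ray tracing / marching work.
__device__ float3 traceRay(int x, int y, int width, int height)
{
    return make_float3(x / (float)width, y / (float)height, 0.2f);  // simple gradient stand-in
}

__global__ void renderKernel(uchar4* framebuffer, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;            // guard threads past the image edge

    float3 colour = traceRay(x, y, width, height);
    framebuffer[y * width + x] = make_uchar4((unsigned char)(colour.x * 255.0f),
                                             (unsigned char)(colour.y * 255.0f),
                                             (unsigned char)(colour.z * 255.0f), 255);
}

// Host side: enough 16x16 blocks to cover a 1920x1080 image (~2 million threads).
// dim3 block(16, 16);
// dim3 grid((1920 + block.x - 1) / block.x, (1080 + block.y - 1) / block.y);
// renderKernel<<<grid, block>>>(devFramebuffer, 1920, 1080);
```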

In the end I produced four different ray tracing prototypes for comparison, incorporating the previously mentioned techniques:

  • Prototype 1 – running purely on a single CPU thread, using simple implicit intersections of spheres and planes.
  • Prototype 2 – the same, but implemented as a single CUDA kernel running purely on the GPU across millions of threads.
  • Prototype 3 – a CPU ray marcher using distance functions to render more complex implicit geometry.
  • Prototype 4 – the same as 3, but implemented using CUDA.

My aim for the project was to assess GPGPU performance and the rendering qualities of the ray marching technique, the findings of which can be found in the report.

I knew when I picked this project that I was not choosing an easy topic by any stretch. A great thing I can take away from it is the extensive research and planning needed to simultaneously implement many difficult concepts I had no prior knowledge of, while still managing to produce a cohesive project and fully working prototypes, achieving an 88% mark for my efforts, which I am very pleased with. As expected, with hindsight there are things I would do differently if I repeated it, but nothing too major, and really, it's all part of the learning process.

Ray tracing, ray marching, GPGPU, CUDA, distance functions and implicit geometry were all concepts I had to pick up and learn. I bought some books, but in the end, research on the internet in the form of tutorials, blogs, academic papers and lectures proved more beneficial. Sometimes it takes a certain way of presenting the information for your brain to 'click' with a principle, and all of us are different. The Internet is a treasure trove in this regard: if you spend the time, you can usually find some explanation that suits your grey matter, and failing that, re-reading it a million times can sometimes help!

Future Plans:

On the back of this, I will be continuing the subject into my masters degree and will likely pursue it further in my masters dissertation. I am already busy at work on a real-time implicit renderer with UI functionality running in DirectX 11 (a couple of early screenshots above). Additionally, I'd love to get a chance to contribute to a research paper on the subject, but we'll see.

I plan to make some easy-to-follow tutorials on implementing ray tracing and ray marching for this website at some point, when I get the chance. Hopefully they could help out other students or anyone else wanting to learn the aforementioned topics. I know first hand, and from friends, that it can be frustrating at times: although there is plenty of theory out there, there is comparatively little information on actual implementation details, compared to, say, pipeline-based rendering.

Dungeon Master – An Iconic RPG

Box Art

Aged probably no more than 6, I looked on in excitement and fear at the Amiga monitor. My parents were playing Dungeon Master again: its labyrinthine dungeons, fiendish puzzles, stunning graphics (for the time) and always death, waiting around the next corner.

Dungeon Master was a pivotal game of my childhood; it taught me how real and immersive games could actually get, despite the computer limitations of the day. Using a 2D perspective trick, it could render a seemingly 3D environment as if seen from the eyes of the player. This of course was an illusion, but it was done so effectively that it stood out back then with hugely impressive visuals. It wasn't just nice to look at though; featuring groundbreaking level design and puzzle concepts, and being brutally difficult but still rewarding, there was something about it that left a lasting impression on you. It was a little like the Dark Souls of its day.

Although there had been other well known 'dungeon crawler' games (as they came to be known) like Bard's Tale and Wizardry, it was DM that really brought together the best attributes of the genre, distilling it into what is, in my opinion, the best of the lot, even to this day. It's no coincidence that Almost Human's 'Legend of Grimrock' in 2012 cited Dungeon Master as a large inspiration, something that is clearly evident having finished Grimrock and noticed its many 'tips of the hat' to DM's puzzles, mechanics and creatures. All those puzzles of putting an item on a pressure plate to close a pit, or placing a torch in a wall sconce to open a secret door, hearken back to this era.

DM was in fact the best-selling title of all time for the Atari ST, whose version differed only mildly from that of the Amiga, with the latter featuring improved 3D sound effects; most noticeably, you can hear creatures moving around, to unnerving effect.

Using the free Amiga emulator WinUAE over the past week, I have finally finished Dungeon Master after all these years. I loved every second of it, scarily so, because I kept telling myself throughout, "why am I playing a 27 year old game in this day and age?". Irrespective of the answer, I had more fun playing it than most state-of-the-art games I have played recently! Why? Well, many reasons; the challenge and immersion are two, but ultimately, I guess I'm a pretty hardcore gamer and there's just something about playing old school classic RPGs, a charm or ambience if you like, akin to rolling that dice in a pen and paper D&D game. I'm sure many can empathise with that.

A pack of skeletons.

Dungeon Master does have a story and plot, though it is sparse and not a driving force for the progression of the game. It revolves around having to descend into the depths of the mysterious dungeon and find an artifact known as the 'Firestaff', as tasked by your master 'Lord Order'. Ultimately, if your party survives the horrors long enough, you come across writings detailing the evils that will occur should you complete this quest, and instead come to realise that you must descend to the deepest depths of the dungeon, combine the staff with the 'power gem' and defeat 'Lord Chaos' (think Sauron), restoring 'Balance' to the world.

You start the journey in the 'Hall of Champions', a place at the start of the dungeon where you can look upon windows on the walls and see magically suspended heroes, whom you can either 'resurrect' or 'reincarnate' to join your party, up to a total of four members. Resurrection ensures the character maintains their identity, combat skills and experience, whereas reincarnation allows you to rename the character, forfeiting their skill set but gifting them enhanced physical attributes so as to speed up learning and let you shape the character as you see fit. Ultimately, the tried and tested composition of two fighters at the front with a priest and a wizard at the back worked wonders for my play-through, though having four 'jack-of-all-trades' is viable too. As in 2012's Grimrock, the party moves through the dungeon in first-person view in a 2-by-2 formation, meaning that only the two members of your party at the front can reach enemies with melee weapons, with the back two having to rely on ranged and throwing weapons and spell casting. Consequently, only the front two will take damage from the front, and if a 'baddie' creeps up behind you, your squishy casters won't be very happy. Part of the game's meta strategy involves being able to change your players around in the formation at any time, e.g. if your front fighters get wounded, you can swap them out with the back.

Character Inventory.

The predominant theme of the game is undoubtedly 'survival'. Staying alive is really, really not an easy thing unless you have learned the tricks and techniques generally gained after many horrible deaths, whether to the jaws of giant worms, starvation, or plummeting down a pit and arriving several levels lower than you could possibly hope to deal with. The only items you have at your disposal are those you find along the way, and that way is strewn with illusory walls, guarded chests, locked doors and secret passages that, without consulting a guide or a printed map, you have little chance of ever finding yourself (hand holding, pfff, who needs it?). Even a basic ability we all take for granted in games today, such as being able to SEE, is a premeditated game mechanic in Dungeon Master, where the dungeons are pitch black without a light source and torches are scarce, making a wizard or others with the skill to cast 'light' spells essential.

One of the most memorable mechanics, which is still pretty innovative today, is the magic system. To the right side of the screen are a bunch of runic symbols. The boxed game's manual documented an alphabet of these symbols describing their purpose. To cast a spell you first choose a rune representing its 'power': would you cast a short duration spell or a potent offensive spell, for instance? Then you sequentially choose the spell's 'elemental influence', 'form' and 'alignment'. It all sounds rather complicated, but when you know off by heart that a weak fireball is LO FUL IR and a potent healing potion is PAL VI, it starts to become second nature, especially when you realise you can drop the power level if your priest is low on mana and make a weaker healing potion like LO VI instead. In combat, you would be expected to click these runes in the correct order in the heat of the moment; you soon realised that if you didn't 'get gud' and memorise them, then you simply got 'dead'. In a funny kind of way, it really did feel like you were learning magic and having to go through the motions to learn and cast the spells your party depended on, and I love that.

These runes are used to cast all of the game's many Wizard and Priest spells.

Other mechanics such as food and drink meters for each character hang a constant dread over your party for the entirety of the game, since you never know when your next meal is coming or when you'll next see a fountain to refill your water skins. Realising you're lost deep somewhere with no water left and down to your last couple of hunks of meat is pretty terrifying. Luckily though, some of the critters are edible if you can kill them: 'Screamer Slice', 'Worm Round' or 'Dragon Steak' anyone? Yum!

A water fountain, always a welcome sight!

Having finally finished the game after all these years, I felt an immense sense of accomplishment, because it's a game I grew up thinking was simply too tough for me to contend with, and to be fair, me being less than 10, it probably was! The end showdown with Lord Chaos is no simple matter. Once you have collected all the 'Ra keys', broken into the vault of the Firestaff, defeated its Stone Golem guardians and retrieved it, you then have to descend to the last level, defeat a wingless dragon and free the power gem with a spell you had better have learned along the way or you're buggered! (*cough* Google). You then must combine the staff with the gem, creating an ultimate weapon, then go back up a level and find Lord Chaos. Using the staff's power you must surround him with 'Fluxcages' and finally 'Fuse' him to restore 'Lord Balance' and beat the game. If that sounds straightforward, it really isn't, especially considering that even if you do figure this all out on your own from a subtle hint in a very well hidden scroll, you have to do it all while being attacked by demons, black flame elementals and Chaos himself flinging fireballs at your face!

Defeating Lord Chaos and restoring Balance.

I'm currently playing through its sequel 'Chaos Strikes Back' (yes… it really IS called that) and it is unbelievably hard; Dark Souls has nothing, not a bean, compared to this in terms of difficulty. CSB will eat you alive and then spit out your regurgitated remains for a second helping. Firstly, the sequel starts at the same difficulty level that DM ends at. You can import your characters, which you may think will help, and sure enough it does a little, but little prepares you for the first 10 seconds of the game, which pretty much go like this:

“Ok, let’s go, hmm it’s pitch black…where am I? My party is naked with no weapons…I can hear things moving around me…let me cast a light spell. That’s better! SHIT there’s four armoured worms in here with me and no exits…no wait, SIX worms…EIGHT….I’m surrounded…can’t move….DEAD.”

That’s your first taste of Chaos Strikes Back, shoved into an infested pit of worms with no weapons and no obvious way out, but like Dark Souls, I still love it.

As reported in 2012 in a Rock, Paper, Shotgun article here, one chap amazingly spent six months, eight hours a day of his own time, programming 120,000 lines of code to port the Atari ST version, creating a C++ executable that runs today on any modern PC. It can be found free here: Chaos Strikes Back for Windows (and Linux, MacOS X, Pocket PC)

For those used to emulators, by getting hold of the Amiga .adf file (basically an image of the game disk), you can run it in WinUAE (my personal preference, for the better sound), but ultimately, only ex-Amiga junkies would likely do this over the ported PC version :D.

Dungeon Master is a truly iconic game that has undoubtedly influenced many great games, not just across the dungeon-crawler and RPG genres, like the classic 'Eye of the Beholder' series or more recently 'Legend of Grimrock', but also modern popular AAA titles such as the Elder Scrolls series. It's a testament to its influence that the game still has its own updated encyclopedia site: http://dmweb.free.fr/ and even an online message forum with an active and thriving community: http://www.dungeon-master.com/forum/.

I would encourage anyone who is curious about classic RPGs, or interested in why modern games are the way they are and all that jazz, to check out old titles like Dungeon Master, because although the graphics leave much to be desired by today's standards, the gameplay is still as good as it ever was. There is still much to wonder and marvel at in both the design and execution of this old gem.

My party in the hall of fame after beating the game!

OpenGL Cross-platform PC/PSP Game Coursework

Last semester, as part of the Advanced Graphics module of my CS degree at Hull University, we were tasked with a group project to produce a cross-platform OpenGL mini-game for the PC and Sony PSP based on a specification. The game premise was to move around a 3D 'maze' consisting of four rooms and connecting corridors, avoiding a patrolling AI that would shoot you if you were within its line of sight. The objective was to collect three keys to activate a portal to escape and beat the game.

The groups of four members were selected completely at random. As per usual, group coursework assignments are particularly difficult due to the extra concerns of motivating members and assigning work, and by year 3 of university you get a good idea of the best way of operating within them to secure good grades. I went in with the mindset of doing as much work as possible after we assigned tasks: hopefully each member would carry out their allocated work, and if not, I'd just go ahead and do it, no fuss. Luckily one chap in my group was a friend and he did an excellent job coding the AI, mini-map and sound, while I worked on coding the geometry, camera, lighting and player functionality.

Mini-maze model

Static environment lighting

Cross-Platform Limitations:

Having worked with OpenGL and shaders last year for my 3D 'The Column' project, it was somewhat limiting to realise that the PSP didn't support them and that fragment-based lighting was a no-go. With one requirement of the game being a torchlight effect that illuminated the geometry, PSP compatibility meant that vertex-based lighting would need to be implemented, and that meant tessellating primitives to prevent the lighting from looking very blocky and… well, very 90's. Luckily the PSP did at least have support for VBOs (Vertex Buffer Objects), which meant each tessellated model could effectively be loaded onto the graphics card only once to improve performance.

Unified Code:

An interesting aspect of this project was the required consideration for a consolidated code-base that, where possible, allowed shared functionality for both the PC and PSP platforms, i.e. limiting how much platform-specific code was used. This was essential since the game would be a single C++ solution for both platforms.

I designed the code structure based around principles Darren McKie (the course lecturer) described, and produced the following class diagram that reflects the final structure:

Unified Cross-platform Class Diagram

The majority of the game code resides in 'Common Code' classes that are instantiated by each platform's 'Game' object. Certain code, such as API rendering calls, was kept platform-specific but made use of the common classes where necessary. A particularly nice way of ensuring the correct platform-specific object was instantiated was to use '#ifdef' / '#ifndef' preprocessor directives, handled by a 'ResourceManager' class.
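As a hedged sketch of what that looks like in practice (the actual class and macro names in our solution may have differed):

```cpp
// Illustrative sketch of the preprocessor-based platform split; the class and
// macro names here are hypothetical, not lifted from the coursework code.
#ifdef PSP
    #include "PSPRenderer.h"
    typedef PSPRenderer PlatformRenderer;   // PSP-specific rendering calls live here
#else
    #include "PCRenderer.h"
    typedef PCRenderer PlatformRenderer;    // desktop OpenGL path
#endif

// Common code only ever talks to PlatformRenderer, so the shared game classes
// compile unchanged for both targets.
PlatformRenderer* createRenderer()
{
    return new PlatformRenderer();
}
```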

As mentioned earlier, per-vertex lighting had to be implemented for PSP compatibility. A primitive with a low number of vertices would thus result in very blocky lighting. To prevent this I created a tessellation function that subdivided each primitive's vertices into many more triangles. I played around with the tessellation depth to find how many iterations of subdivision could be achieved before inducing lag, and was very happy with the lighting result considering there is no fragment shader; a given in today's modern pipeline-based rendering.
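The subdivision itself is straightforward: split each triangle into four via its edge midpoints and recurse to the chosen depth. A rough sketch of the idea, with illustrative types rather than the coursework's own:

```cpp
// Sketch: recursively split each triangle into four smaller ones via edge
// midpoints, so per-vertex lighting has more samples to interpolate across.
#include <vector>

struct Vec3 { float x, y, z; };
Vec3 midpoint(Vec3 a, Vec3 b) { return { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f }; }

struct Triangle { Vec3 a, b, c; };

void tessellate(const Triangle& t, int depth, std::vector<Triangle>& out)
{
    if (depth == 0) { out.push_back(t); return; }

    Vec3 ab = midpoint(t.a, t.b);
    Vec3 bc = midpoint(t.b, t.c);
    Vec3 ca = midpoint(t.c, t.a);

    // One triangle becomes four; the vertex count grows quickly with depth.
    tessellate({ t.a, ab, ca }, depth - 1, out);
    tessellate({ ab, t.b, bc }, depth - 1, out);
    tessellate({ ca, bc, t.c }, depth - 1, out);
    tessellate({ ab, bc, ca }, depth - 1, out);
}
```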

Active Portal

The PSP implementation proved trickier due to getting to grips with the PSP SDK and having access to very little documentation. However, the game was successfully deployed onto a PSP device and ran with decent performance after compressing the textures and removing geometry tessellation to allow for the PSP's limited memory capacity.

The game was written in C++ and the following libraries and software were used:

  • GXBase OpenGL API
  • Sony PSP SDK
  • OpenAL
  • Visual Studio 2012
  • Paint.NET

Exchange Reports Project Overview

During this summer, in between semesters, I was fortunate enough to get a software development job for a local company just a 10 minute walk from my door. The project was to produce an 'Exchange Reports' system that would provide email messaging statistics exactly to the customer's specification. The system would be automated, so that after reports were designed they would be generated programmatically by a service and emailed to any recipients that had been set up to receive each report. The solution was to comprise three distinct programs, developed along with configuration tools to set up the non-GUI processes in the solution (namely the services).

I have produced the following diagram to demonstrate the solution's processes (click to enlarge):

The design was in place when I started and an existing code-base was also present, but the vast majority of the functionality still needed to be added. It was my first time working professionally as a software engineer, and therefore also my first time getting to grips with existing code made by developers no longer around, and with understanding the solution's technical proposal well enough to deliver exactly what the customer and my employer wanted. I think working in IT professionally for a number of years certainly helped me get into a comfortable stride after the initial information overload of taking on, solo, what was a surprisingly large but beneficial technical project compared to what I had envisioned. Being thrown into the deep end is probably the fastest way to improve, and I feel that above all I have taken a lot from this experience which will prove valuable in the future. I'm very pleased with the outcome and successfully got all the core functionality finished in the time frame that was assigned. I would wholeheartedly encourage students thinking of getting professional experience to go for it, ideally with an established company from which you can learn a great deal. Having experienced developers around to run things by is a great way to improve.

Now onto the technical details. The project was coded in C# and used WinForms for initial testing of processes and later for the configuration programs. I used a set of third-party .NET development tools from 'DevExpress' that proved to be fantastic and a massive boon for anyone wanting to create quick, great-looking UIs with reporting functionality. SQL Server provided the relational database functionality, an experience I found very positive; I very much enjoyed the power of SQL when it came to manipulating data via .NET data tables, data adapters, table joins or just simple direct commands.

Using the diagram as a reference, I'll briefly go through each process in the solution for a) those interested in such things and b) future reference for myself while it's still fresh in my mind, because I'll likely forget much of how the system works after a few months of 3D graphics programming and uni coursework :P.

Exchange Message Logs: 

In Exchange 2010, message tracking logs can be enabled quite simply and provide a wealth of information that can be used for analysis and reporting if so desired. They come in the form of comma-delimited log files that can be opened with a text editor. They have been around a lot of years, and in the past, during IT support work, I have found myself looking at them from time to time to diagnose various issues. This time I'd be using them as the source of data for a whole reporting system. The customer was a large international company and, to give an example, just one of their Exchange systems was producing 40 MB of these messaging logs each day. With these being effectively just text files, that's an awful lot of email data to deal with.

Processing Service: 

The first of the three core components of the solution, the Processing Service, as the name suggests, is an installable Windows Service that resides on a server with access to the Exchange message log files. The service is coded to run daily at a specified time, and its purpose comprises five stages:

1. Connect to the Exchange server and retrieve a list of users from the Global Address List (GAL). This is done using a third-party Outlook library called 'Redemption' that enables this information to be extracted; it is then checked for any changes to existing users and/or any new users. The users are placed in a table on the SQL database server and are used later to provide full name and department information for each email message we store.

2. Next, each Exchange message log is individually parsed, and useful messaging information is extracted and stored in various tables on the database server. Parsed log file names are tracked in the database to prevent reading logs more than once.

3. Any message forwards or replies are identified and tallied up.

4. A separate Summary table on the database is populated with data processed from the previously mentioned message tables. This table is what the reports look at to generate their data. Various calculations are made, the time difference between an email being received and then forwarded or replied to (to estimate response times) being just one example; a whole plethora of fields are populated in this table, far more than could comfortably fit on a single report. Because of this large amount of potentially desirable data, we later allow the user to select which fields they want from the Summary table in the 'Report Manager' if they wish to create a custom report, or, alternatively and more typically, they use predefined database 'Views' created for them based on the customer's specification, which gives them access to only the data they need. Database Views are a really neat feature.

5. The database's messaging tables are scoured for old records beyond a threshold period, which are deleted. This maintenance is essential to prevent table sizes growing too large. The Summary data generated from them is still kept, however, and I added functionality to archive it by serializing the data off and deleting it from the database if required.

Report Manager:

Initially we had thought to utilise DevExpress's 'Data Grid' controls in a custom Forms application, but we decided that the appearance of the reports generated from this was not satisfactory. This turned out to be a good design decision, since we later discovered DevExpress has remarkable reporting controls that allow very powerful design and presentation features, completely overshadowing those of the Data Grids. After migrating some code from the old 'Report Manager' program and spending a day or two researching and familiarising myself with the DevExpress API, I had a great-looking new application that the customer will be using to design and manage the reports.

Report Manager program

The Report Manager allows you to design every aspect of a report through an intuitive drag-and-drop interface. Images and various graphics can also be added to beautify the design, though that wasn't something I did, nor had the time to attempt! The data objects can be arranged as desired, and the 'data source' information for the report is saved along with its design layout via a neat serialization function inherent to the 'XtraReport' object in the DevExpress library, which is then stored in a reports table on the database server for later loading or building. You can also generate the report on the fly and export it into various formats such as PDF, or simply print it. Another neat built-in feature is the ability to issue SQL query commands using a user-friendly filter for non-developers in the report designer, which is then stored along with the layout. Thus the user designing the report has absolute control over the data, i.e. a quick filter on Department being "Customer Services" would return only that related message data, without me needing to code some method to do this manually, as was the case when using the Data Grids.

In the top left you'll see specific icons that provide the necessary plumbing for the database server. 'Save', 'Save As' and 'Load' respectively write the serialized report layout to the database, create a new record with said layout, or load an existing saved report from the database into the designer. Loading is achieved by retrieving the list of report records stored in the reports table and placing it into a Data Grid control on a form, where you can select a report to load or delete. The 'Recipients' button brings up the interface for adding users who want to receive the report by email; this retrieves the user data imported by the Processing Service and populates a control that allows you to search through and select a user, or manually type a name and email address to add a custom recipient. Additionally, upon adding a recipient to the report you must select whether they wish to receive it on a daily, weekly or monthly basis. This information is stored in the aptly named recipient table and relates to the reports via a reportID field.

Report Service:

Nearly there (if you've made it this far, well done); the last piece in the solution is another Windows Service called the 'Report Service'. This program sits and waits to run as per a schedule that can be determined by a configuration app that I'll mention shortly. Like the Processing Service, as part of its logic it needs to check whether it's the right time of day to execute, so the service continuously polls itself every few minutes to see if this is the case. Upon running, it looks to see if it's the right day for daily reports, day of week for weekly reports, or day of month for the (you guessed it) monthly reports. If it is, it runs, grabs the 'joined' data from the reports and recipient tables, and proceeds to build each report and fire them out as PDF email attachments to the associated recipients. It makes a final note of the last time it ran to prevent it repeatedly running on the same valid day.

Configuration Tools:

Two configuration apps were made, one for the Processing Service and one for the Report Service. These two services have no interfaces since they run silently in the background, so I provided an XML settings file and the two apps to store a variety of important data such as SQL connection strings and (encrypted) server authentication details, to expose certain manual debugging options that may need to be executed, and to provide an interface for setting both services' run times and the report delivery schedule.

Screens below (click to enlarge):

So that's the solution start to finish. Depending on time, I'm told it could possibly be turned into a product at some point, which would be great, since other customers could potentially benefit from it too.

The great thing about a creative industry like programming, whether business or games, is that you're ultimately creating a product for someone to use. It's nice to know people somewhere will be getting use and function out of something you have made, and that is just one reason why I've thoroughly enjoyed working on the project. I've learned a lot from my colleagues while working on it and hope to work with them again. You also get a taste of real-life professional development and how it differs in various ways from academic teachings, which, although very logical and sensible, are also idealistic (and rightly so). In the real world, when time is money and you need to turn around projects to sustain the ebb and flow of business, you have to do things in a realistic fashion, which might mean cutting some corners when it comes to programming or software design disciplines. I always try my best to write code as clean as possible, and this was no exception, but ultimately you need to get the project done first and foremost, and it's interesting how that can alter the way software development pans out, with niceties like extensive documentation, 'Use Case' diagrams and robust unit testing potentially falling by the wayside in favour of a speedier short-term turnaround. Larger businesses, I imagine, can afford to manage these extra processes to great effect, but for small teams of developers it's not always realistic, which I can now understand.

Falling In with Fallout


Over the course of the past year I've been working my way through the newer Fallout games, specifically Fallout 3 and New Vegas. I finished Fallout 3 a couple of months back, and when I say "finished" I mean 95%+ of all the content: I completed every expansion pack except for Mothership Zeta, visited nearly every location in the world map (184 out of the 200+, excluding Zeta) and generally lost myself in what is, in my opinion, one of the finest role-playing experiences to be had.

Getting It…:

I didn’t always feel this way about Fallout 3, like many I played it when it first came out and meandered through it for a few hours before losing my way and getting rather bored trawling through endless metro stations. I decided to pick it up and give it another crack several years later and have never looked back since.

I'm not sure whether it's me that changed or not, but this time the game's magical atmosphere enthralled me, and I can happily say I loved every minute of it. I had the dawning realisation that Fallout 3's strength is not in its rather mediocre storyline but in its sandbox, open-world gameplay. This game has that incredibly hard-to-come-by feeling of authenticity that allows it, through your imagination, to create its own believably unique stories from your actions and the decisions you make throughout playing.


An Authentic World:

The authenticity of Fallout 3's world is achieved through a variety of factors, and surprisingly, barely any of them involve actual talking NPC characters. Most of the atmosphere is created through the futuristic 1950s-themed timeline that emanates charm, naivety and an innocence in stark contrast to the brutal and barbaric post-war world.

The world is also sentimental, which on a personal level is quite touching. Imagine if the world had been destroyed and you're part of a generation of survivors who have only known the wasteland of the aftermath; how would the ruins of the past world seem to you? Would the world before Armageddon seem alien to you, or comforting? You see the pre-war world frozen in time as the atomic bombs hit: families huddled in their homes, people going to work, people packing their bags in preparation for the nuclear war but clearly too late. You wander the wasteland of the old world and can't help but be touched by the sentiment created by Bethesda. Pompeii and its destruction at the hands of Vesuvius draw eerie parallels; today you can still wander the ruins of the city and see its citizens frozen in time by the molten ash flow that covered them.

Untold Stories:

If there were anyone who worked on the Fallout games (both Fallout 3 and Obsidian's New Vegas) whose hand I'd like to shake the most, it's those responsible for creating the myriad of untold stories that litter the wasteland; these people, as much as anyone, help forge that authentic world.

Times such as walking into a shack and seeing a skeleton in a bath-tub…with a toaster are moments of genius that will stay with me. It leaves you wondering, who was that person? Why did they resort to suicide? Were they a good person or bad? Your imagination goes into overdrive and it fleshes out the world beautifully.

I wonder what the story behind this guy was?

Just one of many hilarious easter eggs to be found.

Epic.

The Originals:

Now, I'm not going to write an article on Fallout without mentioning the original Fallout games. These of course laid much of what Bethesda built upon when making FO3, and not everyone thinks they went in the right direction. Fallout 1 and 2 are brutal games, and the world is darker and grittier than that portrayed in FO3, that's for sure. New Vegas goes some way to fixing this, being an all-round darker game, but since I'm still playing New Vegas I'll not comment much on it until I've completed it.


I can say that I'm a little ashamed at having not finished the original Fallout games, and I will be fixing that when time permits. Nevertheless, having played Fallout 1 upon its release, I loved it, and its influence over all post-apocalyptic games since is apparent (as, in turn, is the Mad Max influence over the Fallout universe).

Branding:

Much of the character in Fallout stems from the excellent and original branding established mainly in the original games, whether it's Nuka-Cola, RobCo or Vault-Tec, or the imaginative array of narcotics and drugs such as Mentats, Jet, Psycho and Rad-X. They are so shoved in your face, whether through subliminal in-game advertising on posters and artwork or constantly seeing their logos on items and memorabilia, that sometimes I have to pinch myself to realise Nuka-Cola doesn't actually exist and that I can't just go and buy a bottle!

Probably every main brand in Fallout has some part to play in FO3, and that's one reason I love it. Whether it's visiting the Nuka-Cola plant and hacking into long-abandoned employee terminals, or collecting rare Nuka-Cola Quantums for an obsessed fan out in the wasteland, you learn about each brand, its history and what kind of business they really were. It's infectious, and as of right now my phone is proudly sporting a Pip-Boy HUD picture and my desk has a bobblehead on it.

Late game Enclave incinerator troops are pretty tough. Nothing the Alien Blaster can't handle though!

V.A.T.S:

One thing Bethesda really hit the nail on the head with is the remarkable V.A.T.S. system, which integrates first-person real-time combat with a turn-based location targeting system. Quite simply, it works, and works marvellously. With V.A.T.S., combat plays out cinematically, akin to Robin Hood: Prince of Thieves' famous arrow-view camera. It never ceases to be entertaining to watch limbs and heads explode and even eyes pop out! The violence of the originals was left in place because the Fallout universe is brutal and quite rightly doesn't shy away from adult themes. It's no kids' game, and as such it emphasises the brutality of a post-apocalyptic world rife with slavery, raiders, cannibalism and mutated horrors.

Your typical bloody aftermath from V.A.T.S. combat.

Conclusion:

I could likely go on far more about the Fallout games and quite possibly will later on, when I complete New Vegas. Fallout 3 is a hidden gem in my eyes; it received wide acclaim, but perhaps unjustly less so than the Elder Scrolls games like Oblivion and Skyrim. It's likely a topic for a whole new blog post, but Fallout 3 surpasses the post-Morrowind Elder Scrolls games in doing what any good RPG should do: creating a believable, authentic and original world that you can escape into. More so, Fallout does it with a dry wit that doesn't take itself too seriously, something Skyrim most certainly did do.

To end on, below are some of my end-game character stats from FO3. Garviel the wasteland wanderer was godlike by the end, but this certainly didn't detract from the fun; "one-shotting" heads off with a scoped magnum never did get tiresome!


Hypermorph Wins Three Thing Game Competition

So it's been a frantic couple of weeks, with plenty of coursework to do, and last weekend was the much anticipated Three Thing Game competition. For anyone not in the know, this is held each semester at Hull University and challenges teams to come up with a game based around three auctioned words per team. Judges then score based on the game's relevance to the words and the quality/fun of the game. The competition involves a marathon 24-hour programming session to get your game finished on the day. This one was the biggest yet, with 39 teams competing. We really couldn't have asked for better "Things", because a combination of good bidding and luck meant we came out with "Flying", "Tank" and "Bombs". Considering another team got "Teddy Bear", "Deodorant" and "Pop Tart", I think we did OK!

Last year we came second with Shear Carnage, and I can honestly say that this year we really, really wanted to win it. This was evident just from the focus we had, and when the day of the competition came I think I probably left my seat half a dozen times in the whole 24 hours! In hindsight we probably took it far too seriously, and as a result I think it sacrificed a lot of the enjoyment of the competition and resulted in some contention regarding ideas, which seemed inevitable considering vested interests and no one leader within the team. On a personal note, much was learnt regarding team work, and there are aspects of the planning and design process I would do differently next time. Luckily it all turned out worth it in the end, so it's very hard to regret any decisions, but this was by no means a painless endeavour!

Me on the right, Russ in the middle, John on the left. Lee Stott at the back.

So, to the game: Hypermorph is a retro-style side-scrolling shooter that takes me back to my childhood days playing classics such as Xenon 2, R-Type and Menace on the Amiga. Back then the shoot'em'up was a staple video game genre and hugely popular; only now that mobile platforms have taken off is the genre feasible again, because it's the perfect style of game to have a quick blast on when wanting to pass a little bit of time. The thing that's pretty novel in Hypermorph is the ability for the player to switch between two different forms, a spaceship and a hover tank, by simply tapping the screen. We made the game using XNA (C#) for Windows Phone 7 and coded everything ourselves (no third-party libraries).

I produced the art for the game, and managing both the art and a lot of the programming was a challenge in itself on the day, resulting in most of the art being done in the last few hours. I had a good idea in my head of what the game would look like when we were bouncing the initial idea around; my regret, however, was that I didn't produce any concept art sooner to put the rest of the team at ease. For a long time I think we were each left with our own ideas of how the game would look, but once I came up with the first concept drawing for the ship, the team were all in favour, to my relief!

We had decided to make the game quite dark and moody, but with bright weapon and explosion effects to make them really stand out. Additionally, we wanted to make the controls as hands-off as possible. We learned from Shear Carnage that using touch too frequently can obscure a lot of the screen, so we instead went for tilt-based movement for the player and a single touch to morph between tank and spaceship. Importantly, we set it to auto-fire constantly, since you soon realise that in this genre there's never a time you don't want to be firing.

One feature I'm really pleased we put in was the voice effects for power-ups and various other things. It adds a lot to the immersion and, again, really goes back to the genre's roots.

Of course we have plans to get Hypermorph out on both the WP7 and Windows 8 marketplaces ASAP, but uni coursework is currently being prioritised. At the competition were Lee Stott from Microsoft and guys from the MonoGame team. Lee's encouragement was inspiring, and I'd also like to thank him and Microsoft for providing the cool prizes. The MonoGame guys were brilliant, and we spent a fair amount of time chatting with them about getting our games ported to the various platforms; they even ported Shear Carnage and my Robocleaner game for us to show us how easy it is (albeit with some coding required to get them ready for the marketplace)!

Ultimately we are going to want to put in a few more levels, enemy types, weapons and powerups before getting it on the marketplace, but the good news is it will most certainly be free!

All in all it was overwhelming, and the encouragement we received from Lee Stott, Rob Miles and the MonoGame guys was great. Ultimately this is why I gave up a career in IT to get into the games industry: there's so much satisfaction in putting your heart and soul into producing a game and then seeing others get a lot of enjoyment from it. Winning the People's Choice award as well as the judges' award was the icing on the cake, and I'd like to thank everyone who voted for us and gave us great feedback.

Stay tuned for more Hypermorph news soon…