Hybrid Rendered Dragon Scene (Ray Marching, Forward Rendering)

This is a quick rundown of my Advanced Rendering coursework submission, built on my own renderer written in C++ and DirectX 11. Below I've posted the report contents that I submitted with the code, detailing how each effect was implemented.

Effect Descriptions

Effect 1: Chamber Room Environment

The chamber walls and ceiling were rendered by ray marching implicit geometry defined with distance functions.

The walls and ceiling are evaluated inside the pixel shader on a screen-sized quad. I then perform a second ray marching pass for the interior pillar geometry. Keeping this in a separate pass let me blend the geometry in the correct order: the pillars needed to sit on top of the forward-rendered floor, which meant rendering first the walls, then the floor, and finally the pillars. The ray marched and forward rendered passes were combined in the scene using blending.

The structure is composed of four large-radius spheres for efficiency. The texture and bump-mapping effect is achieved by performing a texture lookup during ray marching and modifying the distance function to adjust the intersection point along the ray based on the texture sample.
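The core of the distance-function approach can be sketched in plain C++ (the real version lives in an HLSL pixel shader; the constants and names here are illustrative only):

```cpp
// Minimal sphere-marching sketch: step along the ray by the distance
// the field reports, which can never overshoot the nearest surface.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  add(Vec3 a, Vec3 b)   { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  scale(Vec3 a, float s){ return {a.x * s, a.y * s, a.z * s}; }
static float length(Vec3 a)        { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }

// Signed distance to a sphere of radius r centred at c.
float sdSphere(Vec3 p, Vec3 c, float r) {
    return length({p.x - c.x, p.y - c.y, p.z - c.z}) - r;
}

// March a ray until the distance field says we are on a surface.
// Returns distance travelled along the ray, or -1 if nothing was hit.
float rayMarch(Vec3 origin, Vec3 dir, Vec3 centre, float radius) {
    float t = 0.0f;
    for (int i = 0; i < 128; ++i) {
        Vec3 p = add(origin, scale(dir, t));
        float d = sdSphere(p, centre, radius);
        if (d < 0.001f) return t;   // close enough: hit
        t += d;                      // safe step
        if (t > 100.0f) break;       // far plane
    }
    return -1.0f;
}
```

The bump-mapping effect mentioned above corresponds to adding a small texture-derived offset to the value `sdSphere` returns before the hit test.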

All lighting in the program uses the Blinn-Phong reflection model.
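For reference, the Blinn-Phong model computes diffuse light from N·L and specular light from N·H, where H is the half-vector between the light and view directions. A minimal C++ sketch of the maths (the shader version is HLSL; the shininess value here is arbitrary):

```cpp
#include <algorithm>
#include <cmath>

struct V3 { float x, y, z; };

static float dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static V3 normalize(V3 v) {
    float l = std::sqrt(dot(v, v));
    return {v.x / l, v.y / l, v.z / l};
}

// Combined diffuse + specular intensity for unit-length inputs.
float blinnPhong(V3 n, V3 lightDir, V3 viewDir, float shininess) {
    float diffuse = std::max(0.0f, dot(n, lightDir));
    V3 h = normalize({lightDir.x + viewDir.x,
                      lightDir.y + viewDir.y,
                      lightDir.z + viewDir.z});   // half-vector
    float specular = std::pow(std::max(0.0f, dot(n, h)), shininess);
    return diffuse + specular;
}
```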

Effect 2: Animated Dragon

The dragon is a forward-rendered basic mesh model with texture mapping and shading. It is animated in the vertex shader, which performs several independent motions of local body parts: the tail sways up and down, the neck and head each move gently in their own rhythm, and breathing is emulated on the torso and throat.

The animation aims to give the impression of a living, breathing creature guarding its treasure hoard. The animations are driven by passing a timer value into the vertex shader and combining 'smooth step', sine and cosine functions of time.
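A sketch of the idea in C++ (the actual animation is HLSL vertex-shader code; the falloff and frequency constants here are hypothetical). The smoothstep falloff fades the sway to zero where the tail joins the body so the mesh doesn't crack:

```cpp
#include <algorithm>
#include <cmath>

// Hermite smoothstep, matching the HLSL intrinsic of the same name.
float smoothstep01(float edge0, float edge1, float x) {
    float t = std::clamp((x - edge0) / (edge1 - edge0), 0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

// Vertical tail offset: a sine sway whose strength fades smoothly
// to zero towards the body (distAlongTail = 0 at the body, 1 at the tip).
float tailSway(float time, float distAlongTail, float amplitude) {
    float falloff = smoothstep01(0.0f, 1.0f, distAlongTail);
    return amplitude * falloff * std::sin(time + distAlongTail * 2.0f);
}
```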

Normal bump-mapping is also implemented using a separate normal map texture.

Effect 3: Four Bumpy Stone Pillars

Similar to the walls and ceiling, a separate ray marching pass was used for the stone pillars. Four capped cylinders were defined using distance functions, and the parallax bump-mapping was done in the same way as before.
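A capped cylinder has a well-known signed distance function; a C++ sketch of the standard formulation for a y-axis-aligned cylinder of half-height h and radius r (the shader version is HLSL):

```cpp
#include <algorithm>
#include <cmath>

// Signed distance to a capped cylinder centred at the origin,
// aligned with the y axis: negative inside, positive outside.
float sdCappedCylinder(float px, float py, float pz, float h, float r) {
    float dXZ = std::sqrt(px*px + pz*pz) - r;  // radial distance past the side
    float dY  = std::fabs(py) - h;             // distance past the caps
    float ox  = std::max(dXZ, 0.0f);
    float oy  = std::max(dY, 0.0f);
    // Inside: the least-negative axis; outside: distance to the nearest edge.
    return std::min(std::max(dXZ, dY), 0.0f) + std::sqrt(ox*ox + oy*oy);
}
```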

Effect 4: Geometry Shader-based Particle Systems


Both fire and smoke particle system effects were created on the GPU using the geometry shader. Each system starts from a procedurally generated cone mesh. Each cone vertex is fed individually into the geometry shader, which creates an additional three vertices to form a quad, effectively transforming the cone into an array of quads. Each resulting quad is billboarded so that it always faces the camera.
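The billboard expansion amounts to offsetting the particle position along the camera's right and up vectors (taken from the view matrix in practice). A C++ sketch of the maths the geometry shader performs, with hypothetical names:

```cpp
#include <array>

struct P3 { float x, y, z; };

// Expand one particle centre into four quad corners facing the camera,
// emitted in triangle-strip order as a geometry shader would.
std::array<P3, 4> billboard(P3 centre, P3 right, P3 up, float halfSize) {
    auto corner = [&](float sx, float sy) {
        return P3{ centre.x + (right.x * sx + up.x * sy) * halfSize,
                   centre.y + (right.y * sx + up.y * sy) * halfSize,
                   centre.z + (right.z * sx + up.z * sy) * halfSize };
    };
    return { corner(-1, -1), corner(-1, 1), corner(1, -1), corner(1, 1) };
}
```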

The particle systems are animated using functions of time, sine and cosine inside the vertex shader. The fire system uses additive blending, while the smoke particles use an alpha fade to make them appear transparent.

The centre fire can be toggled to show the original preserved cone shape using the 'FireShape' UI variable.

A mesh model of a wall torch was used to contain the fire and smoke particle systems for each pillar. The torch is forward rendered and features normal bump-mapping. An additional central fire inside a torus brazier was also added.

Effect 5: A Procedural Bumpy Floor

The floor is made from a single quad primitive fed into the tessellation stage of the shader pipeline (hull and domain shaders). The quad is tessellated in a triangle domain using a variety of partitioning methods, changeable via the UI. The domain shader also perturbs the height of the floor using a 'smooth step' function of the tessellated patch coordinate, sine and cosine. The normals are recalculated by evaluating the same height function at two adjacent positions, computing a slope from each and normalizing the result.
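The normal recalculation is essentially a finite-difference scheme: sample the height function at two nearby offsets, build tangent vectors along each direction, and cross them. A C++ sketch with a hypothetical height function (the real one runs in the HLSL domain shader):

```cpp
#include <cmath>

// Illustrative height perturbation over the patch UV (constants hypothetical).
float floorHeight(float u, float v, float time) {
    return 0.2f * std::sin(u * 6.0f + time) * std::cos(v * 6.0f);
}

// Normal via finite differences: tangents (e, hu-h, 0) and (0, hv-h, e),
// crossed so the result points up (+y), then normalized.
void floorNormal(float u, float v, float time, float out[3]) {
    const float e = 0.01f;
    float h  = floorHeight(u, v, time);
    float hu = floorHeight(u + e, v, time);
    float hv = floorHeight(u, v + e, time);
    float nx = -(hu - h) * e;
    float ny = e * e;
    float nz = -(hv - h) * e;
    float len = std::sqrt(nx*nx + ny*ny + nz*nz);
    out[0] = nx / len; out[1] = ny / len; out[2] = nz / len;
}
```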

View-dependent tessellation is implemented inside the hull shader based on the camera's distance from the floor plane: the closer the camera, the more triangles are generated.

Effect 6: Ellipsoid and Torus using Tessellation Shaders

Both the dragon egg and brazier are made from single points that are input into the pipeline and converted inside the domain shader using parametric representations of an ellipsoid and a torus. This is done by ‘wrapping’ the patch UV coordinate space around the respective shape.
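The 'wrapping' is just the standard parametric surface equations with the patch UV scaled into angle ranges. A C++ sketch of both mappings (the real versions run in the HLSL domain shader):

```cpp
#include <cmath>

const float PI = 3.14159265358979f;

// Map the domain-shader UV in [0,1]^2 onto an ellipsoid with radii a, b, c.
void ellipsoidPoint(float u, float v, float a, float b, float c, float out[3]) {
    float theta = u * 2.0f * PI;   // around the equator
    float phi   = v * PI;          // pole to pole
    out[0] = a * std::sin(phi) * std::cos(theta);
    out[1] = b * std::cos(phi);
    out[2] = c * std::sin(phi) * std::sin(theta);
}

// Map UV onto a torus with ring radius R and tube radius r.
void torusPoint(float u, float v, float R, float r, float out[3]) {
    float theta = u * 2.0f * PI;   // around the ring
    float phi   = v * 2.0f * PI;   // around the tube
    out[0] = (R + r * std::cos(phi)) * std::cos(theta);
    out[1] = r * std::sin(phi);
    out[2] = (R + r * std::cos(phi)) * std::sin(theta);
}
```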

Effect 7: Dragon Tail Spikes


The dragon tail spikes were created inside the geometry shader by calculating a single new centroid vertex and using the existing vertices to form three new triangle faces. The effect was localised to just the tail using the world position of the vertices.
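The centroid vertex is raised along the face normal to form a spike apex; each original edge then pairs with the apex to make one of the three new faces. A C++ sketch of the maths the geometry shader performs:

```cpp
#include <cmath>

struct T3 { float x, y, z; };

// Compute the spike apex for triangle (a, b, c): the centroid pushed
// out along the face normal by 'height'.
T3 spikeApex(T3 a, T3 b, T3 c, float height) {
    T3 centroid = { (a.x + b.x + c.x) / 3.0f,
                    (a.y + b.y + c.y) / 3.0f,
                    (a.z + b.z + c.z) / 3.0f };
    // Face normal from the cross product of two edges.
    T3 e1 = { b.x - a.x, b.y - a.y, b.z - a.z };
    T3 e2 = { c.x - a.x, c.y - a.y, c.z - a.z };
    T3 n  = { e1.y * e2.z - e1.z * e2.y,
              e1.z * e2.x - e1.x * e2.z,
              e1.x * e2.y - e1.y * e2.x };
    float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    return { centroid.x + n.x / len * height,
             centroid.y + n.y / len * height,
             centroid.z + n.z / len * height };
}
```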

Extra Features

Extra features include a strong wooden door made by texturing and bump-mapping a quad. I also added some precious gemstones to the floor, made in the same way as the egg (parametric ellipsoid) but tessellated much less to give them a more geometric look.


This coursework took me in the region of 2-3 weeks, including research and learning the more advanced shader pipeline stages such as hardware tessellation and geometry shaders. Blending the scene components together was quite a headache, and there are some noticeable blocky bits where the particle systems overlap, caused by issues I had blending everything together. Despite this, it was a great learning opportunity for some of the more advanced forward rendering techniques, and luckily my past experience with ray tracing helped a great deal. In the end I received a mark of 96% for it.

Falling Object Simulator – Simulation & Concurrency

As part of my MSc Computer Science degree, the Simulation & Concurrency module was probably the most intense module on the course. It tasked us with producing a physics engine from scratch, with robust networking and multi-threading integration, in order to simulate balls falling into a box with removable trays and a cloth-simulated net.

Having done only a little physics programming previously, for 'The Column', I set about researching the topic, since implementing a solid and robust physics engine is no trivial task even without a networking element. Although I found several good sources for specific elements, Ian Millington's 'Game Physics Engine Development' was an excellent book that covered many aspects of getting a basic physics engine up and running. I promptly devoured about a third of the book during this project, though it lacks any real depth on collision detection and doesn't really cover cloth simulation, as I recall.

In the end I received 87% for the ACW, which I'm pleased with. With more time I would have implemented rigid body motion, but this second semester of the MSc has been pretty insane in terms of workload, mainly because the UK runs MSc degrees over a single year rather than two like most of the rest of the world! Additionally, the University of Hull's MSc is extremely practical, which, although I find preferable to more theory-based degrees (how better to learn than by implementing?), does result in a heavy workload. The upside is that if you put the work in, you get an extensive portfolio at the end of the degree.

Project Description:

The result of the project was a multi-threaded interactive falling object simulator developed from scratch using C++ and DirectX 11. The physics engine is a mass aggregate system using particles, i.e. no rigid body motion. It features simple sphere- and plane-based collision detection and interpenetration resolution.

Each tray features different friction and elasticity attributes as per the specification.

An advanced feature is the cloth simulation for the net, made using a lattice of spring constraints (Hooke's law) with four anchored corners.
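Each spring in the lattice applies a restoring force proportional to how far it is stretched or compressed from its rest length. A minimal 2D C++ sketch of Hooke's law as used per spring (the real engine works in 3D and adds damping):

```cpp
#include <cmath>

// Signed spring force magnitude along the line between two particles:
// F = -k * (currentLength - restLength).
// Negative = stretched (pulls the particles back together),
// positive = compressed (pushes them apart).
float springForce(float ax, float ay, float bx, float by,
                  float restLength, float stiffness) {
    float dx = bx - ax, dy = by - ay;
    float len = std::sqrt(dx * dx + dy * dy);
    return -stiffness * (len - restLength);
}
```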

Net collision detection uses small spheres mapped to the vertices of the net. This, however, meant I had to make the springs quite rigid to stop balls from forcing their way between the vertices, so the cloth is not very fluid or fluttery.

Without rigid body dynamics to produce cube rotations, I used rod constraints connecting each vertex of the cube. This is a simple way to get rotations using just particles.
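A rod constraint is typically satisfied after integration by moving both particles along the rod axis until their separation matches the rod length again. A 2D C++ sketch assuming equal particle masses (the engine's version is 3D):

```cpp
#include <cmath>

// Move each particle half the error along the rod axis so their
// separation returns to exactly rodLength (equal-mass case).
void satisfyRod(float& ax, float& ay, float& bx, float& by, float rodLength) {
    float dx = bx - ax, dy = by - ay;
    float len = std::sqrt(dx * dx + dy * dy);
    float diff = (len - rodLength) / len;   // fraction of the axis to correct
    ax += dx * 0.5f * diff;  ay += dy * 0.5f * diff;
    bx -= dx * 0.5f * diff;  by -= dy * 0.5f * diff;
}
```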

Rendering and physics integration are performed on separate threads, with an additional three threads handling networking. The rendering frame rate is therefore independent of the simulation, and both can be set to run at a specific target rate.

Not shown in the video, but a significant part of the project, is the peer-to-peer networking. The program can be run on two peers, which communicate and synchronise the simulation using linear interpolation of the scene's physics data. Each peer can be interacted with independently (think multiplayer): the camera can be moved on each, and commands such as open/close tray and spawn ball are communicated across the network to the other peer. The network code was written using Winsock, with UDP broadcasting used purely for peer detection and TCP for data transmission. Packet loss and latency resilience were also implemented.
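The interpolation side of the synchronisation is straightforward: between two received snapshots of an object's state, blend linearly by how far through the update interval the local clock is. A C++ sketch (the struct and names here are illustrative, not the actual packet format):

```cpp
// Blend a remote object's state between two network snapshots to
// hide the gap between updates: t = 0 gives 'from', t = 1 gives 'to'.
struct BallState { float x, y, z; };

BallState lerpState(const BallState& from, const BallState& to, float t) {
    return { from.x + (to.x - from.x) * t,
             from.y + (to.y - from.y) * t,
             from.z + (to.z - from.z) * t };
}
```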