OpenGL Cross-platform PC/PSP Game Coursework

Last semester as part of the Advanced Graphics module of my CS degree at Hull University, we were tasked with a group project to produce a cross-platform OpenGL mini-game for the PC and Sony PSP based on a specification. The game premise was to move around a 3D ‘maze’ consisting of four rooms and connecting corridors, avoiding a patrolling AI that would shoot you if within its line of sight. The objective was to collect 3 keys to activate a portal to escape and beat the game.

The groups of four were selected completely at random. As usual, group coursework assignments are particularly difficult due to the extra concerns of motivating members and assigning work, and by year 3 of university you get a good idea of the best way of operating within them to secure good grades. I went in with the mindset of doing as much work as possible after we assigned tasks: hopefully everyone would carry out their allocated work, and if not, I’d just go ahead and do it, no fuss. Luckily one chap in my group was a friend and he did an excellent job coding the AI, mini-map and sound while I worked on the geometry, camera, lighting and player functionality.


Mini-maze model

Static environment lighting


Cross-Platform Limitations:

Having worked with OpenGL and shaders last year for my 3D ‘The Column‘ project, it was somewhat limiting to realise that the PSP didn’t support them and that fragment-based lighting was a no-go. With one requirement of the game being a torchlight effect that illuminated the geometry, PSP compatibility meant vertex-based lighting would need to be implemented, and that meant tessellating primitives to prevent the lighting looking very blocky and… well, very 90’s. Luckily the PSP did at least support VBOs (Vertex Buffer Objects), which meant each tessellated model could be loaded onto the graphics card only once to improve performance.

Unified Code

An interesting aspect of this project was the required consideration for a consolidated code-base that, where possible, allowed shared functionality for both the PC and PSP platforms, i.e. limiting how much platform-specific code was used. This was essential since the game would be a single C++ solution for both platforms.

I designed the code structure based around principles Darren McKie (the course lecturer) described, and produced the following class diagram that reflects the final structure:

Unified Cross-platform Class Diagram


The majority of the game code resides in ‘Common Code’ classes that are instantiated by each platform’s ‘Game’ object. Certain code, such as API rendering calls, was kept platform specific but made use of the common classes where necessary. A particularly nice way of ensuring the correct platform-specific object was instantiated was to use ‘#ifdef’/‘#ifndef’ preprocessor statements, handled by a ‘ResourceManager’ class.
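As a rough sketch of the idea (the class names below are illustrative rather than the actual coursework code), the preprocessor check can be buried inside a single factory method so the shared game code never has to branch on platform itself:

```cpp
#include <iostream>

// Common code talks to an abstract Renderer; the preprocessor picks the concrete
// platform class inside the ResourceManager. PLATFORM_PSP is a hypothetical define
// that would only exist in the PSP build configuration.
class Renderer {
public:
    virtual ~Renderer() {}
    virtual void DrawScene() = 0;
};

class PCRenderer : public Renderer {
public:
    void DrawScene() { std::cout << "PC (gxbase/OpenGL) rendering path\n"; }
};

class PSPRenderer : public Renderer {
public:
    void DrawScene() { std::cout << "PSP SDK rendering path\n"; }
};

class ResourceManager {
public:
    static Renderer* CreateRenderer() {
#ifdef PLATFORM_PSP
        return new PSPRenderer();
#else
        return new PCRenderer();
#endif
    }
};

int main() {
    Renderer* renderer = ResourceManager::CreateRenderer();
    renderer->DrawScene();   // shared game code never checks the platform itself
    delete renderer;
    return 0;
}
```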

As mentioned earlier, per-vertex lighting had to be implemented for PSP compatibility. A primitive with a low number of vertices would thus result in very blocky lighting. To prevent this I created a tessellation function that subdivided each primitive’s triangles into many more. I played around with the tessellation depth to find how many iterations of subdivision could be achieved before inducing lag, and was very happy with the lighting result considering there is no fragment shader; a given in today’s modern pipeline-based rendering.
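A minimal sketch of the kind of subdivision involved (illustrative only; the types and names here are hypothetical, not the original code). Each triangle is split into four using its edge midpoints, and the pass is repeated to the chosen depth:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 a, b, c; };

static Vec3 Midpoint(const Vec3& a, const Vec3& b) {
    Vec3 m = { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
    return m;
}

// Split every triangle into four smaller ones, 'depth' times, so per-vertex
// lighting has enough vertices to interpolate over.
void Tessellate(std::vector<Triangle>& tris, int depth) {
    for (int level = 0; level < depth; ++level) {
        std::vector<Triangle> next;
        next.reserve(tris.size() * 4);
        for (size_t i = 0; i < tris.size(); ++i) {
            const Triangle& t = tris[i];
            Vec3 ab = Midpoint(t.a, t.b);
            Vec3 bc = Midpoint(t.b, t.c);
            Vec3 ca = Midpoint(t.c, t.a);
            // three corner triangles plus the centre triangle
            Triangle split[4] = { { t.a, ab, ca }, { ab, t.b, bc },
                                  { ca, bc, t.c }, { ab, bc, ca } };
            next.insert(next.end(), split, split + 4);
        }
        tris.swap(next);
    }
}
```

Each level multiplies the triangle count by four, which is why the subdivision depth had to be balanced against frame rate (and dropped entirely on the PSP).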

Active Portal


The PSP implementation proved more tricky due to getting to grips with the PSP SDK and having access to very little documentation; however, the game was successfully implemented on a PSP device and ran with decent performance after compressing the textures and removing geometry tessellation to allow for the PSP’s limited memory capacity.

The game was written in C++, and the following libraries and software were used:

  • GXBase OpenGL API
  • Sony PSP SDK
  • OpenAL
  • Visual Studio 2012
  • Paint .Net

Solar System Orrery – HTML5 Canvas


Orrery Zoom

For quite a while I’ve been meaning to arrange some web hosting and put my solar system Orrery online for people to access; I’m pleased to say I’ve finally got around to doing it.

(Click here to go to the Interactive Orrery)

The project was part of the 2D Graphics module coursework for my Computer Science degree. It’s written in JavaScript and utilises the powerful HTML5 canvas for rendering.

It’s not an accurate scientific representation; however, the planets’ distances are to scale in relation to each other (not in relation to the sun), and the frequency with which each planet completes a full orbit (its year) is also true to real life. There are two orbit modes, ‘circular’ and ‘elliptical’, and also two simulation modes where acceleration and velocity are calculated from the mass of each object and thus the force of gravity. One simulation mode keeps the Sun centred while the planets orbit around it; the second allows the Sun to be affected by its orbiting bodies.
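The Orrery itself is written in JavaScript, but the per-frame update in the simulation modes boils down to something like the sketch below (shown in C++ purely for illustration; the names and the handling of the gravitational constant are hypothetical):

```cpp
#include <cmath>

struct Body {
    double mass;
    double x, y;     // position
    double vx, vy;   // velocity
};

// One Euler step for a body attracted by a single attractor (e.g. a planet and
// the Sun in the Sun-centred mode): a = G * M / r^2, applied along the line
// between the two bodies, then integrated into velocity and position.
void StepTowards(Body& b, const Body& attractor, double G, double dt) {
    double dx = attractor.x - b.x;
    double dy = attractor.y - b.y;
    double distSq = dx * dx + dy * dy;
    double dist = std::sqrt(distSq);
    double accel = G * attractor.mass / distSq;

    b.vx += accel * (dx / dist) * dt;
    b.vy += accel * (dy / dist) * dt;
    b.x  += b.vx * dt;
    b.y  += b.vy * dt;
}
```

In the second simulation mode the same acceleration is applied to the Sun from each planet as well, which is what lets a massive new planet drag everything else around.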

elliptical

It’s really a bit of fun: you can create new planets of enormous size by simply holding down your mouse on the simulation until you’re happy with the size, then letting go and watching how all the orbiting bodies are affected. You can also flick the planet as you release it to set its starting velocity (this seems to work much better in Chrome than in IE). I also highly recommend running it in full-screen mode by pressing ‘W’ if you have a reasonably specced system.

Another cool thing is the zoom feature: if you pause the program via ‘P’ you can scroll around with the cursor keys and take a look at some of the relatively hi-res images I used for each planet. The Earth and its orbiting Moon are pretty cool to zoom right into, as pictured above.

Detailed instructions are available on the page. Please check it out here and have a play around: www.alexrodgers.co.uk/orrery

simulation

Exchange Reports Project Overview

During this summer, in between semesters, I was fortunate enough to get a software development job for a local company just a 10-minute walk from my door. The project was to produce an ‘Exchange Reports’ system that would provide email messaging statistics exactly to the customer’s specification. The system would be automated so that, after reports were designed, they would be generated programmatically by a service and emailed to any recipients that had been set up to receive each report. The solution was to comprise three distinct programs, developed along with configuration tools to set up the non-GUI processes in the solution (namely the services).

I have produced the following diagram to demonstrate the solution’s processes (click to enlarge):

The design was in place when I started and an existing code-base was also present, but the vast majority of the functionality still needed to be added. It was my first time working professionally as a software engineer, and therefore also my first time getting to grips with existing code written by developers no longer around; more so, understanding the solution’s technical proposal well enough to execute exactly what the customer and my employer wanted. I think working in IT professionally for a lot of years certainly helped me get into a comfortable stride after the initial information overload of taking on, solely, what was a surprisingly large but beneficial technical project compared to what I had envisioned. Being thrown in at the deep end is probably the fastest way you can improve, and I feel that above all I have taken a lot from this experience which will prove valuable in the future. I’m very pleased with the outcome and successfully got all the core functionality finished in the time frame that was assigned. I wholeheartedly encourage students thinking of getting professional experience to go for it, ideally with an established company from which you can learn a great deal. Having experienced developers around to run things by is a great way to improve.

Now onto the technical details. The project was coded in C# and used WinForms, initially for testing processes and later for the configuration programs. I used a set of third-party .NET development tools from ‘DevExpress’ that proved to be fantastic and a massive boon for anyone wanting to create quick, great-looking UIs with reporting functionality. SQL Server provided the relational database functionality, an experience I found very positive; I very much enjoyed the power of the query language when it came to manipulating data via .NET data tables, data adapters, table joins or just simple direct commands.

Using the diagram as a reference, I’ll briefly go through each process in the solution for a) those interested in such things and b) future reference for myself while it’s still fresh in my mind, because I’ll likely forget much of how the system works after a few months of 3D graphics programming and uni coursework :P

Exchange Message Logs: 

In Exchange 2010, Message Tracking logs can be enabled quite simply and provide a wealth of information that can be used for analysis and reporting if so desired. They come in the form of comma-delimited log files that can be opened with a text editor. They have been around for a lot of years, and in the past, during IT support work, I have found myself looking at them from time to time to diagnose various issues. This time I’d be using them as the source of data for a whole reporting system. The customer was a large international company and, to give an example, just one of their Exchange systems was producing 40 MB of these messaging logs each day. With these being effectively just text files, that’s an awful lot of email data to deal with.

Processing Service: 

The first of the three core components of the solution, the Processing Service, as the name suggests, is an installable Windows Service that resides on a server with access to the Exchange messaging log files. The service is coded to run daily at a specified time, and its purpose comprises five stages:

1. Connect to the Exchange server and retrieve a list of users from the Global Address List (GAL). This is done using a third-party Outlook library called ‘Redemption’ that enables this information to be extracted; it is then checked for any changes to existing users and/or any new users. The users are placed in a table on the SQL database server and are used later to provide full name and department information for each email message we store.

2. Next, each Exchange message log is individually parsed, and useful messaging information is extracted and stored in various tables on the database server (a rough sketch of this parsing step follows the list). Parsed log file names are kept track of in the database to prevent reading logs more than once.

3. Any message forwards or replies are identified and tallied up.

4. A separate Summary table on the database is populated with data processed from the previously mentioned message tables. This table is what the reports look at to generate their data. Various calculations are made, the time difference between an email being received and then forwarded or replied to (to gauge estimated response times) being just one example; a whole plethora of fields are populated in this table, many more than could comfortably fit on a single report. Because of this large amount of potentially desirable data, we later allow the user to select which fields they want from the Summary table in the ‘Report Manager’ if they wish to create a custom report. Alternatively, and more typically, they use predefined database ‘Views’ that have been created for them based on the customer’s specification, which allow them to access only the data they need. Database Views are a really neat feature.

5. The database’s messaging tables are scoured for old records beyond a threshold period, which are then deleted. This maintenance is essential to prevent table sizes growing too large. The associated Summary data that has been generated is still kept, however, and I added functionality to archive it by serialising the data off and deleting it from the database if required.
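The service itself is written in C#, but the core of stage 2 is just splitting each comma-delimited line into fields and pushing the interesting ones into the database. A rough illustration (sketched in C++ here; the field positions and names are hypothetical, and real tracking logs carry many more columns):

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct MessageRecord {
    std::string timestamp;
    std::string sender;
    std::string recipients;
    std::string subject;
};

// Read one tracking log and return the extracted rows. In the real system these
// rows would be written to the SQL Server message tables instead of returned.
std::vector<MessageRecord> ParseTrackingLog(const std::string& path) {
    std::vector<MessageRecord> records;
    std::ifstream log(path.c_str());
    std::string line;

    while (std::getline(log, line)) {
        if (line.empty() || line[0] == '#') continue;   // skip header/comment lines

        std::stringstream ss(line);
        std::vector<std::string> fields;
        std::string field;
        while (std::getline(ss, field, ',')) fields.push_back(field);

        if (fields.size() < 4) continue;                // malformed line, ignore
        MessageRecord r = { fields[0], fields[1], fields[2], fields[3] };
        records.push_back(r);
    }
    return records;
}
```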

Report Manager:

Initially we had thought to utilise DevExpress’s ‘Data Grid’ controls in a custom Forms application, but we decided that the appearance of the reports generated this way was not satisfactory. This turned out to be a good design decision, since we later discovered DevExpress has remarkable reporting controls with very powerful design and presentation features that completely overshadow the Data Grids. After migrating some code from the old ‘Report Manager’ program and spending a day or two researching and familiarising myself with the DevExpress API, I had a great-looking new application that the customer will be using to design and manage the reports.

Report Manager program


The Report Manager allows you to design every aspect of a report through an intuitive drag-and-drop interface. Images and various graphics can also be added to beautify the design, though that wasn’t something I did, nor had the time to attempt! The data objects can be arranged as desired, and the ‘data source’ information for the report is saved along with its design layout via a neat serialisation function inherent to the ‘XtraReport’ object in the DevExpress library; this is then stored in a reports table on the database server for later loading or building. You can also generate the report on the fly and export it into various formats such as PDF, or simply print it. Another neat built-in feature is the ability to issue SQL query commands using a user-friendly filter (suitable for non-developers) in the report designer, which is then stored along with the layout. The user designing the report thus has absolute control over the data: a quick filter on Department being “Customer Services”, for example, returns only that department’s message data without me needing to code a method to do this manually, as was the case when using the Data Grids.

In the top left you’ll see specific icons that provide the necessary plumbing to the database server. ‘Save’, ‘Save As’ and ‘Load’ respectively write the serialised report layout to the database, create a new record with said layout, or load an existing saved report from the database into the designer. Loading is achieved by retrieving the list of report records stored in the reports table and placing it into a Data Grid control on a form, where you can select a report to load or delete. The ‘Recipients’ button brings up the interface for adding users who want to receive the report by email; this retrieves the user data imported by the Processing Service and populates a control that allows you to search through and select a user, or manually type a name and email address to add a custom recipient. Additionally, upon adding a recipient to the report you must select whether they wish to receive it on a daily, weekly or monthly basis. This information is then stored in the aptly named recipient table and related to the reports via a reportID field.

Report Service:

Nearly there (if you’ve made it this far, well done): the last piece in the solution is another Windows Service called the ‘Report Service’. This program sits and waits to run as per a schedule that can be set by a configuration app, which I’ll mention shortly. Like the Processing Service, as part of its logic it needs to check whether it’s the right time of day to execute, so the service polls itself every few minutes to see if this is the case. Upon running, it looks to see if it’s the right day for daily reports, day of the week for weekly reports, or day of the month for (you guessed it) monthly reports. If it is, it grabs the joined data from the reports and recipient tables, builds each report and fires them out as PDF email attachments to the associated recipients. It makes a final note of the last time it ran to prevent it repeatedly running on the same valid day.
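The actual service is C#, but the schedule test it performs is simple enough to sketch (C++ here for illustration; the names and parameters are hypothetical):

```cpp
#include <ctime>

enum Frequency { Daily, Weekly, Monthly };

// Returns true if a report with the given frequency is due today.
// scheduledWeekday uses 0 = Sunday; scheduledDayOfMonth is 1..31.
bool ShouldSendToday(Frequency freq, int scheduledWeekday, int scheduledDayOfMonth) {
    std::time_t now = std::time(0);
    std::tm local = *std::localtime(&now);

    switch (freq) {
        case Daily:   return true;
        case Weekly:  return local.tm_wday == scheduledWeekday;
        case Monthly: return local.tm_mday == scheduledDayOfMonth;
    }
    return false;
}
```

The service combines a check like this with the stored ‘last run’ time so a report is only ever sent once per valid day.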

Configuration Tools:

Two configuration apps were made, one for the Processing Service and one for the Report Service. These two services have no interfaces since they run silently in the background, so I provided an XML settings file and the two apps to store a variety of important data such as SQL connection strings and server authentication details (encrypted), to expose certain manual debugging options that may need to be executed, and to provide an interface for setting both services’ run times and the report delivery schedule.

Screens below (click to enlarge):

So that’s the solution start to finish. Depending on time, I’m told it’s possible it could be turned into a product at some point, which would be great, since other customers could potentially benefit from it too.

The great thing about a creative industry like programming, whether business or games, is that you’re ultimately creating a product for someone to use. It’s nice to know people somewhere will be getting use and function out of something you have made, and that’s just one reason why I’ve thoroughly enjoyed working on the project. I’ve learned a lot from my colleagues while working on it and hope to work with them again. You also get a taste for real-life professional development and how it differs in various ways from academic teachings, which although very logical and sensible are also idealistic (and rightly so). In the real world, when time is money and you need to turn around projects to sustain the ebb and flow of business, you have to do things in a realistic fashion, and that might mean cutting some corners when it comes to programming or software design disciplines. I always try my best to write code as cleanly as possible and this was no exception, but ultimately you need to get the project done first and foremost, and it’s interesting how that can alter the way software development pans out, with niceties like extensive documentation, ‘Use Case’ diagrams and robust unit testing potentially falling to the wayside in favour of a speedier short-term turnaround. Larger businesses can no doubt afford to manage these extra processes to great effect, but for small teams of developers it’s not always realistic, which I can now understand.

The Column: 3D Graphics Simulation


As the single fully weighted piece of work for the 3D Graphics module during the second year of my Computer Science degree at Hull University, I had to create an OpenGL graphics simulation. Despite having had little prior experience of 3D graphics frameworks, I am very pleased with the outcome and look forward to spending a lot more time with both the OpenGL and DirectX APIs; in particular, my final-year project looks to be a ray-tracing renderer (potentially CUDA), which should give me additional exposure to what is becoming a more and more promising technology for gaming.

I created a report accompanying the finished program which I’ll simply include bits of below to explain the project and how the simulation works.

The Column


The Column is a 3D graphics simulation designed around a series of stacked boxes containing cylinders. Balls are emitted at the top of the stack and interact with both the geometry and each other by way of collisions and responses. In addition, the simulation features a “Sphere of Doom”, a large sphere near the bottom of the stack that absorbs balls, shrinking their size and mass. A portal lies at the bottom of the stack that transports any balls that enter it back to the top of the column. The entire simulation is made using OpenTK (OpenGL) in C#. All geometry is constructed and all physics computed mathematically.

The specification determined that one emitter should emit balls with approximately the density of aluminium, the second copper, and the third gold.

The program simulates a dynamic system through various means. The balls use an Euler integration method with a gravitational constant which, combined with the calculated velocity, mass and density of each ball, simulates the motion of the balls falling down the column.
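The Column is written in C# with OpenTK, but the integration step itself is language-agnostic; a minimal sketch (C++ here, with hypothetical names and simple downward gravity only):

```cpp
struct BallState {
    float mass;
    float posY;   // height in the column
    float velY;   // vertical velocity
};

const float GRAVITY = -9.81f;   // illustrative gravitational constant

// One Euler step: acceleration feeds velocity, velocity feeds position.
void IntegrateEuler(BallState& ball, float dt) {
    ball.velY += GRAVITY * dt;
    ball.posY += ball.velY * dt;
}
```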

Ball-to-ball collision response is handled via “elastic collisions” based on the masses of the balls and their velocities perpendicular to the collision point, so a heavier ball will knock a lighter ball out of the way. Additionally, the angle of impact affects the amount of force transferred.
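In outline, the response resolves the velocities along the line between the two ball centres and applies the standard one-dimensional elastic collision equations there, leaving the tangential components untouched; that is what makes the impact angle matter. A sketch (C++ for illustration; the simulation itself is C#/OpenTK and these names are made up):

```cpp
#include <cmath>

struct Ball {
    float mass;
    float px, py, pz;   // position
    float vx, vy, vz;   // velocity
};

void ResolveElasticCollision(Ball& a, Ball& b) {
    // Unit normal from a to b: the collision axis.
    float nx = b.px - a.px, ny = b.py - a.py, nz = b.pz - a.pz;
    float len = std::sqrt(nx * nx + ny * ny + nz * nz);
    if (len == 0.0f) return;
    nx /= len; ny /= len; nz /= len;

    // Velocity components along the normal.
    float va = a.vx * nx + a.vy * ny + a.vz * nz;
    float vb = b.vx * nx + b.vy * ny + b.vz * nz;

    // 1D elastic collision along the normal; heavier balls lose less velocity.
    float ma = a.mass, mb = b.mass;
    float vaAfter = (va * (ma - mb) + 2.0f * mb * vb) / (ma + mb);
    float vbAfter = (vb * (mb - ma) + 2.0f * ma * va) / (ma + mb);

    a.vx += (vaAfter - va) * nx; a.vy += (vaAfter - va) * ny; a.vz += (vaAfter - va) * nz;
    b.vx += (vbAfter - vb) * nx; b.vy += (vbAfter - vb) * ny; b.vz += (vbAfter - vb) * nz;
}
```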

Rendering is performed via OpenGL 3.1 using Vertex Buffer Objects. All primitive 3D models have been constructed manually or mathematically. I use GLSL vertex and fragment shaders for “Phong shading”: ambient, diffuse and specular lighting calculations that provide interpolated lighting of geometry between vertices. My scene uses 3 point light sources and has built-in support for both directional and spot lights if desired.

I have implemented a particle system object that emits particles of a given shape. I have used simple quad planes in the simulation as a performance optimisation, rotating them for added effect combined with the lighting. The particles are highly customisable in lifetime, movement, scale and quantity, and can be added for any desired event. I use them specifically for collisions with the Sphere of Doom and for balls spawning from the emitters.

My portals use a Frame Buffer Object which renders the scene from the desired camera position to a texture. I then switch to the display frame buffer and render the entrance and exit portals using the respective textures, giving the effect of seeing through the portals to their destinations, which are in turn updated in real time.
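The project uses OpenTK’s bindings from C#, but the underlying GL call sequence for that render-to-texture pass is roughly as follows (raw OpenGL shown for illustration, assuming a 3.x context and function loader are already set up; a depth attachment and error checks are omitted):

```cpp
GLuint portalFbo = 0, portalTexture = 0;

void CreatePortalTarget(int width, int height) {
    glGenTextures(1, &portalTexture);
    glBindTexture(GL_TEXTURE_2D, portalTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &portalFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, portalFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, portalTexture, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void RenderPortalView() {
    glBindFramebuffer(GL_FRAMEBUFFER, portalFbo);  // draw into the portal texture
    // ...render the scene from the destination portal's camera...
    glBindFramebuffer(GL_FRAMEBUFFER, 0);          // back to the display framebuffer
    // ...render the main scene; the portal quad samples portalTexture...
}
```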

Bottom-Up

I have spent considerable time optimising the simulation to maximise the overall frame rate. Much of this has been achieved by streamlining the shader structure to avoid dynamic branching, specifically by avoiding “if” statements, using step functions, and moving as many calculations as possible to the vertex shader. The fragment lighting calculations are easily the most intensive part of the simulation, and reducing the lights to a maximum of 3 per fragment has also helped greatly.

With a simulation such as this, there is always something that could be improved, tweaked, optimised or added. Suffice to say, however, I am very satisfied with the quality of the finished product, which has more than surpassed my initial expectations, and I feel I have learned very useful and contemporary skills that will be essential for the future. Perhaps most importantly, I have thoroughly enjoyed the assignment.

I’ll get a video of it in motion uploaded at some point. I’m currently looking at improving my portals a bit, potentially by using an asymmetric frustum.

Sphere of Doom

Hypermorph Wins Three Thing Game Competition

So it’s been a frantic couple of weeks: plenty of coursework to do, and last weekend was the much-anticipated Three Thing Game competition. For anyone not in the know, this is held each semester at Hull University and challenges teams to come up with a game based around three auctioned words per team. Judges then score based on the game’s relevance to the words and the quality/fun of the game. The competition involves a marathon 24-hour programming session to get your game finished on the day. This one was the biggest yet, with 39 teams competing. We really couldn’t have asked for better “Things”, because a combination of good bidding and luck meant we came out with “Flying”, “Tank” and “Bombs”. Considering another team got “Teddy Bear”, “Deodorant” and “Pop Tart”, I think we did OK!

Last year we came second with Shear Carnage, and I can honestly say that this year we really, really wanted to win it. This was evident just from the focus we had; when the day of the competition came, I think I probably left my seat half a dozen times in the whole 24 hours! In hindsight we probably took it far too seriously, and as a result I think it sacrificed a lot of the enjoyment of the competition and resulted in some contention regarding ideas, which seemed inevitable considering vested interests and no single leader within the team. On a personal note, much was learnt regarding teamwork, and there are aspects of the planning and design process I would do differently next time. Luckily it all turned out worth it in the end, so it’s very hard to regret any decisions, but this was by no means a painless endeavour!

Me on the right, Russ in the middle, John on the left. Lee Stott at the back.

So, to the game: Hypermorph is a retro-style side-scrolling shooter that takes me back to my childhood days playing classics such as Xenon 2, R-Type and Menace on the Amiga. Back then the shoot ’em up was a staple video game genre and hugely popular; only now that mobile platforms have taken off is the genre feasible again, because it’s the perfect style of game to have a quick blast on when you want to pass a little bit of time. The thing that’s pretty novel in Hypermorph is the ability for the player to switch between two different forms, a spaceship and a hover tank, by simply tapping the screen. We made the game using XNA (C#) for Windows Phone 7 and coded everything ourselves (no third-party libraries).

I produced the art for the game, and managing both the art and a lot of the programming was a challenge in itself on the day, resulting in most of the art being done in the last few hours. I had a good idea in my head of what the game would look like when we were bouncing the initial idea around; my regret is that I didn’t produce any concept art sooner to put the rest of the team at ease. For a long time I think we were each left with our own ideas of how the game would look, but once I came up with the first concept drawing for the ship, the team were all in favour, to my relief!

We had decided to make the game quite dark and moody, but with bright weapon and explosion effects to make them really stand out. Additionally, we wanted to make the controls as hands-off as possible. We learned from Shear Carnage that using touch too frequently can obscure a lot of the screen, so we instead went for tilt-based movement for the player and a single touch to morph between tank and spaceship. Importantly, we set it to auto-fire constantly, since you soon realise that in this genre there’s never a time you don’t want to be firing.

One feature I’m really pleased we put in was the voice effects for power-ups and various other things. It adds a lot to the immersion and, again, really goes back to the genre’s roots.

Of course we have plans to get Hypermorph out on both the WP7 and Windows 8 marketplaces ASAP, but uni coursework is currently being prioritised. At the competition were Lee Stott from Microsoft and guys from the MonoGame team. Lee’s encouragement was inspiring, and I’d also like to thank him and Microsoft for providing the cool prizes. The MonoGame guys were brilliant, and we spent a fair amount of time chatting with them about getting our games ported to the various platforms; they even ported Shear Carnage and my Robocleaner game for us to show us how easy it is (albeit with some coding required to get them ready for the marketplace).

Ultimately we are going to want to put in a few more levels, enemy types, weapons and power-ups before getting it on the marketplace, but the good news is that it will most certainly be free!

All in all it was overwhelming, and the encouragement we received from Lee Stott, Rob Miles and the MonoGame guys was great. Ultimately this is why I gave up a career in IT to get into the games industry: there’s so much satisfaction in putting your heart and soul into producing a game and then seeing others get a lot of enjoyment from it. Winning the People’s Choice award as well as the judges’ award was the icing on the cake, and I’d like to thank everyone who voted for us and gave us great feedback.

Stay tuned for more Hypermorph news soon…

C++ word wrap for console output – Tutorial

I’ve been tinkering in C++ and decided to start making an old-fashioned “text adventure” game as a nice little project to help grasp the language.

As I started writing the game, it quickly became clear that manually formatting all the strings with newline characters (\n) to stop the text breaking up mid-sentence in the console was going to be a hassle. I fancied a challenge, and rather than grabbing some pre-written code from the web (how not to learn anything), I decided to work out an algorithm and make my own function that can deal with any string size and wrap the text neatly to the console.

I thought I’d make this little tutorial for anyone who wants to use it in their program but also wants to understand how the code works.

Here’s the function:

Note: functionality has been gradually added; the updates further down detail the changes. I’ve included a Visual Studio solution zip with the latest code at the bottom of this page.
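In outline, the function looks like this (the zip at the bottom of the page has the latest version; BUFFER_SIZE is the console width, 80 by default):

```cpp
#include <iostream>
#include <string>

#define BUFFER_SIZE 80   // default console buffer width in characters

// Prints 's' to the console, padding lines with spaces so words never split mid-line.
void OutputText(std::string s)
{
    // Loop over every character position (1-based so the modulo test lines up
    // with whole console lines).
    for (size_t i = 1; i <= s.length(); i++)
    {
        if (i % BUFFER_SIZE == 0)        // this character sits at the end of a line
        {
            if (s[i - 1] != ' ')         // the line would break in the middle of a word
            {
                int spaceCount = 0;      // how far we backtrack to reach a space

                // Walk backwards until we hit the space before the split word.
                for (size_t j = i - 1; j > 0; j--)
                {
                    if (s[j] == ' ')
                    {
                        // Pad out the line so the split word starts on the next line.
                        s.insert(j, spaceCount, ' ');
                        break;
                    }
                    spaceCount++;
                }
            }
        }
    }
    std::cout << s << std::endl;
}

int main()
{
    OutputText("This is a long string that will be wrapped to the console window "
               "without any words being split across two lines by the algorithm above.");
    return 0;
}
```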

Although the function is written here in C++, the algorithm can easily be applied to C# and other languages with just a few syntax changes. I was in two minds whether to go the route of splitting the string at the end of each line, adding a newline character and then “gluing” the strings back together, repeating for each line, or to do it the way I did: finding the last character on each line and then looping back through the line until it reaches a space, simply inserting whitespace to fill the line to the end.

I’ll go through exactly what the function is doing:

Here we are preparing to loop through each character in the string to find where we want the line breaks.

This bit of modulo arithmetic checks the current loop iteration value to see if it is a multiple of the console window buffer width (80 by default), and thus whether the character would sit at the end of a line. Checking for a multiple allows us to apply the function to a string of any size. You’ll need to define BUFFER_SIZE in your program or simply replace it with the numeric value you want, i.e. 80.

BUFFER_SIZE could easily be changed to a variable and set to whatever buffer width your console window is using (see the bottom of this article on how to get the current console buffer width value).

I initialise this variable for later use, to keep track of the number of characters the loop has backtracked through in the string to find a space. (We’ll need to insert this number of spaces into the string.)

Here we are establishing whether the character at the end of the line is already a space. If it is, we don’t need to do anything and it’ll skip to the next loop iteration. Note the “(i-1)” here: we do this because “i” is looking at the string in terms of its length, with the lowest possible length obviously being 1. However, when it comes to indexing the character array of the string “s”, arrays in C-style languages start at 0, so we need to subtract 1 from the 1-based position to balance it. I could have adjusted the “for” loop to start at 0 and end at (s.length()-1), but this would have meant adjusting the modulo statement, and personally it made more sense to me this way.

Once we have the character at the end of a line in “i” and know it’s not already a space, we loop backwards from this position in the string until we find a space, i.e. the end of a word.

As stated above, we check whether each character we loop through is a space (‘ ‘). If it is, then we have found the end of the last whole word that fits on the line, and thus the point where we need to insert whitespace. We know how many spaces to insert because each time a character is checked and isn’t a space, we increment the “spaceCount” variable; that count is exactly how many spaces are needed to fill the line to the end.

We then output the newly word wrapped string to the console!

It’s great for text adventures or any program that outputs large strings since you can just call this function with any size string (within memory limits!) and it’ll do the rest. I tried to keep the solution as neat and minimal as I could.

Update (14/08/2012): Retrieving the console buffer width value

I thought I’d add, in addition, a way to get the console buffer width from the console each time text is output. This allows users to change the console window size, and the text will wrap to it on the next output.
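The change itself is small: at the top of OutputText the hard-coded BUFFER_SIZE is swapped for a value read from the console on every call. In outline:

```cpp
int GetBufferWidth();   // defined below

void OutputText(std::string s)
{
    int bufferWidth = GetBufferWidth();   // current console width, re-read on every output

    for (size_t i = 1; i <= s.length(); i++)
    {
        if (i % bufferWidth == 0)
        {
            // ...word-wrap logic exactly as before, using bufferWidth...
        }
    }
    std::cout << s << std::endl;
}
```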

As you can see, I have added a call to “GetBufferWidth()” at the top. This returns an integer with the value of the currently set console buffer width and stores it in the bufferWidth variable that the OutputText function uses.

The code for GetBufferWidth is here:
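In outline (error handling omitted):

```cpp
#include <windows.h>

// Returns the current console screen buffer width (number of character columns).
int GetBufferWidth()
{
    CONSOLE_SCREEN_BUFFER_INFO csbi;
    GetConsoleScreenBufferInfo(GetStdHandle(STD_OUTPUT_HANDLE), &csbi);
    return csbi.dwSize.X;
}
```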

It’s a relatively simple function that makes use of the CONSOLE_SCREEN_BUFFER_INFO structure. You will need to ensure you have an “#include” for windows.h, specifically for the “wincon.h” child header containing the console services.

“dwSize.X” gives us the maximum number of available character “columns” in the window (80 by default). Note: “dwSize.Y” isn’t needed here, but it would give us the maximum number of available character lines for the window, so by default it would return a value of 300, since that’s the default limit for console output (it would not give us the number of lines within the visible portion of the window).

Update (20/10/2012): Wrapping string text with newline characters (paragraphs)

I recently had a comment mentioning the problem of using this algorithm with paragraphed text. I had already come across this issue and had added some additional code to allow it to work with strings containing “\n” (newline) characters, and I’ve been meaning to post an update with it for quite a while. I appreciate comments, because they mean people are using the code I’ve put on here and give me an incentive to update this post, so thanks.

Below is the full modified code. I have added a new block of code near the top of the loop to check for any “\n” characters (you could extract this into a separate function for neatness). If a “\n” is found, it inserts spaces into the string before the “\n” character, filling to the end of the line, and jumps the loop iteration to the first character on the next line. I’ve also added a new char declaration at the top of the loop.
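In outline, the modified function looks like this, with the new block marked (the complete listing is in the solution zip below):

```cpp
#include <iostream>
#include <string>
#include <windows.h>

// Returns the current console screen buffer width (number of character columns).
int GetBufferWidth()
{
    CONSOLE_SCREEN_BUFFER_INFO csbi;
    GetConsoleScreenBufferInfo(GetStdHandle(STD_OUTPUT_HANDLE), &csbi);
    return csbi.dwSize.X;
}

// Word-wraps and prints 's', now respecting '\n' characters already in the text.
void OutputText(std::string s)
{
    int bufferWidth = GetBufferWidth();

    for (size_t i = 1; i <= s.length(); i++)
    {
        char c = s[i - 1];   // new char declaration

        // New block: pad any existing '\n' out to the end of its line so the
        // wrapping arithmetic below stays aligned with the console.
        if (c == '\n')
        {
            int charPosInLine = (i - 1) % bufferWidth;            // column of the '\n'
            int spacesToAdd = (bufferWidth - 1) - charPosInLine;  // gap to the line end
            s.insert(i - 1, spacesToAdd, ' ');
            i += spacesToAdd;   // jump to the first character of the next line
            continue;
        }

        if (i % bufferWidth == 0)          // last character on this console line
        {
            if (c != ' ')                  // the line would break mid-word
            {
                int spaceCount = 0;
                for (size_t j = i - 1; j > 0; j--)
                {
                    if (s[j] == ' ')
                    {
                        // pad the line so the split word starts on the next line
                        s.insert(j, spaceCount, ' ');
                        break;
                    }
                    spaceCount++;
                }
            }
        }
    }
    std::cout << s << std::endl;
}
```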

By adding whitespace before the “\n” character, and thus moving it right to the end of the line, we effectively counter all the extra space the console generates when it hits a newline character. If the “\n” weren’t at the end of the line, that jump would completely throw off our algorithm’s positioning in the console window.

In detail:

Here, if the current character in the string array is a “\n”, we enter the new block. It then declares a new temporary variable, used shortly, that holds the character’s position within the line (as defined by the width of your console window).

Next we calculate the number of spaces we need to insert into the string at the “\n” character’s position to move it to the end of the line. We do this by finding the difference between the current character’s position in the line and the last position on the line.

We then insert the calculated number of spaces just before the “\n” character.

Then we increment the loop index by the number of spaces we just added, which sets it to the first character on the next line, and “continue;”, which takes us straight to the next loop iteration, repeating the process and checking every subsequent character for any more line breaks. If it doesn’t find one, it carries on as normal.

Here’s a screenshot of it in action with double “\n\n” line breaks inserted randomly into the string:


Download link for latest version of code (Visual Studio Solution):


C# / XNA: Passing Class instances (objects)

So, not being an expert, I was trying to find better ways of managing access to another class’s members without resorting to “public static”, and ideally wanted an object-oriented method. A while ago Rob Miles explained to us how classes, or specifically instances of classes (objects), are passed by reference. I understood this, but I didn’t really join the dots on one useful implication it has:

Passing an object as a parameter gives the receiving method, and potentially its class, access to the object’s members directly (since it’s a memory reference to that object, not a copy).

It may be obvious to some, but ultimately it means you don’t need to keep feeding objects in as parameters for update methods to see changes; you can just do it once and store the reference as a class member. The object’s members will all be directly accessible from within that class, even if they are changed elsewhere in your program/game.

I think the thing that didn’t make this obvious to me is that if you pass a class instance’s member variable instead of the instance itself, this behaviour doesn’t apply: you just end up with a copy of the member variable at the point it was passed, because it’s been passed by value, not by reference. Additionally, I found myself repeatedly passing object instances into Update methods, which worked but simply wasn’t needed, since a reference could be passed once and stored for the class. In the context of “Sweepy Cleaner”, our uni coursework project, the highly contrived examples below highlight what I mean:

Pass by value Example:
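As a rough analogue of this situation (sketched in C++ with made-up names; in C# the copy happens in just the same way when a value-type member is passed): handing a method a member variable gives it a copy, so later changes to the original are never seen.

```cpp
#include <iostream>

class Hoover {
public:
    int dirtCollected;
    Hoover() : dirtCollected(0) {}
};

class Scoreboard {
    int dirtCopy;   // snapshot taken at the time of the call
public:
    Scoreboard() : dirtCopy(0) {}
    void Update(int dirtCollected) { dirtCopy = dirtCollected; }
    void Draw() { std::cout << "Dirt: " << dirtCopy << "\n"; }
};

int main() {
    Hoover hoover;
    Scoreboard board;
    board.Update(hoover.dirtCollected);   // copies the value 0
    hoover.dirtCollected = 10;            // the scoreboard never sees this change
    board.Draw();                         // prints "Dirt: 0"
    return 0;
}
```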

Pass by Reference Example:
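And the same idea done by storing the object itself (again a C++ sketch with made-up names; in C# the stored field would simply be the class instance, since class instances are reference types):

```cpp
#include <iostream>

class Hoover {
public:
    int dirtCollected;
    Hoover() : dirtCollected(0) {}
};

class Scoreboard {
    const Hoover* hoover;   // stored once; no need to pass the object every Update
public:
    Scoreboard(const Hoover& h) : hoover(&h) {}
    void Draw() { std::cout << "Dirt: " << hoover->dirtCollected << "\n"; }
};

int main() {
    Hoover hoover;
    Scoreboard board(hoover);    // hand the reference over a single time
    hoover.dirtCollected = 10;   // the scoreboard reads the live object
    board.Draw();                // prints "Dirt: 10"
    return 0;
}
```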

The other obvious good thing about passing class instances instead of specific member variables is that you can of course access all members of that class instead of just the ones you pass. Additionally, it gets around the need to use public static for easy but rather lazy access to class members from outside the class, because only the classes you pass the instance to will be able to access its public members; and if you combine this with “accessor”/“getter” methods, I guess they don’t even need to be public.

(I’ve heard bad things about getters/setters too, because apparently they go against the object-oriented nature of C#, but I’m not too bothered about that at this stage.)

Anyway, you can of course still pass individual members via the “ref” keyword in the parameters and arguments to achieve the same thing for a single member. I personally think that for XNA, passing the instance just seems a sound way of doing things. I’m not sure if there are repercussions to doing it like that, but so far it seems sound, and I hope this may help others who also hadn’t realised its usefulness.