Optimized Game Programming (DirectX 12)

Demons Maze

This project was the final assignment of the module Optimized Game Programming and was developed by one person. The project involves a game prototype using DirectX 12 as its main graphics API, on top of which we have developed a framework that eases the loading, management, and creation of different graphics resources.

The example D3D12TextureQuad offered by AMD was used as a starting point for this project.


The prototype is a simple maze-inspired game in which the player has to find all the keys to open an exit portal located at the end of the maze. Throughout the maze, the player will encounter monsters that kill the player on overlap, forcing the player to respawn at the beginning of the maze.

The keys obtained by the player, as well as other extra information about the state of the game, are shown on a very simple UI.

  • First-person game where the player has to obtain some objects to complete a goal
  • Camera movement animation
  • Static scene/environment
  • Basic collision and overlap detection
  • Dynamic enemies
  • Instanced rendering
  • Different shaders for different elements of the game
  • Level loading from a text file (easily modifiable)
Level file

In order to facilitate the creation of levels for the prototype, a custom text format has been used, which allows the definition of different elements of the level:
  • Static scene pieces (tiles)
  • Enemy location & movement
  • Player starting and finish point
  • Pickup location
For example, to define the level layout, we use the following format, where certain characters (described in the file) represent specific level tiles.

```
// "T" "L" "E" "J"  dead end
// "-" "|"          corridor
// "q" "d" "p" "b"  corner
// "^" "<" ">" "_"  side wall

// Corridor with 2 dead ends
5 5
```

For the definition of the player start and end points, the following format is used:

```
// X, Y, Rotation
2 1 0

2 3 0
```

This file generates the following level:

The pickup positions, as well as the enemies, are defined using the following format.

Pickups:

```
// X, Y
3 5
5 5
```

Enemies:

```
// X, Y
4 3
8 4
12 8
```
All game objects (tiles, pickups, and enemies) use instanced rendering.

During level loading, the level file is parsed and all the needed buffers are created to store the instance transformations.
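As a sketch of how the layout section of the level file might be parsed, the following reads the width/height header and collects one record per tile character (the type names and parsing details here are illustrative assumptions, not the project's actual code):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical tile record: grid position plus the character that
// selects which tile mesh/rotation to instance.
struct TileInstance {
    int x = 0;
    int y = 0;
    char type = ' ';
};

// Parses the "width height" header followed by `height` rows of tile
// characters. Spaces mean "no tile"; lines starting with // are comments.
std::vector<TileInstance> ParseLayout(std::istream& in) {
    std::string line;
    int width = 0, height = 0;
    while (std::getline(in, line)) {
        if (line.rfind("//", 0) == 0 || line.empty()) continue;  // skip comments/blanks
        std::istringstream header(line);
        header >> width >> height;
        break;
    }
    std::vector<TileInstance> tiles;
    for (int y = 0; y < height && std::getline(in, line); ++y)
        for (int x = 0; x < width && x < static_cast<int>(line.size()); ++x)
            if (line[x] != ' ') tiles.push_back({x, y, line[x]});
    return tiles;
}
```

Each TileInstance would then be turned into a world transform and appended to the instance buffer of the matching tile type.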

The UI elements are created from code and are rendered (for now) as independent elements. Ideally, UI elements should also be rendered instanced, with per-instance properties selecting the right texture and color.

The following image shows the rendering process analyzed in RenderDoc.



(RenderDoc capture: instanced draw calls for the corridor tiles, no-wall tiles, corner tiles, and one-wall tiles, followed by 7 individual UI draw calls.)

As can be seen in the previous analysis, objects outside the frustum are still being rendered, since they all share the same MeshRenderer and are drawn as part of the same instanced call. This is not ideal. A possible solution would be to implement frustum culling and update the instance buffer before sending the data to the backend.
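A minimal sketch of that idea, assuming the frustum is available as six planes and instances carry a bounding sphere (all types here are illustrative stand-ins, not the project's code):

```cpp
#include <cassert>
#include <vector>

// Minimal stand-ins (the real project stores full 4x4 transforms;
// positions and a bounding-sphere radius are enough to sketch the idea).
struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };  // points with dot(n, p) + d >= 0 are inside

float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true if a sphere is at least partially inside all frustum planes.
bool SphereInFrustum(const std::vector<Plane>& frustum, const Vec3& center, float radius) {
    for (const Plane& p : frustum)
        if (Dot(p.n, center) + p.d < -radius) return false;  // fully behind one plane
    return true;
}

// Rebuilds the CPU-side instance list with only the visible instances;
// this list would then be uploaded to the instance buffer each frame.
std::vector<Vec3> CullInstances(const std::vector<Plane>& frustum,
                                const std::vector<Vec3>& centers, float radius) {
    std::vector<Vec3> visible;
    for (const Vec3& c : centers)
        if (SphereInFrustum(frustum, c, radius)) visible.push_back(c);
    return visible;
}
```

The instanced draw then uses the size of the culled list as its instance count.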

We can also see how not using instanced rendering for the UI elements causes an unnecessary increase in the number of draw calls. This could be optimized as explained above.


One of the personal goals of the assignment (it wasn't required) was to abstract the creation and management of DirectX resources away from the game code, facilitating the implementation of other graphics APIs in the future.

For that purpose, something similar to the renderer in Doom 3 has been implemented: the renderer is split into a FrontendRenderer and a BackendRenderer.

Our framework contains functionality that allows the composition of scenes in a simple and fast manner, using Unity-like coding and naming conventions.

For example, for the creation of a simple scene with a cube, the following code could be used:
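The original code sample is not reproduced here, so the following is a minimal, self-contained sketch of what such Unity-like code might look like. Scene and MeshRenderer are class names from the project, but their interfaces below are assumptions for illustration:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Stand-ins for the framework's Unity-like classes; the exact
// interfaces are assumptions.
struct Component {
    virtual ~Component() = default;
};

struct MeshRenderer : Component {
    std::string mesh;
    explicit MeshRenderer(std::string m) : mesh(std::move(m)) {}
};

struct GameObject {
    std::string name;
    std::vector<std::unique_ptr<Component>> components;
    explicit GameObject(std::string n) : name(std::move(n)) {}

    template <typename T, typename... Args>
    T* AddComponent(Args&&... args) {
        components.push_back(std::make_unique<T>(std::forward<Args>(args)...));
        return static_cast<T*>(components.back().get());
    }
};

struct Scene {
    // The real Scene also creates a main camera internally.
    std::vector<std::unique_ptr<GameObject>> objects;

    GameObject* CreateGameObject(const std::string& name) {
        objects.push_back(std::make_unique<GameObject>(name));
        return objects.back().get();
    }
};

// Composing a scene with a single cube, Unity-style:
Scene MakeCubeScene() {
    Scene scene;
    GameObject* cube = scene.CreateGameObject("Cube");
    cube->AddComponent<MeshRenderer>("cube.obj");
    return scene;
}
```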

Resources Creation

When a scene is created, the game resources are added automatically to an object called SceneResourcesDesc, which contains information about the textures, meshes, shaders, and buffers that are involved in the scene.

When the scene starts, the object SceneResourcesDesc is sent to the BackendRenderer which creates the resources needed natively by the API (DX12 in this case) and those get stored using a unique id.

For this demo, the ids are assigned manually by the programmer, but in a real production environment it would make sense to replace this with an automatic GUID generator.
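A sketch of the simplest possible replacement for hand-assigned ids (a production engine would more likely use real GUIDs or hashes of asset paths; the names here are illustrative):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// A monotonically increasing counter is enough to guarantee unique
// resource ids within a run.
using ResourceId = std::uint64_t;

ResourceId NextResourceId() {
    static std::atomic<ResourceId> counter{0};
    return ++counter;  // thread-safe; never returns 0 (reserved as "invalid")
}
```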

For example, the main camera, which is created internally by the class Scene, adds a CBuffer which will contain camera properties such as the view and projection matrices.

If we add a breakpoint in the function LoadResources of our BackendRenderer (DX12Renderer), we can analyze the elements that will be loaded for that scene:

In this case, 2 CBuffers have been added for the 3D transformations of the main camera and the cube, plus 2 extra buffers for the camera properties (main camera and UI camera).

Resources Rendering

The class FrontendRenderer is in charge of gathering the information needed to render the scene. It is in this class where we could perform operations such as occlusion culling. In this step, an object called FrameGraph, containing all the scene information, is generated.

This FrameGraph object is then sent to the BackendRenderer which will perform API-specific operations to render the scene based on the information.

Below is the FrameGraph sent to the backend's Render function in the last example.

Similarly to the scene creation, in this case updated data from the camera buffers and meshes is sent for rendering. The FrameGraph mesh objects also contain their associated CBuffer holding the object transformation.
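As an illustration, a FrameGraph along these lines would carry everything the backend needs without any API-specific types (only the FrameGraph/CBuffer concepts come from the project; the field names are assumptions):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

using ResourceId = std::uint64_t;

// Updated contents for a buffer created earlier via SceneResourcesDesc.
struct CBufferUpdate {
    ResourceId buffer = 0;
    std::vector<std::uint8_t> data;  // e.g. an object's world matrix
};

struct MeshDraw {
    ResourceId mesh = 0;
    ResourceId renderState = 0;
    CBufferUpdate transform;          // per-object CBuffer, as described above
    std::uint32_t instanceCount = 1;  // > 1 for instanced draws
};

// Everything the BackendRenderer needs to render one frame.
struct FrameGraph {
    std::vector<CBufferUpdate> cameraBuffers;  // main camera + UI camera
    std::vector<MeshDraw> meshes;
};
```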

Due to the code separation between non-API and API code, it was decided not to use DirectXMath, since it would be API-specific. As an alternative, we have used GLM which, although traditionally associated with OpenGL, is a platform-independent library with no dependencies. Like DirectXMath, GLM supports SIMD, which could be used in performance-sensitive parts of the code.

As mentioned above, the framework supports single and instanced rendering.


Since this is a framework of limited scope, there are different self-imposed limitations in order to speed up the scene creation process.

The first limitation is related to the data structure received by the graphics pipeline. Shader properties, as well as object-bound textures, always have to follow the same GPU memory layout, and there are currently no configuration options for the programmer.

Although there is an option to choose between single or instanced rendering, other pipeline configurations such as blending, depth, or stencil are currently out of the user's reach.

Ideally, a way of specifying the properties/configuration of the RenderState should be available, probably as an external file, which could also contain the shader code, as Unity does.

Finally, a scene object hierarchy is currently not supported, which means all transformations are described in world space.


The backend renderer implemented for this assessment uses DirectX12 as required. All the classes related to this API use the prefix DX12.

DX12Renderer is the only BackEndRenderer subclass in the project and it contains all the essential code to set up the DirectX12 API. This includes:
  • The creation and setup of device and swap chain
  • The creation of command lists
  • The creation of the viewport
  • The setup of render targets

Since it works with the FrontEnd/BackEnd model described above, it also contains methods to process the FrameGraph and record all the needed operations into the current command list.

Most of the resource-related code has been moved into its own classes. The classes DX12Texture, DX12Buffer, DX12MeshBuffer, DX12InstanceBuffer, and DX12RenderState are meant to remove boilerplate code from the BackendRenderer and act as handles for these resources in the DX context.

As explained before, most of these classes lack configuration parameters at the moment, which means the user is limited in how and where these resources are allocated, and also how they are bound during runtime.

The class DX12RenderState allocates a pipeline state and root signature pointer, defining a render state, which will be bound before rendering a mesh.

At the moment, meshes are rendered in the order they appear in the FrameGraph. It would be better to define some kind of ordering: sorting by distance to the camera is a common practice to reduce overdraw, and it would also be a good idea to bundle meshes by render state so we can reduce the number of pipeline/root signature switches.
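Both orderings can be combined in a single sort key, as in this sketch (the draw record fields are assumptions):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative draw record.
struct MeshDraw {
    std::uint64_t renderState = 0;
    float distanceToCamera = 0.0f;
};

// Sorts draws so that all meshes sharing a render state are contiguous
// (fewer pipeline/root-signature switches), and within each state they
// go front to back (less overdraw).
void SortDraws(std::vector<MeshDraw>& draws) {
    std::sort(draws.begin(), draws.end(), [](const MeshDraw& a, const MeshDraw& b) {
        if (a.renderState != b.renderState) return a.renderState < b.renderState;
        return a.distanceToCamera < b.distanceToCamera;
    });
}
```

The FrontendRenderer would run this on the FrameGraph's mesh list before handing it to the backend.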

All render states use the same input layout for now, which is as follows:

Although the “COLOR” attribute is not used at the moment due to OBJ format limitations, it could be useful in the future.
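The original listing of the layout is not reproduced here; a D3D12 declaration of such a layout might look as follows. Apart from COLOR, which the text confirms, the semantic set, formats, and offsets are assumptions (this is Windows-only code requiring `<d3d12.h>`):

```cpp
#include <d3d12.h>  // Windows SDK

// Hypothetical shared vertex input layout: position, normal, UV, color,
// all interleaved in vertex buffer slot 0.
static const D3D12_INPUT_ELEMENT_DESC kInputLayout[] = {
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0,
      D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
    { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 12,
      D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,       0, 24,
      D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
    { "COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 32,
      D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
};
```

This array would be assigned to the InputLayout field of the pipeline state description when DX12RenderState builds its pipeline.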

The system supports both precompiled and runtime-compiled shaders. Built-in shaders have been compiled and added to the project, while “user shaders” are loaded and built from source code.

The root signature varies depending on instancing support.

For non-instanced render states, the root signature looks as follows:

```
SRV descriptor table (textures)
CBV (camera data)
CBV (object data)
CBV (shared data)
```

For instanced render states, the root signature replaces the object data CBuffer with a shader resource view for the StructuredBuffer containing the instance transforms.

For texture sampling, instead of adding samplers to the descriptor heap, a single static sampler has been defined and added to the root signature descriptor.


One of the assignment's requirements was to make use of different patterns commonly used in game development. These are some of the patterns used in this project:


The update of all framework systems is done in the function System::Run.

The function Run updates the game time, input and the scene. Finally, the FrameGraph is obtained from the scene and is sent to the backend for rendering.

The framework uses the class SystemTime, which calculates the time elapsed between frames (deltaTime) and makes it available to the rest of the system in order to get framerate-independent results.
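A sketch of a SystemTime-style helper built on std::chrono (the real class's interface is not shown in the text; this shape is an assumption):

```cpp
#include <cassert>
#include <chrono>

class SystemTime {
public:
    // Call once per frame; returns the seconds elapsed since the last call.
    float Tick() {
        auto now = Clock::now();
        std::chrono::duration<float> delta = now - last_;
        last_ = now;
        deltaTime_ = delta.count();
        return deltaTime_;
    }

    float DeltaTime() const { return deltaTime_; }

private:
    using Clock = std::chrono::steady_clock;  // monotonic, unaffected by clock changes
    Clock::time_point last_ = Clock::now();
    float deltaTime_ = 0.0f;
};
```

Movement code then scales by DeltaTime() (e.g. `position += speed * time.DeltaTime()`) to stay framerate-independent.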


The framework shares some similarities with Unity engine. One of them is the use of the Component pattern.

Each scene object contains a list of associated components, which are updated when the object is updated inside the function Scene::Update.

This pattern allows the reuse of components across different objects and contributes to code modularization.
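The update side of the pattern can be sketched as follows (the interfaces are assumptions; the point is that the scene walks each object's component list):

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Each component overrides Update; Scene::Update simply forwards the
// frame's deltaTime to every component of every object.
struct Component {
    virtual ~Component() = default;
    virtual void Update(float deltaTime) = 0;
};

// A reusable component: the same type can be attached to any object.
struct RotatorComponent : Component {
    float angle = 0.0f;
    float speed;  // radians per second
    explicit RotatorComponent(float s) : speed(s) {}
    void Update(float deltaTime) override { angle += speed * deltaTime; }
};

struct SceneObject {
    std::vector<std::unique_ptr<Component>> components;
    void Update(float deltaTime) {
        for (auto& c : components) c->Update(deltaTime);
    }
};
```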


A simple implementation of the Observer pattern has been done, which allows parts of the game to listen to the collisions or overlaps detected by the component LevelCollisionComponent.
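One common way to implement this is with a list of callbacks, as in this sketch (the listener signature is an assumption, not the project's actual API):

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// The collision component owns a list of listeners and notifies them
// whenever an overlap is detected.
class OverlapSubject {
public:
    using Listener = std::function<void(const std::string& other)>;

    void AddListener(Listener l) { listeners_.push_back(std::move(l)); }

    // Called by the collision component when an overlap is detected.
    void NotifyOverlap(const std::string& other) {
        for (auto& l : listeners_) l(other);
    }

private:
    std::vector<Listener> listeners_;
};
```

Game code can then react to the event, e.g. counting a key pickup or respawning the player when the overlapping object is an enemy.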


Finally, the extensively used Singleton pattern has been applied to facilitate access to the data of the classes System, Time, Keyboard, and Window.
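A typical C++ shape for such singletons is the Meyers singleton, sketched here with Keyboard as the example (the member functions are illustrative; the project's actual implementation is not shown):

```cpp
#include <cassert>

class Keyboard {
public:
    static Keyboard& Get() {
        static Keyboard instance;  // constructed once, on first use
        return instance;
    }

    // Non-copyable: there is exactly one keyboard state.
    Keyboard(const Keyboard&) = delete;
    Keyboard& operator=(const Keyboard&) = delete;

    // Illustrative state accessors.
    bool IsKeyDown(int key) const { return key == lastKey_; }
    void SetLastKey(int key) { lastKey_ = key; }

private:
    Keyboard() = default;
    int lastKey_ = -1;
};
```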

This project can be found in the following GitHub repository.