Initializing the Direct3D API


Direct3D is a rendering API that allows you to write your game without having to worry about which graphics card or driver the user may have. By separating this concern through the Component Object Model (COM) system, you can write your code once and have it run on hardware from NVIDIA or AMD.

Now let's take a look at Direct3DBase.cpp, home of the class that we inherited from earlier. This is where DirectX is set up and prepared for use. There are a few objects that we need to create here to ensure we have everything required to start drawing.

Graphics device

The first is the graphics device, represented by an ID3D11Device object. The device represents the link to a single adapter, usually the physical graphics card. It is primarily used to create resources such as textures and shaders, and it is what we use to obtain the device context and swap chain.

Direct3D 11.1 also supports feature levels to accommodate older graphics cards that may only offer Direct3D 9.0 or Direct3D 10.0 capabilities. When you create the device, you specify a list of feature levels that your game will support, and DirectX handles all of the checks to ensure you receive the highest feature level that both your list and the graphics card support.

You can find the code that creates the graphics device in the Direct3DBase::CreateDeviceResources() method. Here we allow all possible feature levels, which lets our game run on older and weaker devices. The key thing to remember is that if you want to use any graphics features introduced after Direct3D 9.0, you will need to either remove the older feature levels from the list or check which level you actually received and avoid using unsupported features on it.
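For reference, the feature level list might look something like the following sketch (the real array lives in CreateDeviceResources()):

// Feature levels to try, ordered from most to least capable. DirectX
// walks this list and grants the highest level the hardware supports.
const D3D_FEATURE_LEVEL featureLevels[] =
{
  D3D_FEATURE_LEVEL_11_1,
  D3D_FEATURE_LEVEL_11_0,
  D3D_FEATURE_LEVEL_10_1,
  D3D_FEATURE_LEVEL_10_0,
  D3D_FEATURE_LEVEL_9_3,
  D3D_FEATURE_LEVEL_9_2,
  D3D_FEATURE_LEVEL_9_1
};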

Once we have a list of feature levels, we just need a single call to the D3D11CreateDevice() function, which will provide us with the device and the immediate device context.

Note

nullptr is a new C++11 keyword that gives us a strongly typed null pointer. Previously, NULL was just an alias for zero, which prevented the compiler from supporting us with extra error checking.

ComPtr<ID3D11Device> device;
ComPtr<ID3D11DeviceContext> context;

DX::ThrowIfFailed(
  D3D11CreateDevice(
    nullptr,                           // Use the default adapter.
    D3D_DRIVER_TYPE_HARDWARE,          // A hardware-accelerated device.
    nullptr,                           // No software rasterizer module.
    D3D11_CREATE_DEVICE_BGRA_SUPPORT,  // BGRA support (see the Swap chain section).
    featureLevels,                     // The feature levels we accept.
    ARRAYSIZE(featureLevels),
    D3D11_SDK_VERSION,
    &device,                           // Receives the device.
    &m_featureLevel,                   // Receives the granted feature level.
    &context                           // Receives the immediate context.
    ));

Most of this is pretty simple: we request a hardware device with BGRA format layout support (see the Swap chain section for more details on texture formats) and provide a list of feature levels that we can support. The magic of COM and Direct3D will provide us with an ID3D11Device and ID3D11DeviceContext that we can use for rendering.

Device context

The device context is probably the most useful item that you're going to create. This is where you will issue all draw calls and state changes to the graphics hardware. The device context works together with the graphics device to provide 99 percent of the commands you need to use Direct3D.

By default we get an immediate context along with our graphics device. One of the main benefits provided by the context system is the ability to issue commands from worker threads using deferred contexts. Commands recorded on a deferred context are captured in a command list, which is then executed on the immediate context from a single thread, allowing for multithreaded rendering with an API that is otherwise not thread-safe.

The immediate ID3D11DeviceContext comes from the same D3D11CreateDevice() call that we used to create the device; we simply pass a pointer as the final argument.

Deferred contexts are generally considered an advanced technique and are outside the scope of this book; however, if you're looking to take full advantage of modern hardware, you will want to take a look at this topic to ensure that you can work with the GPU without limiting yourself to a single CPU core.
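For a taste of what that flow looks like, here is a rough sketch, assuming the m_d3dDevice and m_d3dContext members from our template:

// On startup: create a deferred context for a worker thread to use.
ComPtr<ID3D11DeviceContext> deferredContext;
DX::ThrowIfFailed(
  m_d3dDevice->CreateDeferredContext(0, &deferredContext));

// On the worker thread: record draw calls and state changes as usual,
// then close the recording into a command list.
ComPtr<ID3D11CommandList> commandList;
DX::ThrowIfFailed(
  deferredContext->FinishCommandList(FALSE, &commandList));

// Back on the rendering thread: replay the recorded commands through
// the immediate context.
m_d3dContext->ExecuteCommandList(commandList.Get(), FALSE);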

If you're unsure which object to use when rendering, remember that the device is about creating resources throughout the lifetime of the application, while the device context does the work of applying those resources and producing the images that are displayed to the user.

Swap chain

Working with Direct3D exposes you to a number of asynchronous devices, all operating at different rates independent of each other. If you drew to the same texture buffer that the monitor used to display to the screen, you would see the monitor display a half-drawn image as it refreshes while you're still drawing. This is commonly known as screen tearing.

To get around this, the concept of a swap chain was created. A swap chain is a series of textures that the monitor can iterate through, giving you time to draw the frame before the monitor needs to display it. Often, this is accomplished with just two texture buffers, known as the front buffer and the back buffer. The front buffer is what the monitor displays while you draw to the back buffer. When you're finished rendering, the buffers are swapped so that the monitor can display the new frame and Direct3D can begin drawing the next frame; this is known as double buffering.
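For context (this happens at the end of every frame rather than during initialization), the swap itself is triggered when the game presents the frame. A minimal sketch, assuming the m_swapChain member that we create later in this section:

// Present the back buffer. A sync interval of 1 waits for one vertical
// blank before swapping, which avoids tearing.
DX::ThrowIfFailed(m_swapChain->Present(1, 0));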

Sometimes two buffers are not enough; for example, when the monitor is still displaying the previous frame and Direct3D is ready to swap the buffers. This means that the API needs to wait for the monitor to finish displaying the previous frame before it can swap the buffers and let your game continue.

Alternatively, the API may discard the content that was just rendered, allowing the game to continue, but wasting a frame and causing the front buffer to repeat if the monitor wants to refresh while the game is drawing. This is where three buffers can come in handy, allowing the game to continue working and render ahead.

(Figure: Double buffering)

The swap chain is also tied directly to the resolution, and if the display area is resized, the swap chain needs to be updated to match. You may think that games are all full screen now and should never need to be resized! Remember, though, that the snap functionality in Windows 8 resizes the screen, requiring your game to fit the available space. Alongside the creation code, we need a way to respond to window resize events so that we can resize the swap chain and handle the change gracefully.

All of this happens inside Direct3DBase::CreateWindowSizeDependentResources(). Here we check whether a resize has occurred, along with some orientation-handling checks so that the game can handle rotation if it is enabled. The really important code in this method executes when we do not already have a swap chain and must create one from scratch; however, we want to avoid work we don't need to do, and one of the benefits of Direct3D 11.1 is that an existing swap chain can simply have its buffers resized.
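When the swap chain already exists, that resize is a single call. A minimal sketch, assuming the buffer count and format we choose below:

// Resize the existing swap chain's buffers to match the new window size,
// keeping the same buffer count and pixel format.
DX::ThrowIfFailed(
  m_swapChain->ResizeBuffers(
    2,                                            // Two buffers (double buffering).
    static_cast<UINT>(m_renderTargetSize.Width),  // New width.
    static_cast<UINT>(m_renderTargetSize.Height), // New height.
    DXGI_FORMAT_B8G8R8A8_UNORM,                   // Keep the same format.
    0                                             // No flags.
    ));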

Many parts of Direct3D rely on filling in a description structure that contains the information required to create an object, and then passing that structure to a Create() method that handles the rest of the creation. In this case, we make use of a DXGI_SWAP_CHAIN_DESC1 structure to describe the swap chain. The following code snippet shows what our structure will look like:

DXGI_SWAP_CHAIN_DESC1 swapChainDesc = {0};

swapChainDesc.Width = static_cast<UINT>(m_renderTargetSize.Width);
swapChainDesc.Height = static_cast<UINT>(m_renderTargetSize.Height);
swapChainDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;  // Pixel layout (see below).
swapChainDesc.Stereo = false;                       // No stereoscopic rendering.
swapChainDesc.SampleDesc.Count = 1;                 // Disable MSAA.
swapChainDesc.SampleDesc.Quality = 0;
swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc.BufferCount = 2;                      // Double buffering.
swapChainDesc.Scaling = DXGI_SCALING_NONE;
swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;

There are a lot of new concepts here, so let's work through each option one by one.

The Width and Height properties are self-explanatory; these come directly from the CoreWindow instance so that we can render at native resolution. If you want to force a different resolution, this would be where you specify that resolution.

The Format defines the layout of the pixels in the texture. Textures are represented as an array of colors, which can be packed in many different ways. The most common way is to lay out the different color channels in a B8G8R8A8 format. This means that each pixel will have a single byte for each channel: Blue, Green, Red, and Alpha, in that order. The UNORM suffix tells the system to store each channel as an unsigned normalized integer. Put together, this forms DXGI_FORMAT_B8G8R8A8_UNORM.

(Figure: The BGRA pixel layout)

The R8G8B8A8 pixel layout is also commonly used; however, both are well supported, so you can choose either.

The next flag, Stereo, tells the API whether you want to take advantage of the stereoscopic rendering support in Direct3D 11.1. This is an advanced topic that we won't cover, so leave this as false for now.

The SampleDesc substructure describes our multisampling settings. Multisample antialiasing (MSAA) is a technique used to reduce the sharp, jagged edges that appear on polygons when a line maps to pixels it only partially covers. MSAA resolves this by taking multiple samples within each pixel and filtering those values to produce an average that represents detail smaller than a pixel. With antialiasing you get nice smooth edges, at the cost of extra rendering and filtering. For our purposes, we specify a count of one and a quality of zero, which tells the API to disable MSAA.

The BufferUsage enumeration tells the API how we plan to use the swap chain, which lets it make performance optimizations. This is most commonly used when creating normal textures, and should be left alone for now.

The Scaling parameter defines how the back buffer will be scaled if the texture resolution does not match the resolution that the operating system is providing to the monitor. You only have two options here: DXGI_SCALING_STRETCH and DXGI_SCALING_NONE.

The SwapEffect describes what happens when a swap between the front buffer and the back buffer(s) occurs. We're building a Windows Store application, so we only have one option here: DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL. If we were building a desktop application, we would have a larger selection, and our final choice would depend on our performance and hardware requirements.

Now you may be wondering what DXGI is, and why we have been using it during the swap chain creation. Beginning with Windows Vista and Direct3D 10.0, the DirectX Graphics Infrastructure (DXGI) acts as an intermediary between the Direct3D API and the graphics driver. It manages the adapters and common graphics resources, and works with the Desktop Window Manager, which composites multiple Direct3D applications so that multiple windows can share the same screen. DXGI manages the screen, and therefore manages the swap chain as well; that is why we have an ID3D11Device and an IDXGISwapChain object.

Once we're done, we need to use this structure to create the swap chain. You may remember that the graphics device creates resources, and that includes the swap chain. The swap chain, however, is a DXGI resource rather than a Direct3D resource, so we first need to extract the DXGI device from the Direct3D device before we can continue. Thankfully, the Direct3D device is layered on top of the DXGI device, so we just need to convert the ID3D11Device1 to an IDXGIDevice1 with the following piece of code:

ComPtr<IDXGIDevice1> dxgiDevice;
DX::ThrowIfFailed(m_d3dDevice.As(&dxgiDevice));

Then we can get the adapter that the device is linked to, and the factory that serves the adapter, with the following code snippet:

ComPtr<IDXGIAdapter> dxgiAdapter;
DX::ThrowIfFailed(
  dxgiDevice->GetAdapter(&dxgiAdapter));

ComPtr<IDXGIFactory2> dxgiFactory;
DX::ThrowIfFailed(
  dxgiAdapter->GetParent(
    __uuidof(IDXGIFactory2), 
    &dxgiFactory
    ));

Using the IDXGIFactory2, we can create the IDXGISwapChain that is tied to the adapter.

Note

The 1 and 2 at the end of IDXGIDevice1 and IDXGIFactory2 differentiate between the versions of the Direct3D and DXGI interfaces. Direct3D 11.1 is an add-on to Direct3D 11.0, so we need a way to distinguish the versions; the same goes for DXGI, which has gone through multiple versions since Vista.

DX::ThrowIfFailed(
  dxgiFactory->CreateSwapChainForCoreWindow(
    m_d3dDevice.Get(),                    // The device the swap chain serves.
    reinterpret_cast<IUnknown*>(window),  // The CoreWindow to present into.
    &swapChainDesc,                       // The description we filled in above.
    nullptr,                              // Don't restrict to a particular output.
    &m_swapChain                          // Receives the swap chain.
    ));

When we create the swap chain, we need to use a method specific to Windows Store applications, which takes as a parameter the CoreWindow instance that we received when we created the application. This is where you would pass an HWND window handle if you were using the old Win32 API. These handles let Direct3D connect the resource to the correct window and ensure that it is positioned properly when composited with the other windows on the screen.

Now we have a swap chain, almost ready for rendering. While we still have the DXGI device, we can also let it know that we want to enable a power-saving mode that ensures only one frame is queued up for display at a time.

dxgiDevice->SetMaximumFrameLatency(1);

This is especially important in Windows Store applications, as your game may be running on a mobile device, and your players wouldn't want to drain the battery rendering frames that they will never see.

Render target, depth stencil, and viewport

The next step is to get a reference to the back buffer in the swap chain, so that we can make use of it later on. First, we need to get the back buffer texture from the swap chain, which can be easily done with a call to the GetBuffer() method. This will give us a pointer to a texture buffer, which we can use to create a render target view, as follows:

ComPtr<ID3D11Texture2D> backBuffer;
DX::ThrowIfFailed(
  m_swapChain->GetBuffer(
    0,                          // Index of the buffer to retrieve.
    __uuidof(ID3D11Texture2D),  // The interface we want back.
    &backBuffer
    ));

Direct3D 10 and later versions provide access to the different graphics resources using constructs called views. These let us tell the API how to use the resource, and provide a way of accessing the resources after creation.

In the following code snippet we are creating a render target view (ID3D11RenderTargetView), which, as the name implies, provides a view into a render target. If you haven't encountered the term before, a render target is a texture that you can draw into, for use later. This allows us to draw to off-screen textures, which we can then use in many different ways to create the final rendered frame.

DX::ThrowIfFailed(
  m_d3dDevice->CreateRenderTargetView(
    backBuffer.Get(),
    nullptr,              // Use the default view description.
    &m_renderTargetView
    ));

Now that we have a render target view, we could tell the graphics context to use it as the back buffer and start drawing; but while we're initializing our graphics, let's also create a depth buffer texture and view so that we can have some depth in our game.

A depth buffer is a special texture that is responsible for storing the depth of each pixel on the screen. This can be used by the GPU to quickly cull pixels that are hidden by other objects. Being able to avoid drawing objects that we cannot see is important, as drawing those objects still takes time, even though they do not contribute to the scene. Previously, I mentioned that we need to draw a frame in a small amount of time to achieve certain frame rates.

In complex games, this can be difficult to achieve if we are drawing everything, so culling is important to ensure that we can achieve the performance we want.

The depth buffer is an optional feature that isn't automatically generated with the swap chain, so we need to create it ourselves. To do this, we need to describe the texture we want to create with a D3D11_TEXTURE2D_DESC structure. Direct3D 11.1 provides a nice helper structure in the form of a CD3D11_TEXTURE2D_DESC that handles filling in common values for us, as follows:

CD3D11_TEXTURE2D_DESC depthStencilDesc(
  DXGI_FORMAT_D24_UNORM_S8_UINT,                // 24-bit depth, 8-bit stencil.
  static_cast<UINT>(m_renderTargetSize.Width),  // Match the swap chain width.
  static_cast<UINT>(m_renderTargetSize.Height), // Match the swap chain height.
  1,                                            // Array size: a single texture.
  1,                                            // Mip levels: original resolution only.
  D3D11_BIND_DEPTH_STENCIL                      // Bind as a depth stencil.
  );

Here we're asking for a texture with a pixel format of 24 bits for the depth, stored as an unsigned normalized integer, and 8 bits for the stencil, stored as an unsigned integer. The stencil part of this buffer is an advanced feature that lets you assign a value to pixels in the texture; this is most often used for creating a mask, with support for rendering only to regions that have a specific stencil value.

After this, we set the width and height to match the swap chain, and fill in the Array Size and Mip Levels so that we can reach the parameter that lets us describe the usage of the texture. The Array Size refers to the number of textures to create: if you want an array of textures combined into a single resource, you can use this parameter to specify the count. We only want one texture, so we set this to 1.

Mip Levels are increasingly smaller copies of the main texture. They allow for performance optimizations when rendering the texture at a distance, where the original resolution would be overkill. For example, say you have a texture that is 800 x 600; if you ask for three Mip Levels, you will receive an 800 x 600 texture, a 400 x 300 texture, and a 200 x 150 texture. The graphics card has hardware to filter between these and select the correct one, reducing the wasted work involved in rendering. Our depth buffer will never be rendered at a distance, so we don't need to use up extra memory providing different Mip Levels; we will just set this to 1 to say that we only want the original resolution texture.

Finally, we will tell the structure that we want this texture to be bound as a depth stencil. This lets the driver make optimizations to ensure that this special texture can be quickly accessed where needed. We round this out by creating the texture using the following description structure:

ComPtr<ID3D11Texture2D> depthStencil;
DX::ThrowIfFailed(
  m_d3dDevice->CreateTexture2D(
    &depthStencilDesc,
    nullptr,        // No initial data; the GPU writes the depth values.
    &depthStencil
    ));

Now that we have a depth buffer texture, we need a depth stencil view (ID3D11DepthStencilView) to bind it, as with our render target earlier. We use another helper structure for this (CD3D11_DEPTH_STENCIL_VIEW_DESC); however, we can get away with a single constructor parameter, the type of the texture, which in this case is D3D11_DSV_DIMENSION_TEXTURE2D. We can then create the view, ready for use, as shown:

CD3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc(D3D11_DSV_DIMENSION_TEXTURE2D);

DX::ThrowIfFailed(
  m_d3dDevice->CreateDepthStencilView(
    depthStencil.Get(),
    &depthStencilViewDesc,
    &m_depthStencilView
    ));

Now that we have a device, context, swap chain, render target, and depth buffer, we just need to describe one more thing before we're ready to kick off the game loop. The viewport describes the layout of the area we want to render into. In most cases, you will just define the full size of the render target here; however, some situations may require drawing to just a small section of the screen, perhaps for a split-screen mode. The viewport lets you define a region and render as normal into it, and then define a new viewport for another region so that you can render into that one as well.

To create this viewport, we just need to specify the x and y coordinates of the top-left corner, as well as the width and height of the viewport. We want to use the entire screen, so our top-left corner is x = 0, y = 0, and our width and height match our render target, as shown:

CD3D11_VIEWPORT viewport(
  0.0f,
  0.0f,
  m_renderTargetSize.Width,
  m_renderTargetSize.Height
  );

m_d3dContext->RSSetViewports(1, &viewport);

Now that we have created everything, we just need to finish up by setting the render target view and depth stencil view so that the API knows to use them. This is done with a single call to the m_d3dContext->OMSetRenderTargets() method, passing the number of render targets, a pointer to the first render target view pointer, and a pointer to the depth stencil view.
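A minimal sketch of that final call, using the member names from our template:

// Bind one render target and the depth stencil view to the output merger
// stage so that subsequent draw calls render into them.
m_d3dContext->OMSetRenderTargets(
  1,                                  // Number of render targets.
  m_renderTargetView.GetAddressOf(),  // Pointer to the first view pointer.
  m_depthStencilView.Get()            // The depth stencil view.
  );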