Final thoughts on Profiling and Analysis


One way to think about performance optimization is as the act of stripping away unnecessary tasks that consume valuable resources. We can do the same for our own productivity by minimizing wasted effort. Effective use of the tools at our disposal is of paramount importance, so it serves us well to optimize our workflow by keeping some best practices and techniques in mind.

Most, if not all, advice for using any kind of data-gathering tool properly can be summarized into three different strategies:

  • Understanding the tool
  • Reducing noise
  • Focusing on the issue

Understanding the Profiler

The Profiler is arguably a well-designed and intuitive tool, so a solid understanding of most of its feature set can be gained simply by spending an hour or two exploring its options in a test project and reading its documentation. The more we know about a tool in terms of its benefits, pitfalls, features, and limitations, the more sense we can make of the information it gives us, so it is worth spending the time to use it in a playground setting. We don't want to find ourselves two weeks from release, with a hundred performance defects to fix and no idea how to perform the analysis efficiently.
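
A quick way to start that exploration in a playground project is to instrument a deliberately wasteful piece of code with the Profiler's sampling API and watch where it appears in the CPU Usage Area's Hierarchy view. The following is a minimal sketch; the class and sample names are arbitrary examples, not part of any existing project:

using UnityEngine;
using UnityEngine.Profiling;

public class ProfilerPlayground : MonoBehaviour
{
    void Update()
    {
        // Wrap a deliberately wasteful block in a named sample so that it
        // shows up under "Playground.BusyWork" in the Hierarchy view.
        Profiler.BeginSample("Playground.BusyWork");
        float total = 0f;
        for (int i = 0; i < 100000; i++)
        {
            total += Mathf.Sqrt(i);
        }
        Profiler.EndSample();

        // Keep the compiler from optimizing the loop away entirely.
        if (total < 0f) { Debug.Log(total); }
    }
}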

For example, always remain aware of the relative nature of the Timeline View's graphical display. The Timeline View provides no values on its vertical axis and automatically rescales that axis based on the content of the last 300 frames, which can make small spikes appear to be a bigger problem than they really are. So, just because a spike or resting state in the timeline looks large and threatening does not necessarily mean there is a performance issue.

Several Areas in the Timeline View provide helpful benchmark bars, which appear as horizontal lines with a timing and FPS value attached to them. These should be used to judge the magnitude of a problem; we shouldn't let the Profiler trick us into thinking that big spikes are always bad. As always, a spike only matters if the user will notice it.

As an example, if a large CPU usage spike does not exceed the 60 FPS or 30 FPS benchmark bar (depending on the application's target frame rate), then it is usually wise to ignore it and search elsewhere for CPU performance issues; no matter how much we improve the offending piece of code, the change will probably never be noticed by the end user, so it isn't a critical issue affecting the user experience.
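
To make this concrete, here is a rough sketch of checking frame times against a target budget in code, mirroring what the benchmark bars show visually. The class name and the default target of 60 FPS are assumptions to adjust for your own project:

using UnityEngine;

public class FrameBudgetWatcher : MonoBehaviour
{
    // Assumed target; set this to the application's actual frame rate goal
    // (for example, 30 for a 30 FPS target).
    public float targetFps = 60f;

    void Update()
    {
        float budgetMs = 1000f / targetFps;
        float frameMs = Time.unscaledDeltaTime * 1000f;

        // Only frames that exceed the budget are worth investigating;
        // spikes below this threshold will likely never be noticed by the user.
        if (frameMs > budgetMs)
        {
            Debug.LogWarningFormat("Frame took {0:F2} ms (budget {1:F2} ms)", frameMs, budgetMs);
        }
    }
}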

Reducing noise

The classical definition of noise (at least in the realm of computer science) is meaningless data, and a batch of profiling data captured blindly with no specific target in mind is always full of data that doesn't interest us. More sources of data take more time to mentally process and filter, which can be very distracting. One of the best ways to avoid this is simply to reduce the amount of data we need to process by stripping away anything deemed non-vital to the current investigation.

Reducing clutter in the Profiler's graphical interface will make it easier to determine which subsystems are causing a spike in resource usage. Remember to use the colored checkboxes in each Timeline View Area to narrow the search.

Note

Be warned that these settings are auto-saved in the Editor, so remember to re-enable them for the next profiling session; otherwise, we might miss something important next time.

Also, GameObjects can be deactivated to prevent them from generating profiling data, which further reduces clutter. This will naturally cause a slight performance boost for each object we deactivate; however, if we're gradually deactivating objects and performance suddenly becomes significantly more acceptable once a specific object is deactivated, then that object is clearly related to the root cause of the problem.
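
As a rough illustration of that process, the following hypothetical helper deactivates a list of suspect GameObjects one at a time while the game runs under the Profiler. The key binding, class, and field names are purely illustrative, not an established workflow:

using UnityEngine;

public class SuspectToggler : MonoBehaviour
{
    // Assign the GameObjects suspected of causing the spike in the Inspector.
    public GameObject[] suspects;

    private int _next = 0;

    void Update()
    {
        // Each press of the D key deactivates the next suspect; watch the
        // Profiler to see whether the spike disappears along with it.
        if (Input.GetKeyDown(KeyCode.D) && _next < suspects.Length)
        {
            suspects[_next].SetActive(false);
            Debug.LogFormat("Deactivated {0}", suspects[_next].name);
            _next++;
        }
    }
}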

Focusing on the issue

This category may seem redundant, given that we've already covered reducing noise. All we should have left is the issue at hand, right? Not exactly. Focus is the skill of not letting ourselves become distracted by inconsequential tasks and wild goose chases.

Recall that profiling with the Unity Profiler comes with a minor performance cost, a cost that becomes far more severe when the Deep Profiling option is enabled. We might introduce further minor costs into our application with additional logging, and it's easy to forget when and where we introduced such profiling code if the hunt continues for several hours.

We are effectively changing the result by measuring it. Any changes we implement during data sampling can sometimes lead us to chase after non-existent bugs in the application, when we could have saved ourselves a lot of time by attempting to replicate the scenario without additional profiling instrumentation. If the bottleneck is reproducible and noticeable without profiling, then it's a candidate to begin an investigation. However, if new bottlenecks keep appearing in the middle of an existing investigation, then keep in mind that they could be bottlenecks we introduced with our test code and not an existing problem that's been newly exposed.
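
One way to keep such temporary instrumentation under control is to gate it behind a custom scripting define symbol so that it can all be stripped in a single step. This is a sketch of that idea, not a Unity-provided facility; the symbol name CUSTOM_PROFILING and the helper class below are hypothetical:

using System.Diagnostics;
using Debug = UnityEngine.Debug;

public static class ProfilingLog
{
    // When CUSTOM_PROFILING is not defined (via Scripting Define Symbols in
    // Player Settings), the compiler removes every call to this method, so
    // none of our temporary logging cost leaks into normal builds.
    [Conditional("CUSTOM_PROFILING")]
    public static void Log(string message)
    {
        Debug.Log("[PROFILING] " + message);
    }
}

A call such as ProfilingLog.Log("Entered expensive path") then becomes a no-op as soon as the define is removed, and searching for the helper's name makes it easy to find every place we instrumented once the investigation is over.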

Finally, when we have finished profiling, completed our fixes, and are now ready to move on to the next investigation, we should make sure to profile the application one last time to verify that the changes have had the intended effect.