
Game Audio Development with Unity 5.X

By : Micheal Lanham

Overview of this book

Game audio is a key component of a successful game, and audio skills are in high demand across the gaming industry. If you are a game developer with an eye on capturing the gamer market, this book is for you. It takes you on a step-by-step journey through implementing original, engaging soundtracks and SFX with Unity 5.x. You will first be introduced to the basics of game audio and sound development in Unity. After working through the core topics of audio development (audio sources, spatial sound, mixing, effects, and more), you will have the option of delving into more advanced topics such as dynamic and adaptive audio, which you will learn to develop using the Unity Audio Mixer. You will also see how professional third-party tools such as FMOD are used for audio development in Unity. From there, you will explore sound visualization techniques and create your own original music with the simple yet powerful audio workstation Reaper. Finally, you will pick up tips, techniques, and strategies to help you optimize game audio performance and troubleshoot issues. By the end of the book, you will have gained the skills to implement professional sound and music, along with a solid base knowledge of audio and music principles that you can apply across a range of other game development tools.
Table of Contents (21 chapters)
Title Page
Credits
About the Author
About the Reviewer
Acknowledgments
www.PacktPub.com
Customer Feedback
Dedication
Foreword
Preface

Summary


In this chapter, we extended the audio visualization capabilities we developed in the last chapter into real-time character lip syncing. We first put together a novel lip sync example using just our basic audio visualizer. From there, we looked at how to make our lip syncing more realistic and natural by understanding speech phonemes, and then ran through an exercise in classifying speech phonemes using our audio visualizer. This led to the need to animate the character's jaw and facial muscles using a combination of bone and blend shape animation, and we spent time modeling each of the base speech phonemes. With all that knowledge in place, we put everything together and tested the real-time lip syncing. Finally, we looked at a component that allows us to use the microphone to record directly into Unity, something we will certainly find useful.
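As a brief recap of the microphone idea mentioned above, the sketch below shows one minimal way to capture microphone input into an AudioClip and play it back using Unity's built-in Microphone and AudioSource classes. This is an illustrative sketch, not the exact component from the chapter; the class name, field names, and default values are assumptions.

```csharp
using UnityEngine;

// Hypothetical helper: records from the default microphone device into
// an AudioClip and plays it back through the attached AudioSource.
[RequireComponent(typeof(AudioSource))]
public class MicRecorder : MonoBehaviour
{
    public int sampleRate = 44100;   // capture frequency in Hz (assumed default)
    public int maxLengthSec = 10;    // maximum recording length in seconds

    private AudioSource source;

    void Awake()
    {
        source = GetComponent<AudioSource>();
    }

    public void StartRecording()
    {
        // Passing null selects the default microphone device;
        // the returned AudioClip fills with samples as capture runs.
        source.clip = Microphone.Start(null, false, maxLengthSec, sampleRate);
    }

    public void StopAndPlay()
    {
        Microphone.End(null);        // stop capturing on the default device
        source.Play();               // play back the recorded clip
    }
}
```

In practice you would call StartRecording() and StopAndPlay() from UI buttons or key presses, and the recorded clip could then be fed through the same visualizer and lip sync pipeline as any other AudioSource.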

In the next chapter, we enter a new more fundamental area of audio development...