Voice User Interface Projects

By: Henry Lee

Overview of this book

From touchscreens and mouse clicks, we are moving to voice- and conversation-based user interfaces. By adopting Voice User Interfaces (VUIs), you can create a more compelling and engaging experience for your users. Voice User Interface Projects teaches you how to develop voice-enabled applications for desktop, mobile, and Internet of Things (IoT) devices. This book explains VUI and its importance, the basic design principles of VUI, the fundamentals of conversation, and the different voice-enabled applications available in the market. You will learn how to build your first voice-enabled application using Dialogflow and Alexa's natural language processing (NLP) platforms. Once you are comfortable building voice-enabled applications, you will learn how to dynamically process and respond to questions using a Node.js server deployed to the cloud. You will then move on to securing the Node.js RESTful API for Dialogflow and Alexa webhooks, creating unit tests, and building voice-enabled podcasts for cars. Last but not least, you will discover advanced topics such as handling sessions, creating custom intents, and extending built-in intents in order to build conversational VUIs that keep users engaged. By the end of the book, you will have a thorough knowledge of how to design and develop interactive VUIs.
Table of Contents (12 chapters)

Deploying Fortune Cookie to Google Home

One of the biggest advantages of working with Dialogflow is that the Fortune Cookie application you built for Google Assistant also works on Google Home with hardly any effort. Both Google Home and Google Assistant are integrated with Dialogflow, so the application will work on both devices. The one difference is that, unlike phones, Google Home devices cannot display visual elements. This is why, whenever you create a response to send back to Dialogflow from your server, you must always include displayText for devices that do not support visual elements.

The following code recaps a typical fulfillment response sent back to Dialogflow from the server, which contains SSML for the audio response and displayText as a fallback in case the device does not support...