Hands-On Chatbots and Conversational UI Development

By: Srini Janarthanam

Overview of this book

Conversation as an interface is the best way for machines to interact with us, using language, the universally accepted human tool. Chatbots and voice user interfaces are two flavors of conversational UIs. Chatbots are real-time, data-driven answer engines that talk in natural language and are context-aware. Voice user interfaces understand users' speech and respond in kind. This book covers both types of conversational UIs by leveraging APIs from multiple platforms. We'll take a project-based approach to understand how these UIs are built and the best use cases for deploying them. We'll start by building a simple messaging bot using the Facebook Messenger API to understand the basics of bot building. Then we move on to creating a Task model chatbot that can perform complex tasks such as ordering and planning events, using Dialogflow (recently acquired by Google) and the Microsoft Bot Framework. We then turn to voice-enabled UIs that interact with users through speech, with Amazon Alexa and Google Home. By the end of the book, you will have created your own line of chatbots and voice UIs for multiple leading platforms.

Implementing the chatbot

Now that we have the backend tasks ready, let's focus on the chatbot itself. In general, the chatbot will take the user's utterances as input and respond with utterances of its own. However, since we are building a chatbot for Facebook Messenger, our chatbot will mostly take input in the form of button presses and respond using both utterances and visually appealing cards.
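To make the two kinds of input concrete, the sketch below shows how incoming Messenger webhook events could be routed to the chatbot, separating typed utterances from button presses (postbacks). The field names follow the Messenger Platform's webhook format, but the WebhookDispatcher class and its handler methods are hypothetical names used purely for illustration.

import org.json.JSONArray;
import org.json.JSONObject;

// Hypothetical dispatcher that routes Messenger webhook events to the chatbot.
public class WebhookDispatcher {

    public void dispatch(String requestBody) {
        JSONObject payload = new JSONObject(requestBody);
        JSONArray entries = payload.getJSONArray("entry");

        for (int i = 0; i < entries.length(); i++) {
            JSONArray events = entries.getJSONObject(i).getJSONArray("messaging");

            for (int j = 0; j < events.length(); j++) {
                JSONObject event = events.getJSONObject(j);
                String senderId = event.getJSONObject("sender").getString("id");

                if (event.has("postback")) {
                    // Button press: the payload string tells us which button was tapped.
                    onButtonPress(senderId, event.getJSONObject("postback").getString("payload"));
                } else if (event.has("message") && event.getJSONObject("message").has("text")) {
                    // Free-text utterance typed by the user.
                    onUtterance(senderId, event.getJSONObject("message").getString("text"));
                }
            }
        }
    }

    private void onButtonPress(String senderId, String payload) {
        // Hand off to the chatbot's button-press handling.
    }

    private void onUtterance(String senderId, String text) {
        // Hand off to the chatbot's utterance handling.
    }
}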

Let's start by implementing the Chatbot.java class. We will begin by working out an algorithm to process and respond to users' utterances:

  1. Process user input.
  2. Update context.
  3. Identify bot intent.
  4. Generate bot utterance and output structure.
  5. Respond.

This is a very simple algorithm to start with. First, the user input, in the form of utterances or button presses, is processed. Then the context of the conversation is updated. In the next step, we identify what...
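To see how these five steps might hang together in code, here is a minimal sketch of Chatbot.java. The intents, the context map, and the canned replies are simplifications assumed for illustration; they stand in for the richer versions a full implementation would need.

import java.util.HashMap;
import java.util.Map;

// A minimal sketch of Chatbot.java following the five-step algorithm above.
public class Chatbot {

    // Hypothetical intents the bot can act on.
    private enum BotIntent { GREET, FALLBACK }

    // Step 2 updates per-conversation state kept in this context map.
    private final Map<String, String> context = new HashMap<>();

    public String respond(String userInput) {
        // 1. Process user input (a typed utterance or a button-press payload).
        String processed = userInput.trim().toLowerCase();

        // 2. Update the conversational context with what the user just said.
        context.put("lastUserInput", processed);

        // 3. Identify the bot intent, that is, what the bot should do next.
        BotIntent intent = identifyBotIntent(processed);

        // 4. Generate the bot utterance; a full implementation would also build
        //    the output structure, such as a visually rich Messenger card.
        String utterance = generateUtterance(intent);

        // 5. Respond: the caller sends this back via the Messenger Send API.
        return utterance;
    }

    private BotIntent identifyBotIntent(String input) {
        // Simple rule-based intent selection; a real bot would do much more here.
        return input.contains("hi") || input.contains("hello")
                ? BotIntent.GREET
                : BotIntent.FALLBACK;
    }

    private String generateUtterance(BotIntent intent) {
        return intent == BotIntent.GREET
                ? "Hi! How can I help you today?"
                : "Sorry, I didn't get that.";
    }
}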