Throughout this chapter, we have covered the Emotion API and the Video API. We started by enabling the smart-house application to recognize what kind of mood you are in. Following this, we dived into the Video API, where we learned how to detect and track faces, detect motion, stabilize videos, and generate intelligent video thumbnails. To end the chapter, we returned to the Emotion API, where we learned how to perform emotion analysis on videos.
In the following chapter, we move away from the Vision APIs and into the first Language API. We will learn how to detect the intent of sentences, using the power of the Language Understanding Intelligent Service (LUIS).