Learning Microsoft Cognitive Services - Second Edition

By: Leif Larsen
Overview of this book

Microsoft has revamped its Project Oxford to launch the all-new Cognitive Services platform: a set of 30 APIs that add speech, vision, language, and knowledge capabilities to apps. This book introduces you to 24 of the APIs released as part of the Cognitive Services platform and shows you how to leverage their capabilities. More importantly, you'll see how the power of these APIs can be combined to build real-world apps with cognitive capabilities. The book is split into three sections: computer vision; speech recognition and language processing; and knowledge and search. You are taken through the vision APIs first, as they are highly visual and not too complex. The next part revolves around speech and language, which are closely connected. The last part is about adding real-world intelligence to apps by connecting them to the Knowledge and Search APIs. By the end of this book, you will understand what Microsoft Cognitive Services can offer and how to use the different APIs.
Analyzing emotions in videos


Earlier, we looked at analyzing emotions in images. We can do the same analysis with videos as well.

To be able to do this, we can modify the existing example for the Video API.

Start by adding the Microsoft.ProjectOxford.Emotion NuGet package to the client project.

Next, we add Emotion to the AvailableOperations enum. In the VideoOperations class, add a new case for this value to CreateVideoOperationSettings and return null, as we do not need any video-operation settings for emotion analysis.
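The two changes above might look something like the following sketch. Note that the existing enum members and the exact shape of CreateVideoOperationSettings are assumptions based on the example app built in earlier chapters; only the Emotion value and its case are new:

```csharp
// Assumed existing enum from the Video API example; add the Emotion value.
public enum AvailableOperations
{
    FaceDetection,      // existing members are illustrative
    MotionDetection,
    Stabilization,
    Emotion,            // new: video emotion analysis
}

// Inside the VideoOperations class (sketch):
private VideoOperationSettings CreateVideoOperationSettings(AvailableOperations operation)
{
    switch (operation)
    {
        // ...existing cases for the other operations...
        case AvailableOperations.Emotion:
            // Emotion analysis is handled by the Emotion API client,
            // so no video-operation settings are required.
            return null;
        default:
            return null;
    }
}
```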

Add a private member to VideoOperations:

    private EmotionServiceClient _emotionServiceClient; 

Initialize this in the constructor, using the API key you registered earlier.
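A minimal sketch of that initialization, assuming the constructor already exists and using a placeholder for the API key (do not hard-code real keys in production code):

```csharp
public VideoOperations()
{
    // ...existing initialization for the Video API client...

    // "YOUR_EMOTION_API_KEY" is a placeholder for the Emotion API key
    // you registered earlier.
    _emotionServiceClient = new EmotionServiceClient("YOUR_EMOTION_API_KEY");
}
```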

In VideoOperationResultEventArgs, add a new property called EmotionResult of type VideoAggregateRecognitionResult.
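The event-args class then carries the emotion result alongside the existing video result. The other members shown here are assumptions about the existing class; only EmotionResult is new:

```csharp
public class VideoOperationResultEventArgs : EventArgs
{
    // ...existing properties, e.g. the status and the Video API result...

    // New: aggregate emotion-recognition result for the processed video.
    public VideoAggregateRecognitionResult EmotionResult { get; set; }
}
```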

Back in the VideoOperations class, copy the GetVideoOperationResultAsync function. Rename it to GetVideoEmotionResultAsync and change the accepted parameter type to VideoEmotionRecognitionOperation. Inside...