Machine Learning for Emotion Analysis in Python

By: Allan Ramsay, Tariq Ahmad

Overview of this book

Artificial intelligence and machine learning are the technologies of the future, and this is the perfect time to tap into their potential and add value to your business. Machine Learning for Emotion Analysis in Python helps you employ these cutting-edge technologies in your customer feedback system and, in turn, grow your business. With this book, you’ll take your foundational data science skills and develop them in the exciting realm of emotion analysis. By following a practical approach, you’ll turn customer feedback into meaningful insights that assist you in making smart, data-driven business decisions. The book will help you understand how to preprocess data, build a serviceable dataset, and ensure top-notch data quality. Once you’re set up for success, you’ll explore complex ML techniques, uncovering the concepts of deep neural networks, support vector machines, conditional probabilities, and more. Finally, you’ll acquire practical knowledge through in-depth use cases that show how experimental results can be transformed into real-life examples and how emotion mining can help track short- and long-term changes in public opinion. By the end of this book, you’ll be well-equipped to use emotion mining and analysis to drive business decisions.
Table of Contents (18 chapters)

Part 1: Essentials
Part 2: Building and Using a Dataset
Part 3: Approaches
Part 4: Case Study

Multiclassifiers

In the preceding chapters, we saw that multi-label datasets, where a tweet may have zero, one, or more labels, are considerably harder to deal with than simple multi-class datasets, where each tweet has exactly one label drawn from a set of several candidates. In this chapter, we will investigate ways of dealing with these cases. In particular, we will look at using neutral as a label to handle tweets that are allowed to have zero labels; at varying the decision threshold so that a standard classifier can return a variable number of labels; and at training multiple classifiers, one per label, each making its own decision about the label it was trained for. The conclusion, as ever, will be that there is no single “silver bullet” that provides the best solution in every case, but in general, using multiple classifiers tends to work better than the other approaches.
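As a rough illustration of the last of these ideas, the sketch below (not taken from the book) trains one binary classifier per emotion label using scikit-learn's OneVsRestClassifier and applies a probability threshold, so a tweet can end up with zero, one, or several labels. The example texts, the label set, the threshold value, and the fall-back to neutral are all illustrative assumptions rather than the book's own code.

```python
# Minimal sketch: one binary classifier per emotion label, plus a threshold,
# so each text can receive zero, one, or more labels. Data here is purely toy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Illustrative texts and labels: a tweet may carry zero, one, or more labels
texts = ["I love this!", "This is terrible and it scares me", "Nothing much to say."]
labels = [["joy"], ["anger", "fear"], []]

binarizer = MultiLabelBinarizer()
Y = binarizer.fit_transform(labels)          # one binary column per emotion label

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)

# One logistic-regression classifier per label, each deciding independently
model = OneVsRestClassifier(LogisticRegression(max_iter=1000))
model.fit(X, Y)

def predict_labels(new_texts, threshold=0.5):
    """Return, for each text, every label whose probability clears the threshold."""
    probabilities = model.predict_proba(vectorizer.transform(new_texts))
    return [
        [label for label, p in zip(binarizer.classes_, row) if p >= threshold]
        or ["neutral"]                       # nothing fired, so fall back to neutral
        for row in probabilities
    ]

print(predict_labels(["I love it, though it scares me a little"], threshold=0.4))
```

Because each per-label classifier makes its own decision, raising or lowering the threshold directly controls how many labels a text receives, which is the same knob discussed above under varying thresholds.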

In this chapter, we’ll cover the following topics...