Machine Learning for Emotion Analysis in Python

By: Allan Ramsay, Tariq Ahmad

Overview of this book

Artificial intelligence and machine learning are the technologies of the future, and this is the perfect time to tap into their potential and add value to your business. Machine Learning for Emotion Analysis in Python helps you employ these cutting-edge technologies in your customer feedback system and in turn grow your business exponentially. With this book, you’ll take your foundational data science skills and grow them in the exciting realm of emotion analysis. By following a practical approach, you’ll turn customer feedback into meaningful insights assisting you in making smart and data-driven business decisions. The book will help you understand how to preprocess data, build a serviceable dataset, and ensure top-notch data quality. Once you’re set up for success, you’ll explore complex ML techniques, uncovering the concepts of deep neural networks, support vector machines, conditional probabilities, and more. Finally, you’ll acquire practical knowledge using in-depth use cases showing how the experimental results can be transformed into real-life examples and how emotion mining can help track short- and long-term changes in public opinion. By the end of this book, you’ll be well-equipped to use emotion mining and analysis to drive business decisions.
Table of Contents (18 chapters)
Part 1: Essentials
Part 2: Building and Using a Dataset
Part 3: Approaches
Part 4: Case Study

Transformers for classification

Transformer models are trained as language models. A language model is an algorithm that, by analyzing patterns in human language, learns to understand and produce it.

They capture grammar, syntax, and semantics, and can discern patterns and connections among words and phrases. They can also detect named entities, such as people, locations, and organizations, and interpret the context in which those entities are mentioned. Essentially, a transformer model is a computer program that uses statistical modeling to analyze and generate language.
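
In Python, a common way to put a pretrained transformer to work for classification is through the Hugging Face transformers library. The short sketch below is not taken from the book's own code; it simply uses the library's generic sentiment-analysis pipeline as an illustration, with a made-up input sentence, and any transformer fine-tuned for emotion labels could be substituted for the pipeline's default model.

from transformers import pipeline

# Illustrative sketch (not the book's code): load a transformer that has already
# been fine-tuned for sentiment classification. With no model specified, the
# pipeline falls back to its default English sentiment model.
classifier = pipeline("sentiment-analysis")

# Hypothetical input sentence; the classifier returns a label and a confidence score.
result = classifier("I am delighted with how quickly my order arrived!")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]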

Language models are trained in a self-supervised manner on large amounts of text data, such as books, articles, and online content, to learn patterns and relationships between words and phrases. Some of the popular datasets used for pretraining transformers include Common Crawl, Wikipedia, and BooksCorpus. For example, BERT was trained using around 3.5 billion words in total with around...
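
To make the self-supervised objective concrete, the sketch below (again an illustration rather than code from the book) uses the fill-mask pipeline with bert-base-uncased: the model predicts the word hidden behind the [MASK] token, which is the kind of task it solved countless times during pretraining. The example sentence is made up.

from transformers import pipeline

# Illustrative sketch: masked language modelling, the self-supervised task BERT
# was pretrained on. The model proposes plausible fillers for the [MASK] token.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Hypothetical sentence; each prediction carries the proposed token and its score.
for prediction in fill_mask("The customer was very [MASK] with the service."):
    print(prediction["token_str"], round(prediction["score"], 3))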