Machine Learning Techniques for Text

By: Nikos Tsourakis

Overview of this book

With the ever-increasing demand for machine learning and programming professionals, it's prime time to invest in the field. This book will help you in this endeavor, focusing specifically on text data and human language by steering a middle path between textbooks that present complicated theoretical concepts and those that focus disproportionately on Python code. A good metaphor this work builds upon is the relationship between an experienced craftsperson and their trainee. Based on the current problem, the former picks a tool from the toolbox, explains its utility, and puts it into action. This approach will help you identify at least one practical use for each method or technique presented. The content unfolds in ten chapters, each discussing one specific case study. For this reason, the book is solution-oriented. It's accompanied by Python code in the form of Jupyter notebooks to help you obtain hands-on experience. A recurring pattern in the chapters is to first help you gain some intuition about the data and then implement and contrast various solutions. By the end of this book, you'll be able to understand and apply various techniques with Python for text preprocessing, text representation, dimensionality reduction, machine learning, language modeling, visualization, and evaluation.

Taxonomy of machine learning techniques

The discussion in the previous section should have helped you understand the rationale behind the ML paradigm. However, it covered only one type of learning. ML algorithms can be trained in different ways, each with its own advantages and disadvantages. Broadly, they can be categorized into four main types: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. Let’s examine each one in the following sections.

Supervised learning

In supervised learning, also called inductive learning, we work with labeled data that teaches the model to yield the desired output. For example, a dataset with emails labeled as either spam or non-spam can be used to train a model for spam filtering. It’s called supervised because by knowing the correct label for each sample, we can supervise the learning process and correct the model during training, just like a teacher in the classroom. This type...
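To make the spam-filtering example concrete, here is a minimal sketch of supervised learning, assuming scikit-learn is available; the handful of labeled emails below is made up purely for illustration. The labels provide the supervision: the classifier is fit so that its predictions match them.

# A minimal, illustrative sketch of supervised learning for spam filtering.
# Assumes scikit-learn; the tiny labeled dataset below is hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Labeled training samples: 1 = spam, 0 = non-spam.
emails = [
    "Win a free prize now, click here",
    "Limited offer: claim your reward today",
    "Meeting rescheduled to Monday at 10am",
    "Please review the attached project report",
]
labels = [1, 1, 0, 0]

# Turn the raw text into bag-of-words feature vectors.
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(emails)

# Fit a Naive Bayes classifier on the labeled examples (the supervision step).
model = MultinomialNB()
model.fit(X_train, labels)

# Classify an unseen email with the trained model.
X_new = vectorizer.transform(["Claim your free reward now"])
print(model.predict(X_new))  # should print [1], that is, spam

With real data, the same pattern scales to thousands of labeled messages, and the Naive Bayes model could be swapped for any other supervised classifier.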