Machine Learning Techniques for Text

By: Nikos Tsourakis

Overview of this book

With the ever-increasing demand for machine learning and programming professionals, it's prime time to invest in the field. This book will help you in this endeavor, focusing specifically on text data and human language, and steering a middle path between textbooks that dwell on complicated theoretical concepts and those that focus disproportionately on Python code. A good metaphor this work builds upon is the relationship between an experienced craftsperson and their trainee: based on the problem at hand, the former picks a tool from the toolbox, explains its utility, and puts it into action. This approach will help you identify at least one practical use for each method or technique presented. The content unfolds in ten chapters, each discussing one specific case study; for this reason, the book is solution-oriented. It's accompanied by Python code in the form of Jupyter notebooks to help you gain hands-on experience. A recurring pattern throughout the chapters is to first build intuition about the data and then implement and contrast various solutions. By the end of this book, you'll be able to understand and apply various techniques with Python for text preprocessing, text representation, dimensionality reduction, machine learning, language modeling, visualization, and evaluation.

What this book covers

Chapter 1, Introducing Machine Learning for Text, presents the main techniques of machine learning for text, the relevant terminology, and the implications of working with text corpora. You will familiarize yourself with the basic concepts behind text processing and the special challenges encountered when dealing with human language. We also discuss the notion of what a machine can learn, along with a taxonomy of the different types of learning. The chapter concludes by introducing the importance of visualization and evaluation techniques.

Chapter 2, Detecting Spam Emails, presents a typical exercise in machine learning for text: spam detection. The aim is to create classifiers that distinguish between spam and non-spam emails using an open source dataset. The chapter elaborates on why feature selection is difficult for this kind of problem and introduces basic techniques for preprocessing and representing text data. The chapter focuses on supervised learning using the Naïve Bayes and SVM algorithms, which are evaluated with standard performance metrics.
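
To give a flavor of this workflow, the following is a minimal sketch of a Naïve Bayes spam classifier with scikit-learn. It is illustrative only, not the book's actual code; the file name and column names are hypothetical placeholders.

```python
# Minimal spam-detection sketch with scikit-learn (illustrative, not the book's code).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report

# Hypothetical dataset with "text" and "label" (spam/ham) columns.
emails = pd.read_csv("emails.csv")
X_train, X_test, y_train, y_test = train_test_split(
    emails["text"], emails["label"], test_size=0.2, random_state=42)

vectorizer = TfidfVectorizer(stop_words="english")  # represent emails as TF-IDF vectors
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

clf = MultinomialNB()                 # Naïve Bayes classifier for word-weight features
clf.fit(X_train_vec, y_train)
print(classification_report(y_test, clf.predict(X_test_vec)))  # precision, recall, F1
```

Swapping MultinomialNB for sklearn.svm.LinearSVC gives an SVM variant of the same pipeline.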

Chapter 3, Classifying Topics of Newsgroup Posts, deals with the problem of assigning a topic label to a piece of text. Again, new concepts and techniques are presented using an open source dataset. The exploratory data analysis step is formalized, and you become acquainted with the notion of dimensionality reduction using PCA and LDA. The chapter focuses on unsupervised learning. Word embeddings are the new text representation introduced in the chapter, and the analysis is based on the KNN and Random Forests algorithms.
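
As a rough illustration of combining dimensionality reduction with a classifier, here is a minimal sketch using PCA and KNN on the 20 Newsgroups dataset that ships with scikit-learn; the category choice and number of components are arbitrary assumptions, not the book's settings.

```python
# Dimensionality reduction (PCA) followed by KNN classification (illustrative sketch).
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

categories = ["sci.space", "rec.autos"]            # arbitrary subset of topics
train = fetch_20newsgroups(subset="train", categories=categories)
test = fetch_20newsgroups(subset="test", categories=categories)

vec = TfidfVectorizer(max_features=2000)
X_train = vec.fit_transform(train.data).toarray()  # PCA needs dense input
X_test = vec.transform(test.data).toarray()

pca = PCA(n_components=50)                         # compress 2,000 features down to 50
X_train_red = pca.fit_transform(X_train)
X_test_red = pca.transform(X_test)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train_red, train.target)
print("Accuracy:", knn.score(X_test_red, test.target))
```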

Chapter 4, Extracting Sentiments from Product Reviews, shows how to extract the sentiment from a given corpus. You will learn how to extend the exploratory data analysis and use dimensionality reduction not only for visualization but also for feature selection. The focus now shifts to deep learning techniques, and to facilitate their explanation, the chapter discusses linear and logistic regression. Concepts related to minimizing the loss and gradient descent form part of this discussion. You will learn how to construct, train, and test a deep neural network model in Keras for sentiment analysis.
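
To make the Keras workflow concrete, here is a minimal sketch of a binary sentiment classifier; the architecture and hyperparameters are illustrative assumptions, not the book's exact model.

```python
# Minimal Keras sentiment classifier sketch (architecture is an assumption).
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, max_len = 10000, 200                 # assumed vocabulary and review length

model = tf.keras.Sequential([
    tf.keras.Input(shape=(max_len,)),            # integer-encoded, padded reviews
    layers.Embedding(vocab_size, 64),            # learn a vector for each word
    layers.GlobalAveragePooling1D(),             # average the word vectors
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),       # probability of a positive review
])
model.compile(optimizer="adam",                  # gradient descent variant
              loss="binary_crossentropy",        # the loss minimized during training
              metrics=["accuracy"])
model.summary()
# model.fit(X_train, y_train, epochs=5, validation_split=0.1)  # given padded sequences
```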

Chapter 5, Recommending Music Titles, deals with recommender systems and how they can be used to suggest music titles to customers. Systems of this kind fall into content-based and collaborative-filtering types, and both are presented throughout the chapter. Using an open source dataset, we apply t-SNE and RBM to provide meaningful recommendations for the problem under study. Tuning is also an essential part of any machine learning algorithm, and this chapter dedicates some discussion to grid search for identifying the optimal combination of hyperparameters.
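
The grid search idea can be sketched in a few lines with scikit-learn's GridSearchCV; the estimator, parameter grid, and synthetic data below are illustrative assumptions.

```python
# Minimal grid-search sketch: try every hyperparameter combination with cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)  # toy data

param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}  # candidate values
search = GridSearchCV(SVC(), param_grid, cv=5)  # 5-fold cross-validation per combination
search.fit(X, y)
print(search.best_params_, search.best_score_)  # best combination and its CV score
```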

Chapter 6, Teaching Machines to Translate, presents various techniques for machine translation. Rule-based and statistical machine translation offer an excellent way to introduce the fundamental concepts of the topic. You will become familiar with typical NLP methods such as POS tagging, parse trees, and NER. The discussion of deep learning models becomes more challenging as the focus shifts to sequence-to-sequence learning. An extended section describes in detail the well-known encoder/decoder architectures using RNNs and LSTMs. A seq2seq model is put into action to create an English-to-French translator, and the chapter ends with a typical evaluation of machine translation systems based on the BLEU score.
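
As a small taste of the evaluation step, the following sketch computes a sentence-level BLEU score with NLTK; the reference and candidate translations are made-up examples.

```python
# Sentence-level BLEU score with NLTK (made-up translation pair).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "sits", "on", "the", "mat"]]   # human reference translation(s)
candidate = ["the", "cat", "sat", "on", "the", "mat"]      # system output

smooth = SmoothingFunction().method1   # avoids zero scores for short sentences
print(sentence_bleu(reference, candidate, smoothing_function=smooth))
```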

Chapter 7, Summarizing Wikipedia Articles, performs text summarization on data scraped from the internet and Wikipedia, and for this task, you will learn how to use web scraping tools. After presenting a few basic text summarization techniques and applying them to the scraped data, the discussion moves to more advanced topics. You will learn the concept of attention, frequently encountered in deep learning models, and become acquainted with state-of-the-art models such as the Transformer. We train a Transformer network on Wikipedia articles to extract their summaries, and the ROUGE score is used to measure the summarization quality.
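
For a quick feel of Transformer-based summarization, here is a minimal sketch using the Hugging Face transformers pipeline; the t5-small checkpoint is one common choice and not necessarily the model used in the book.

```python
# Abstractive summarization with a pre-trained Transformer (illustrative sketch).
from transformers import pipeline

summarizer = pipeline("summarization", model="t5-small")   # small pre-trained model
text = (
    "The Transformer is a deep learning architecture that relies entirely on "
    "attention mechanisms, dispensing with recurrence and convolutions. It has "
    "become the basis of state-of-the-art models for many NLP tasks."
)
print(summarizer(text, max_length=40, min_length=10)[0]["summary_text"])
```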

Chapter 8, Detecting Hateful and Offensive Language, deals with identifying hate speech and offensive language on Twitter. We use the BERT language model, based on the Transformer architecture, which permits fine-tuning pre-trained models with our custom datasets. We also examine the role of the validation set in tuning the model's hyperparameters and discuss strategies for dealing with imbalanced data. The classification tasks are based on boosting algorithms and CNNs.
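
The starting point for fine-tuning can be sketched as follows with the Hugging Face transformers library; the checkpoint name and two-label setup are assumptions, and the actual training loop is omitted.

```python
# Loading a pre-trained BERT with a fresh classification head (sketch; no training loop).
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)           # offensive vs. non-offensive

batch = tokenizer(["you are awesome", "a hateful example"],
                  padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch)                         # logits before any fine-tuning
print(outputs.logits)
```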

Chapter 9, Generating Text in Chatbots, focuses on the implementation of retrieval-based and generative chatbots. A gamut of NLP techniques is presented throughout the chapter, starting from simple regular expressions and moving to more sophisticated solutions based on deep learning. We show how to create language models from scratch or fine-tune a pre-trained one. You will also become acquainted with reinforcement learning and with creating GUIs that can host the implemented chatbot. Finally, we present perplexity as an evaluation metric and discuss TensorBoard, which helps shed light on the internal mechanics of deep neural networks.
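
To illustrate the simplest end of that spectrum, here is a minimal retrieval-based chatbot built on regular expressions; the patterns and replies are hypothetical and far simpler than the chapter's implementations.

```python
# Minimal retrieval-based chatbot using regular expressions (hypothetical rules).
import re

rules = [
    (r"\b(hi|hello|hey)\b", "Hello! How can I help you?"),
    (r"\bweather\b", "I cannot check the weather, but I hope it is sunny!"),
    (r"\b(bye|goodbye)\b", "Goodbye!"),
]

def respond(message: str) -> str:
    for pattern, reply in rules:      # return the reply of the first matching pattern
        if re.search(pattern, message.lower()):
            return reply
    return "Sorry, I did not understand that."

print(respond("Hey there"))
print(respond("What's the weather like?"))
```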

Chapter 10, Clustering Speech-to-Text Transcriptions, performs clustering on transcribed speech to assign the transcriptions to different groups. We use a system that automatically transforms human speech into text and examine how to assess its performance using the word error rate (WER). The clustering methods introduced are hierarchical clustering, k-means, and DBSCAN. There is also a relevant discussion on how to choose the optimal number of clusters. The chapter concludes by applying soft clustering and LDA to identify the topics in the dataset.
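
As a final illustration, the sketch below clusters a handful of made-up transcriptions with k-means on TF-IDF vectors; the sentences and the choice of two clusters are assumptions for demonstration only.

```python
# k-means clustering of (made-up) speech transcriptions via TF-IDF (illustrative sketch).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

transcriptions = [
    "turn on the living room lights",
    "switch off the kitchen lights",
    "play some jazz music",
    "play my workout playlist",
]

X = TfidfVectorizer().fit_transform(transcriptions)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)                 # cluster assignment for each transcription
```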