
Mastering NLP from Foundations to LLMs

By: Lior Gazit, Meysam Ghaffari

Overview of this book

Do you want to master Natural Language Processing (NLP) but don’t know where to begin? This book will give you the right head start. Written by leaders in machine learning and NLP, Mastering NLP from Foundations to LLMs provides an in-depth introduction to NLP techniques. Starting with the mathematical foundations of machine learning (ML), you’ll gradually progress to advanced NLP applications such as large language models (LLMs). You’ll get to grips with linear algebra, optimization, probability, and statistics, which are essential for understanding and implementing machine learning and NLP algorithms. You’ll also explore general machine learning techniques and find out how they relate to NLP. Next, you’ll learn how to preprocess text data, explore methods for cleaning and preparing text for analysis, and understand how to perform text classification. You’ll get all of this and more along with complete Python code samples. The book closes with the advanced topics of LLM theory, design, and applications, along with expert opinions on future trends in NLP. You’ll also strengthen your practical skills by working through sample real-world NLP business problems and solutions.

Splitting data

When developing a machine learning model, it’s important to split the data into training, validation, and test sets, a step known as data splitting. This is done to evaluate the model’s performance on new, unseen data and to prevent overfitting.
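As a minimal sketch of this idea (the 60/20/20 proportions, sample count, and seed are illustrative assumptions, not a prescription), a three-way split can be produced by shuffling the sample indices and slicing them into disjoint partitions:

import numpy as np

rng = np.random.default_rng(seed=42)  # fixed seed for a reproducible shuffle

n_samples = 100
indices = rng.permutation(n_samples)  # random ordering of the sample indices

# Disjoint partitions: 60% train, 20% validation, 20% test (illustrative)
train_idx = indices[:60]
val_idx = indices[60:80]
test_idx = indices[80:]

print(len(train_idx), len(val_idx), len(test_idx))  # 60 20 20

The model is then fit on the training indices only, so the validation and test partitions stay unseen during training.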

The most common method for splitting the data is the train-test split, which divides the data into two sets: the training set, used to train the model, and the test set, used to evaluate its performance. The data is randomly divided, with a typical split being 80% of the data for training and 20% for testing. This way, the model is trained on the majority of the data and then evaluated on data it never saw during training, so the measured performance reflects how the model handles new, unseen data.
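In practice, this split is rarely coded by hand. As a minimal sketch, assuming scikit-learn is installed and using placeholder arrays X and y, the 80/20 split described above can be done with train_test_split:

import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data: 100 samples with 10 features each, and binary labels
X = np.arange(1000).reshape(100, 10)
y = np.tile([0, 1], 50)

# 80% train / 20% test; random_state makes the random division reproducible
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

print(X_train.shape, X_test.shape)  # (80, 10) (20, 10)

Fixing random_state reproduces the same split on every run, which helps when comparing different models on identical data.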

Most of the time in machine learning model development, we have a set of hyperparameters for our model that we like to...