Mastering NLP from Foundations to LLMs

By : Lior Gazit, Meysam Ghaffari

Overview of this book

Do you want to master Natural Language Processing (NLP) but don’t know where to begin? This book will give you the right head start. Written by leaders in machine learning and NLP, Mastering NLP from Foundations to LLMs provides an in-depth introduction to NLP techniques. Starting with the mathematical foundations of machine learning (ML), you’ll gradually progress to advanced NLP topics such as large language models (LLMs) and their applications. You’ll get to grips with linear algebra, optimization, probability, and statistics, which are essential for understanding and implementing machine learning and NLP algorithms. You’ll also explore general machine learning techniques and find out how they relate to NLP. Next, you’ll learn how to preprocess text data, explore methods for cleaning and preparing text for analysis, and understand how to perform text classification. You’ll get all of this and more along with complete Python code samples. The final chapters discuss advanced topics in LLM theory, design, and applications, along with future trends in NLP, featuring expert opinions. You’ll also get to strengthen your practical skills by working on sample real-world NLP business problems and solutions.

Model underfitting and overfitting

In machine learning, the ultimate goal is to build a model that generalizes well to unseen data. However, a model can fail to achieve this goal due to either underfitting or overfitting.

Underfitting occurs when a model is too simple to capture the underlying patterns in the data. In other words, the model cannot properly learn the relationship between the features and the target variable. This results in poor performance on both the training and testing data. For example, in Figure 3.4, we can see that the model is underfitted and does not represent the data well. This is not what we want in a machine learning model; ideally, the model fits the data closely, as shown in Figure 3.5:

Figure 3.4 – The machine learning model underfitting on the training data

Underfitting happens when the model is not trained well or its complexity is not sufficient to capture the underlying pattern...
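
To make this concrete, the following is a minimal sketch of underfitting, assuming scikit-learn and NumPy are available. The data is synthetic (a noisy sine curve, chosen here purely for illustration, not taken from the book): a plain linear model is too simple for the sinusoidal pattern and underfits, while adding polynomial features gives the model enough capacity to follow the data.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

# Synthetic, nonlinear data: a noisy sine curve (illustrative assumption)
rng = np.random.default_rng(42)
X = np.sort(rng.uniform(0, 6, 200)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.2, size=200)

# Underfitted model: a straight line cannot capture the sinusoidal pattern
underfit = LinearRegression().fit(X, y)

# More flexible model: polynomial features let the linear model bend with the data
better = make_pipeline(PolynomialFeatures(degree=5), LinearRegression()).fit(X, y)

print("Linear model (underfit)  R^2:", round(r2_score(y, underfit.predict(X)), 3))
print("Degree-5 polynomial      R^2:", round(r2_score(y, better.predict(X)), 3))

Running this sketch, the straight line scores a much lower R^2 on the training data itself, which is the telltale sign of underfitting: the model is too simple to fit even the data it was trained on.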