Mastering NLP from Foundations to LLMs

By: Lior Gazit, Meysam Ghaffari

Overview of this book

Do you want to master Natural Language Processing (NLP) but don’t know where to begin? This book will give you the right head start. Written by leaders in machine learning and NLP, Mastering NLP from Foundations to LLMs provides an in-depth introduction to NLP techniques. Starting with the mathematical foundations of machine learning (ML), you’ll gradually progress to advanced NLP applications such as large language models (LLMs) and other AI applications. You’ll get to grips with linear algebra, optimization, probability, and statistics, which are essential for understanding and implementing ML and NLP algorithms. You’ll also explore general machine learning techniques and find out how they relate to NLP. Next, you’ll learn how to preprocess text data, explore methods for cleaning and preparing text for analysis, and understand how to perform text classification. You’ll get all of this and more, along with complete Python code samples. The book closes with advanced topics in LLM theory, design, and applications, together with expert opinions on future trends in NLP. You’ll also strengthen your practical skills by working through sample real-world NLP business problems and solutions.

Text classification using Word2Vec

One way to perform text classification is to convert words into embedding vectors and then use those vectors as features for a classifier. Word2Vec is a well-known method for producing such vectors.
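As a minimal sketch of this pipeline (the toy corpus, labels, and hyperparameters here are illustrative assumptions, not examples from the book), you can train Word2Vec with gensim, average each document’s word vectors into one fixed-length feature vector, and fit a scikit-learn classifier on those features:

import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# Tiny hypothetical labeled corpus: 1 = positive, 0 = negative.
texts = [
    "the movie was great and fun",
    "a wonderful touching film",
    "the plot was boring and slow",
    "a dull and tedious movie",
]
labels = [1, 1, 0, 0]
tokenized = [t.split() for t in texts]

# Train Word2Vec (gensim 4.x API) to obtain dense word vectors.
w2v = Word2Vec(tokenized, vector_size=50, window=3, min_count=1, seed=42)

def doc_vector(tokens, model):
    # Average the embeddings of in-vocabulary tokens into one document vector.
    vecs = [model.wv[w] for w in tokens if w in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

X = np.vstack([doc_vector(t, w2v) for t in tokenized])

# Any downstream classifier can consume the fixed-length document vectors.
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))

Averaging word vectors is the simplest way to pool a variable-length document into a fixed-length input; weighting words by TF-IDF before averaging is a common refinement.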

Word2Vec

Word2Vec is a group of neural network-based models used to create word embeddings, which are dense vector representations of words in a continuous vector space. These embeddings capture the semantic meaning of words and the relationships between them, based on the contexts in which they appear in text. As mentioned previously, Word2Vec has two main architectures, CBOW and skip-gram, both of which learn word embeddings by predicting words from their surrounding context:

  • CBOW: The CBOW architecture aims to predict the target word given its surrounding context words. It takes the average of the context word embeddings as input and uses that averaged vector to predict the target word.
  • Skip-gram: The skip-gram architecture does the reverse, predicting the surrounding context words given the target word. In practice, skip-gram tends to represent rare words better, while CBOW trains faster (see the sketch after this list).
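In gensim, the choice between the two architectures is a single flag: the sg parameter of Word2Vec selects the training objective (sg=0 for CBOW, the default, and sg=1 for skip-gram). The toy corpus and hyperparameters below are illustrative assumptions:

from gensim.models import Word2Vec

sentences = [
    ["king", "rules", "the", "kingdom"],
    ["queen", "rules", "the", "kingdom"],
    ["the", "cat", "sat", "on", "the", "mat"],
]

# Same data, two objectives: CBOW predicts a word from its context,
# skip-gram predicts the context words from a given word.
cbow = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)
skipgram = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

# Both models yield dense vectors; nearest neighbors reflect shared contexts.
print(cbow.wv.most_similar("king", topn=2))
print(skipgram.wv.most_similar("king", topn=2))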