Mastering NLP From Foundations to Agents - Second Edition
In this chapter, we delve deep into the intricate world of LLMs and the underpinning mathematical concepts that fuel their performance. The advent of these models has revolutionized the field of natural language processing (NLP), offering unparalleled proficiency in understanding, generating, and interacting with human language.
As we explore the operations of LLMs, we will introduce the key metric of perplexity, a measurement of uncertainty that is pivotal in evaluating the performance of these models. A lower perplexity indicates greater confidence on the part of a language model (LM) in predicting the next word in a sequence, thus showcasing its proficiency. That said, while perplexity remains an important indicator of a model's confidence during training, modern LLMs are evaluated using a range of newer benchmarks that measure not only language-modeling ability but also reasoning, factuality, safety, and robustness.
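To make the metric concrete, here is a minimal sketch of how perplexity can be computed from the probabilities a model assigns to each observed token: it is the exponential of the average negative log-probability. The function name and the example probabilities are illustrative, not taken from any particular library.

```python
import math

def perplexity(token_probs):
    """Compute perplexity from per-token probabilities.

    Perplexity = exp(mean negative log-probability). A model that
    assigns probability 1.0 to every token has perplexity 1.0 (the
    minimum); lower probabilities drive perplexity upward.
    """
    avg_neg_log_prob = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log_prob)

# A confident model assigns high probability to each next token,
# yielding low perplexity; an uncertain model yields high perplexity.
confident = perplexity([0.9, 0.8, 0.95])
uncertain = perplexity([0.1, 0.2, 0.05])
```

In practice, libraries compute this directly from the model's log-probabilities (logits) over a held-out corpus rather than from raw probabilities, but the relationship is the same: perplexity is the exponentiated average cross-entropy per token.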