LLM Design Patterns
Adversarial attacks on LLMs are designed to manipulate the model’s output by making small, often imperceptible changes to the input. These attacks can expose vulnerabilities in LLMs and potentially lead to security risks or unintended behaviors in real-world applications.
In this chapter, we’ll explore techniques for creating and defending against adversarial examples in LLMs. Adversarial examples are carefully crafted inputs intended to mislead the model into producing incorrect or unexpected outputs. You’ll learn about textual adversarial attacks, methods for generating these examples, and techniques for making your models more robust. We’ll also cover evaluation methods and discuss the real-world implications of adversarial attacks on LLMs.
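To make the idea of a small, often imperceptible input change concrete, here is a minimal sketch of a character-level textual perturbation. This is an illustrative example of the general technique, not a method taken from this chapter: the function name and parameters (`perturb_text`, `rate`, `seed`) are assumptions chosen for the sketch.

```python
import random

def perturb_text(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Create an adversarial variant of `text` by swapping adjacent
    interior characters of some words (a classic character-level attack).
    The first and last character of each word are preserved, so the
    result stays easy for a human to read but may shift model behavior.
    """
    rng = random.Random(seed)
    perturbed = []
    for word in text.split():
        if len(word) > 3 and rng.random() < rate:
            # pick an interior position and swap it with its neighbor
            i = rng.randrange(1, len(word) - 2)
            word = word[:i] + word[i + 1] + word[i] + word[i + 2:]
        perturbed.append(word)
    return " ".join(perturbed)
```

In practice, an attacker would query the target model after each perturbation and keep only the variants that flip the model’s prediction while staying close to the original text.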
In this chapter, we’ll be covering the following topics: