AI-Native LLM Security

By: Vaibhav Malik, Ken Huang, Ads Dawson

Overview of this book

Adversarial AI attacks present a unique set of security challenges, exploiting the very foundation of how AI learns. This book explores these threats in depth, equipping cybersecurity professionals with the tools needed to secure generative AI and LLM applications. Rather than skimming the surface of emerging risks, it focuses on practical strategies, industry standards, and recent research to build a robust defense framework.

Structured around actionable insights, the chapters introduce a secure-by-design methodology, integrating threat modeling and MLSecOps practices to fortify AI systems. You’ll discover how to leverage established taxonomies from OWASP, NIST, and MITRE to identify and mitigate vulnerabilities. Through real-world examples, the book highlights best practices for incorporating security controls into AI development life cycles, covering key areas such as CI/CD, MLOps, and open-access LLMs.

Built on the expertise of its co-authors, pioneers in the OWASP Top 10 for LLM applications, this guide also addresses the ethical implications of AI security, contributing to the broader conversation on trustworthy AI. By the end of this book, you’ll be able to develop, deploy, and secure AI technologies with confidence and clarity.
Table of Contents (23 chapters)

Part 1: Foundations of LLM Security
Part 2: The OWASP Top 10 for LLM Applications
Part 3: Building Secure LLM Systems
Appendices: Latest OWASP Top 10 for LLM and OWASP AIVSS Agentic AI Core Risks

Chapter 3

The Dual Nature of LLM Risks: Inherent Vulnerabilities and Malicious Actors

As we saw in the last two chapters, large language models (LLMs) are revolutionizing how we interact with AI, but they also bring significant security challenges. This chapter delves into the dual nature of risks associated with LLMs: inherent vulnerabilities stemming from their design and training, and malicious threats from bad actors seeking to exploit these powerful tools. These risks differ fundamentally from traditional software vulnerabilities. While conventional security issues often stem from coding errors or system misconfigurations that can be patched, LLM vulnerabilities are frequently embedded in the model’s architecture and training process itself. For instance, whereas an SQL injection vulnerability can be fixed by updating code, addressing bias in an LLM may require retraining the entire model with different data, which is a far more complex and resource-intensive undertaking.
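
To make that contrast concrete, here is a minimal sketch in Python (the schema and function names are hypothetical, not taken from the book) of how local the fix for a SQL injection flaw can be: switching to a parameterized query is a one-statement change, whereas no comparably local edit can remove a bias learned during training.

import sqlite3

# Hypothetical in-memory database, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])

def find_user_vulnerable(username: str):
    # Vulnerable: attacker input is spliced into the SQL string, so a
    # value such as "x' OR '1'='1" rewrites the query's logic.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_fixed(username: str):
    # The patch: a parameterized query keeps code and data separate.
    # This is a local, one-line change; a bias embedded in an LLM's
    # weights has no equivalent point fix and may require retraining.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

print(find_user_vulnerable("x' OR '1'='1"))  # injected: returns every row
print(find_user_fixed("x' OR '1'='1"))       # safe: input treated as data, []

The point of the sketch is scope: the vulnerable and fixed functions differ by a single statement, while retraining an entire model on different data touches the whole artifact.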

...