Adversarial AI Attacks, Mitigations, and Defense Strategies

By: John Sotiropoulos
4.9 (13)
Overview of this book

Adversarial attacks trick AI systems with malicious data, creating new security risks by exploiting how AI learns. This challenges cybersecurity, forcing us to defend against a whole new kind of threat. This book demystifies adversarial attacks and equips you with the skills to secure AI technologies. Learn how to defend AI and LLM systems against manipulation and intrusion through adversarial attacks such as poisoning, trojan horses, and model extraction, leveraging DevSecOps, MLOps, and other methods to secure systems.

This is a comprehensive guide to AI security, combining structured frameworks with practical examples to help you identify and counter adversarial attacks. Part 1 introduces the foundations of AI and adversarial attacks. Parts 2, 3, and 4 cover key attack types, showing how each is performed and how to defend against them. Part 5 presents secure-by-design AI strategies, including threat modeling, MLSecOps, and guidance aligned with OWASP and NIST. The book concludes with a blueprint for maturing enterprise AI security based on NIST pillars, addressing ethics and safety under Trustworthy AI. By the end of this book, you'll be able to develop, deploy, and secure AI systems effectively against the threat of adversarial attacks.
Table of Contents (28 chapters)
  • Part 1: Introduction to Adversarial AI
  • Part 2: Model Development Attacks
  • Part 3: Attacks on Deployed AI
  • Part 4: Generative AI and Adversarial Attacks
  • Part 5: Secure-by-Design AI and MLSecOps

Bypassing security with adversarial AI

We have spent a lot of time securing our adversarial AI playground and our sample AI service. In this section, we will explain why the traditional security controls we have applied are very effective at protecting the environment and artifacts of AI, but not the logic embedded in its brain – the ML model.

Our first adversarial AI attack

In this section, we will stage our first adversarial AI attack, taking advantage of AI itself to subvert how the model works. This demonstrates why adversarial attacks must be covered when we secure a system or conduct a security risk assessment of it.

Imagine that our ImRecS solution detects airplanes and alerts the Border Control Forces to attempted intrusions. The web application would have to become real-time, but for our security conversation, that's not all that important. Our service is hardened, and criminals cannot break in and tamper with our model to escape detection regarding illegal...
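To give a flavor of the kind of attack this excerpt introduces – subverting the model's logic without ever breaking into the service – here is a minimal sketch of an evasion attack using the fast gradient sign method (FGSM). The book's actual ImRecS attack targets an image classifier; this sketch substitutes a toy logistic-regression "model" with made-up weights so it stays self-contained, but the principle is the same: nudge the input in the direction that increases the model's loss until the prediction flips.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in "model": a logistic-regression classifier.
# (Weights are invented for illustration; the real target would be
# a trained image classifier such as ImRecS.)
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    """Return (probability of class 1, predicted label)."""
    p = sigmoid(w @ x + b)
    return p, int(p >= 0.5)

# A benign input the model confidently classifies as class 1.
x = np.array([1.0, 0.5])
y = 1  # true label

# FGSM: perturb the input in the direction of the loss gradient.
# For logistic regression with cross-entropy loss, dL/dx = (p - y) * w.
p, _ = predict(x)
grad_x = (p - y) * w
epsilon = 1.0  # perturbation budget (exaggerated for this toy model)
x_adv = x + epsilon * np.sign(grad_x)

p_clean, label_clean = predict(x)
p_adv, label_adv = predict(x_adv)
print(f"clean:       p={p_clean:.3f}, label={label_clean}")
print(f"adversarial: p={p_adv:.3f}, label={label_adv}")
```

Note that nothing in the hardened service is touched: the attacker only crafts an input, and every per-feature change stays within the epsilon budget – against an image model, a small enough epsilon leaves the perturbation invisible to a human while still flipping the classification.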
