Mastering Machine Learning for Penetration Testing

By Chiheb Chebbi

Overview of this book

Cybersecurity is crucial for both businesses and individuals. As systems get smarter, machine learning is increasingly making its way into computer security. With the adoption of machine learning in upcoming security products, it's important for pentesters and security researchers to understand how these systems work, and how to breach them for testing purposes. This book begins with the basics of machine learning and the algorithms used to build robust systems. Once you've gained a fair understanding of how security products leverage machine learning, you'll dive into the core concepts of breaching such systems. Through practical use cases, you'll see how to find loopholes and bypass a self-learning security system. As you make your way through the chapters, you'll focus on topics such as network intrusion detection and AV and IDS evasion. We'll also cover best practices for identifying weak points, along with extensive techniques for breaching an intelligent system. By the end of this book, you will be well versed in identifying loopholes in a self-learning security system and will be able to efficiently breach a machine learning system.

Evading intrusion detection systems with adversarial network systems

By now, you will have acquired a fair understanding of adversarial machine learning and how to attack machine learning models. It's time to dive into the technical details and learn how to bypass machine learning-based intrusion detection systems with Python. You will also learn how to defend against these attacks.

In this demonstration, you are going to learn how to attack the model with a poisoning attack. As discussed previously, we are going to inject malicious data so that we can influence the learning outcome of the model. The following diagram illustrates how the poisoning attack occurs:

[Diagram: poisoned samples are injected into the training set, skewing the model the intrusion detection system learns]
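To make the mechanics concrete, here is a minimal sketch of a poisoning attack, assuming a scikit-learn classifier stands in for the learning component of an IDS. The synthetic dataset, the logistic regression model, and the 30% poisoning budget are illustrative choices, not the book's own setup:

```python
# Minimal poisoning-attack sketch: inject mislabeled malicious samples
# into the training set and compare the model before and after.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "traffic" data: class 0 = benign, class 1 = malicious.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Baseline model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:", clean_model.score(X_test, y_test))

# Poisoning: inject perturbed copies of malicious samples relabeled as
# benign, so the retrained model learns to let similar traffic through.
malicious = X_train[y_train == 1]
n_poison = int(0.3 * len(malicious))  # illustrative 30% poisoning budget
rng = np.random.default_rng(0)
X_poison = malicious[:n_poison] + rng.normal(0, 0.1, size=(n_poison, X.shape[1]))
y_poison = np.zeros(n_poison, dtype=int)  # flipped labels

X_dirty = np.vstack([X_train, X_poison])
y_dirty = np.concatenate([y_train, y_poison])

poisoned_model = LogisticRegression(max_iter=1000).fit(X_dirty, y_dirty)
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))

# The detection rate on truly malicious test traffic typically drops.
mal_test = X_test[y_test == 1]
print("malicious detection rate:", poisoned_model.predict(mal_test).mean())
```

The key point the sketch shows is that the attacker never touches the deployed model directly; influencing what the model trains on is enough to degrade detection.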

In this attack, we are going to use the Jacobian-based Saliency Map Attack (JSMA). JSMA searches for adversarial examples by modifying only a limited number of input features at a time (in its original image setting, individual pixels), choosing at each step the feature whose perturbation moves the model's output furthest toward the desired class.
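The following is a simplified, from-scratch sketch of that greedy procedure in PyTorch; the book's own walkthrough relies on a library implementation, and the function name, parameters, and toy model below are all illustrative assumptions. It computes the Jacobian of the logits with respect to the input, builds the JSMA saliency map, and bumps one feature per iteration:

```python
import torch

def jsma(model, x, target, theta=0.1, max_iters=50, clip_min=0.0, clip_max=1.0):
    """Greedy single-feature JSMA sketch. x: 1-D feature vector (for example,
    flattened network-flow features); target: desired class index."""
    x_adv = x.clone().detach()
    for _ in range(max_iters):
        x_adv.requires_grad_(True)
        logits = model(x_adv.unsqueeze(0)).squeeze(0)
        if logits.argmax().item() == target:  # already classified as target
            break
        # Jacobian of the logits w.r.t. the input: one gradient pass per class.
        jac = torch.stack([
            torch.autograd.grad(logits[c], x_adv, retain_graph=True)[0]
            for c in range(logits.numel())
        ])  # shape: [n_classes, n_features]
        x_adv = x_adv.detach()
        grad_target = jac[target]                 # d logit_target / d x_i
        grad_others = jac.sum(dim=0) - grad_target
        # Saliency map: keep features that raise the target logit while
        # lowering the competing logits; zero out everything else.
        saliency = grad_target * grad_others.abs()
        saliency[(grad_target < 0) | (grad_others > 0)] = 0.0
        saliency[x_adv >= clip_max] = 0.0         # skip saturated features
        i = saliency.argmax()
        if saliency[i] <= 0:                      # no useful feature left
            break
        x_adv[i] = torch.clamp(x_adv[i] + theta, clip_min, clip_max)
    return x_adv.detach()

# Illustrative usage with a toy two-class model (shapes are assumptions):
# model = torch.nn.Sequential(torch.nn.Linear(20, 16), torch.nn.ReLU(),
#                             torch.nn.Linear(16, 2))
# x_adv = jsma(model, torch.rand(20), target=0)  # push a flow toward "benign"
```

Because JSMA perturbs one feature at a time, the resulting adversarial sample stays close to the original input, which is exactly what makes such evasions hard for a detector to spot.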

Let's...