Machine Learning Security Principles

By: John Paul Mueller

Overview of this book

Businesses are leveraging the power of AI to make undertakings that used to be complicated and pricey much easier, faster, and cheaper. The first part of this book explores these processes in more depth, helping you understand the role security plays in machine learning. As you progress to the second part, you’ll learn more about the environments where ML is commonly used and dive into the security threats that plague them using code, graphics, and real-world references. The next part of the book guides you through the process of detecting hacker behaviors in the modern computing environment, where fraud takes many forms in ML, from gaining sales through fake reviews to destroying an adversary’s reputation. Once you’ve understood hacker goals and detection techniques, you’ll learn about the ramifications of deepfakes, followed by mitigation strategies. This book also takes you through best practices for embracing ethical data sourcing, which reduces the security risk associated with data. You’ll see how the simple act of removing personally identifiable information (PII) from a dataset lowers the risk of social engineering attacks. By the end of this machine learning book, you’ll have an increased awareness of the various attacks and the techniques to secure your ML systems effectively.
Table of Contents (19 chapters)

Part 1 – Securing a Machine Learning System
Part 2 – Creating a Secure System Using ML
Part 3 – Protecting against ML-Driven Attacks
Part 4 – Performing ML Tasks in an Ethical Manner

Defining a deepfake

A deepfake (sometimes deep fake) is an application of deep learning to images, sound, video, and other forms of generally non-textual information to make one thing look or sound like something else. The idea is to deceive someone into thinking a thing is something that it’s not.

This chapter doesn’t mean to imply that deepfakes are always used to deceive others in a harmful way. For example, it’s perfectly acceptable to take a family picture, run it through an autoencoder or a generative adversarial network (GAN), and make it look like a Renoir painting. In fact, some deepfakes are amusing or even educational. The point at which a deepfake becomes a problem is when it’s used to bypass security or accomplish tasks that would otherwise be impossible, such as a deepfake video in a court of law that convinces a jury to convict someone who is innocent. Throughout the following sections, you will learn more about deepfakes from an ML security perspective.
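To make the autoencoder idea concrete, here is a minimal sketch (not code from this book) of the shared-encoder, dual-decoder architecture behind classic face-swap deepfakes. It is written in PyTorch purely for illustration; the layer sizes, the 64 x 64 face crops, and the Encoder/Decoder class names are assumptions made for this example, not part of the original text.

# Minimal sketch of a face-swap autoencoder: one shared encoder, one decoder
# per identity. This is illustrative only, not the book's implementation.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 3 x 64 x 64 face crop into a small latent code."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Rebuilds a face from the latent code; one decoder is trained per identity."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),                                        # pixel values in [0, 1]
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

# Train decoder_a on faces of person A and decoder_b on faces of person B,
# both through the shared encoder. At inference time, encode a frame of A
# and decode it with B's decoder to produce the swap.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

faces_a = torch.randn(8, 3, 64, 64)       # stand-in batch of person-A face crops
swapped = decoder_b(encoder(faces_a))     # person A rendered with person B's decoder
print(swapped.shape)                      # torch.Size([8, 3, 64, 64])

Real face-swap tools add considerably more machinery (face detection, alignment, perceptual or adversarial losses), but the encode-with-one-identity, decode-with-another trick shown here is the core of the technique.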

Identifying deepfakes

...