Machine Learning for Cybersecurity Cookbook

By: Emmanuel Tsukerman

Overview of this book

Organizations today face major cybersecurity threats, from malicious URLs to credential reuse, and having robust security systems can make all the difference. With this book, you'll learn how to use Python libraries such as TensorFlow and scikit-learn to implement the latest artificial intelligence (AI) techniques and handle challenges faced by cybersecurity researchers. You'll begin by exploring various machine learning (ML) techniques and tips for setting up a secure lab environment. Next, you'll implement key ML algorithms such as clustering, gradient boosting, random forest, and XGBoost. The book will guide you through constructing classifiers and features for malware, which you'll train and test on real samples. As you progress, you'll build self-learning, reliable systems to handle cybersecurity tasks such as malicious URL identification, spam email detection, intrusion detection, network protection, and user and process behavior tracking. Later, you'll apply generative adversarial networks (GANs) and autoencoders to advanced security tasks. Finally, you'll delve into secure and private AI to protect the privacy rights of consumers using your ML models. By the end of this book, you'll have the skills you need to tackle real-world problems faced in the cybersecurity domain using a recipe-based approach.

Differential privacy using TensorFlow Privacy

TensorFlow Privacy (https://github.com/tensorflow/privacy) is a relatively new addition to the TensorFlow family. This Python library provides implementations of TensorFlow optimizers for training machine learning models with differential privacy. A differentially private model does not change non-trivially when any single training instance is removed from its dataset. (Approximate) differential privacy is quantified by epsilon and delta, which measure how sensitive the model is to a change in a single training example. Using the Privacy library is as simple as wrapping the familiar optimizers (for example, RMSprop, Adam, and SGD) to convert them to their differentially private versions. The library also provides convenient tools for measuring the privacy guarantees, epsilon,...
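To make the idea concrete, here is a minimal sketch (not the recipe's exact code) of wrapping plain SGD in its differentially private counterpart and then estimating the resulting epsilon. It assumes the library's DPKerasSGDOptimizer wrapper and compute_dp_sgd_privacy helper; module paths, signatures, and the hyperparameter values shown (clipping norm, noise multiplier, batch size, epochs, delta) are illustrative and may differ between versions of TensorFlow Privacy.

```python
# Sketch: differentially private training of a small Keras classifier.
# Hyperparameter values below are illustrative, not tuned recommendations.
import tensorflow as tf
from tensorflow_privacy.privacy.optimizers.dp_optimizer_keras import DPKerasSGDOptimizer
from tensorflow_privacy.privacy.analysis.compute_dp_sgd_privacy import compute_dp_sgd_privacy

l2_norm_clip = 1.0        # per-example gradient clipping norm
noise_multiplier = 1.1    # std-dev of added noise, relative to the clipping norm
batch_size = 250
epochs = 15
num_train_examples = 60000  # e.g. MNIST

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])

# Wrap ordinary SGD in its differentially private version.
optimizer = DPKerasSGDOptimizer(
    l2_norm_clip=l2_norm_clip,
    noise_multiplier=noise_multiplier,
    num_microbatches=batch_size,  # must evenly divide the batch size
    learning_rate=0.15,
)

# The loss must be left unreduced so the optimizer can clip and noise
# per-example gradients before averaging them.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size)

# Estimate the (epsilon, delta) privacy guarantee for this training run.
eps, _ = compute_dp_sgd_privacy(
    n=num_train_examples, batch_size=batch_size,
    noise_multiplier=noise_multiplier, epochs=epochs, delta=1e-5)
print(f"Training satisfies (epsilon = {eps:.2f}, delta = 1e-5) differential privacy")
```

A smaller noise multiplier generally improves accuracy but weakens the guarantee (a larger epsilon); the privacy-measurement helper reports the epsilon achieved for the chosen delta, noise level, and number of training steps.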