Artificial Intelligence with Python Cookbook

By: Ben Auffarth
Overview of this book

Artificial intelligence (AI) plays an integral role in automating problem-solving. This involves predicting and classifying data and training agents to execute tasks successfully. This book will teach you how to solve complex problems with the help of independent and insightful recipes ranging from the essentials to advanced methods that have just come out of research. Artificial Intelligence with Python Cookbook starts by showing you how to set up your Python environment and taking you through the fundamentals of data exploration. Moving ahead, you’ll be able to implement heuristic search techniques and genetic algorithms. In addition to this, you'll apply probabilistic models, constraint optimization, and reinforcement learning. As you advance through the book, you'll build deep learning models for text, images, video, and audio, and then delve into algorithmic bias, style transfer, music generation, and AI use cases in the healthcare and insurance industries. Throughout the book, you’ll learn about a variety of tools for problem-solving and gain the knowledge needed to effectively approach complex problems. By the end of this book on AI, you will have the skills you need to write AI and machine learning algorithms, test them, and deploy them for production.
Table of Contents (13 chapters)

Securing a model against attack

Adversarial attacks in ML refer to deceiving a model by feeding it deliberately crafted input. Examples of such attacks include adding perturbations to an image by changing a few pixels, thereby causing the classifier to misclassify the sample, or wearing t-shirts with certain patterns to evade person detectors (adversarial t-shirts). One particular kind of adversarial attack is a privacy attack, where a hacker can gain knowledge of the model's training dataset, potentially exposing personal or sensitive information through membership inference and model inversion attacks.
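To make the perturbation idea concrete, here is a minimal sketch of a gradient-sign attack (in the spirit of the fast gradient sign method) against a toy linear classifier in NumPy. The weights, input, and epsilon value are all hypothetical, invented for illustration; they are not taken from this recipe.

```python
import numpy as np

# Toy linear classifier: score = w . x, with label y in {-1, +1}.
# A gradient-sign attack nudges x by eps in the direction that
# increases the loss: x_adv = x + eps * sign(dL/dx).

rng = np.random.default_rng(0)
w = rng.normal(size=20)          # hypothetical trained weights
x = rng.normal(size=20)          # a clean input sample
y = 1.0 if w @ x > 0 else -1.0   # the model's own label for x

def margin(v):
    # Positive margin means the sample is classified correctly.
    return y * (w @ v)

# For a hinge-style loss, the input gradient is -y * w,
# so the attack direction is sign(-y * w).
grad_x = -y * w
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

print("clean margin:", margin(x))          # positive by construction
print("adversarial margin:", margin(x_adv))  # pushed toward misclassification
```

Note that each coordinate moves by only `eps`, yet the margin shifts by `eps` times the sum of the absolute weights, which is why small, imperceptible pixel changes can flip a high-dimensional classifier's decision.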

Privacy attacks are particularly dangerous in domains such as medicine or finance, where the training data can involve sensitive information (for example, a health status) that may be traceable to an individual's identity. In this recipe, we'll build a model that is protected against privacy attacks.
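A common defense against privacy attacks is differentially private training, where each example's influence on the model is bounded. The following is a minimal sketch of one DP-SGD-style update for a linear model, shown only to illustrate the mechanism; the recipe's actual approach and all values here (clipping norm, noise multiplier, learning rate) are illustrative assumptions, not taken from the book.

```python
import numpy as np

# One differentially private gradient step (DP-SGD style):
# 1) compute per-example gradients,
# 2) clip each to a fixed L2 norm so no single sample dominates,
# 3) add Gaussian noise before averaging.

rng = np.random.default_rng(42)
n, d = 32, 5
X = rng.normal(size=(n, d))                    # toy features
y = rng.integers(0, 2, size=n).astype(float)   # toy binary targets
w = np.zeros(d)

clip_norm = 1.0    # max L2 norm per example gradient (assumed)
noise_mult = 1.1   # noise multiplier sigma (assumed)
lr = 0.1           # learning rate (assumed)

# Per-example gradients of squared loss for the linear model
preds = X @ w
per_example_grads = (preds - y)[:, None] * X   # shape (n, d)

# Clip each example's gradient to at most clip_norm
norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)

# Add noise calibrated to the clipping bound, then average
noise = rng.normal(scale=noise_mult * clip_norm, size=d)
noisy_mean_grad = (clipped.sum(axis=0) + noise) / n

w -= lr * noisy_mean_grad
```

Because every per-example gradient is bounded and noise masks individual contributions, an attacker observing the trained weights learns much less about whether any particular record was in the training set, which is exactly what membership inference attacks exploit.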

Getting ready