The Artificial Intelligence Infrastructure Workshop

By: Chinmay Arankalle, Gareth Dwyer, Bas Geerdink, Kunal Gera, Kevin Liao, Anand N.S.

Overview of this book

Social networking sites see an average of 350 million uploads daily, a quantity impossible for humans to scan and analyze. Only AI can do this job at the required speed, and to leverage an AI application to its full potential, you need an efficient and scalable data storage pipeline. The Artificial Intelligence Infrastructure Workshop will teach you how to build and manage one. The book begins by taking you through some real-world applications of AI. You’ll explore the layers of a data lake and get to grips with security, scalability, and maintainability. With the help of hands-on exercises, you’ll learn how to define the requirements for AI applications in your organization. This AI book will show you how to select a database for your system and run common queries on databases such as MySQL, MongoDB, and Cassandra. You’ll also design your own AI trading system to get a feel for its pipeline-based architecture. As you learn to implement a deep Q-learning algorithm to play the CartPole game, you’ll gain hands-on experience with PyTorch. Finally, you’ll explore ways to run machine learning models in production as part of an AI application. By the end of the book, you’ll have learned how to build and deploy your own AI software at scale, using various tools, API frameworks, and serialization methods.

4. The Ethics of AI Data Storage

Introduction

In the previous chapter, we learned how to create our own data pipelines and use Airflow to automate our jobs. This enables us to create more data pipelines at scale, which is great. However, when we start scaling the number of data pipelines on our local machine, we will quickly run into the limitations of a single machine. A machine with 16 CPUs and 32 GB of RAM can run at most 16 data pipelines in parallel, and only if their combined memory footprint stays under 32 GB. In reality, AI engineers need to run hundreds of data pipelines every day to train models, generate predictions, monitor system health, and so on. Therefore, we need many more machines to support operations at such a scale.
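To make the single-machine ceiling concrete, here is a minimal sketch (not from the book) using Python's multiprocessing module; run_pipeline is a hypothetical stand-in for a real pipeline task. With one worker per CPU, a 16-CPU machine processes at most 16 pipelines at a time, and the remaining jobs queue up:

    import multiprocessing as mp

    def run_pipeline(pipeline_id):
        # Hypothetical stand-in for a real pipeline (extract, transform, load).
        return f"pipeline {pipeline_id} finished"

    if __name__ == "__main__":
        # One worker per CPU core: on a 16-CPU machine, at most 16 pipelines
        # execute in parallel; the other jobs wait in the pool's queue.
        with mp.Pool(processes=mp.cpu_count()) as pool:
            results = pool.map(run_pipeline, range(100))
        print(f"{len(results)} pipelines completed")

Past that point, adding more pipelines only lengthens the queue; the way out is to distribute the work across many machines, which is exactly what the cloud offers.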

Nowadays, software engineers are building their applications on the cloud. There are many benefits to building applications on the cloud. Some of them are as follows:

  • The cloud is flexible. We can scale the capacity up or down as we...