The Artificial Intelligence Infrastructure Workshop

By: Chinmay Arankalle, Gareth Dwyer, Bas Geerdink, Kunal Gera, Kevin Liao, Anand N.S.

Overview of this book

Social networking sites see an average of 350 million uploads daily - a quantity impossible for humans to scan and analyze. Only AI can do this job at the required speed, and to leverage an AI application to its full potential, you need an efficient and scalable data storage pipeline. The Artificial Intelligence Infrastructure Workshop will teach you how to build and manage one.

The Artificial Intelligence Infrastructure Workshop begins by taking you through some real-world applications of AI. You'll explore the layers of a data lake and get to grips with security, scalability, and maintainability. With the help of hands-on exercises, you'll learn how to define the requirements for AI applications in your organization. This AI book will show you how to select a database for your system and run common queries on databases such as MySQL, MongoDB, and Cassandra. You'll also design your own AI trading system to get a feel for pipeline-based architecture. As you learn to implement a deep Q-learning algorithm to play the CartPole game, you'll gain hands-on experience with PyTorch. Finally, you'll explore ways to run machine learning models in production as part of an AI application. By the end of the book, you'll have learned how to build and deploy your own AI software at scale, using various tools, API frameworks, and serialization methods.

Creating Your Data Pipeline

If you are interested in data science, you should be somewhat familiar with typical data pipelines. A data pipeline starts with raw data. In this chapter, we will be using data on trending videos in the US for the period from 2017 to 2019. Let's say this raw data is in a flat-file CSV format, with several columns for each entry. However, not all of the columns are relevant to our data pipeline, so we need to select only the columns that are required for our purposes. This step of cleaning the data from the source files is called data processing. After data processing, we need to store the clean data in our databases. We select the most appropriate data model based on our needs, according to what we learned about data modeling in Chapter 5, Data Stores: SQL and NoSQL Databases. Then, we will perform some queries and deploy this data into production. When the data is up and running, we'll continue monitoring the...
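To make these stages concrete, here is a minimal sketch of such a pipeline in Python using pandas and SQLite. The file name USvideos.csv, the columns selected, and the trending_videos table name are illustrative assumptions rather than details taken from this chapter; in practice you would point the code at your own raw file and at the database you chose based on Chapter 5.

```python
import sqlite3

import pandas as pd

# Assumed raw file and column names for the US trending-videos dataset;
# adjust these to match your actual source data.
RAW_FILE = "USvideos.csv"
COLUMNS_OF_INTEREST = ["video_id", "trending_date", "title", "views", "likes"]

# 1. Start with the raw data: load the flat-file CSV.
raw = pd.read_csv(RAW_FILE)

# 2. Data processing: keep only the relevant columns and drop incomplete rows.
clean = raw[COLUMNS_OF_INTEREST].dropna()

# 3. Store the clean data in a database. SQLite keeps this sketch
#    self-contained; a production pipeline might target MySQL, MongoDB,
#    or Cassandra instead, as discussed in Chapter 5.
with sqlite3.connect("trending_videos.db") as conn:
    clean.to_sql("trending_videos", conn, if_exists="replace", index=False)

    # 4. Query the stored data, for example the five most-viewed videos.
    top_videos = pd.read_sql_query(
        "SELECT title, MAX(views) AS max_views "
        "FROM trending_videos "
        "GROUP BY title "
        "ORDER BY max_views DESC "
        "LIMIT 5",
        conn,
    )
    print(top_videos)
```

In a deployed pipeline, the later stages would run this kind of query on a schedule against the production database rather than a local file, which is also where monitoring comes in.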