The Artificial Intelligence Infrastructure Workshop

By: Chinmay Arankalle, Gareth Dwyer, Bas Geerdink, Kunal Gera, Kevin Liao, Anand N.S.

Overview of this book

Social networking sites see an average of 350 million uploads daily, a quantity impossible for humans to scan and analyze. Only AI can do this job at the required speed, and to leverage an AI application at its full potential, you need an efficient and scalable data storage pipeline. The Artificial Intelligence Infrastructure Workshop will teach you how to build and manage one. The book begins by taking you through some real-world applications of AI. You'll explore the layers of a data lake and get to grips with security, scalability, and maintainability. With the help of hands-on exercises, you'll learn how to define the requirements for AI applications in your organization. You'll see how to select a database for your system and run common queries on databases such as MySQL, MongoDB, and Cassandra. You'll also design your own AI trading system to get a feel for pipeline-based architecture. As you learn to implement a deep Q-learning algorithm to play the CartPole game, you'll gain hands-on experience with PyTorch. Finally, you'll explore ways to run machine learning models in production as part of an AI application. By the end of the book, you'll have learned how to build and deploy your own AI software at scale, using various tools, API frameworks, and serialization methods.

Problems Solved by Machine Learning

Before we get our hands too dirty with learning how to store and process machine learning data in efficient ways, let's take a step back. What kinds of real-world problems can we solve using machine learning?

Machine learning is not a new concept, but with new algorithms and better hardware, it has seen a resurgence over the last few years. This means it has received significant attention from many diverse fields. Although the applications of AI are almost uncountable, nearly all of these stem from a far smaller number of subfields within machine learning.

With that in mind, we'll start by examining one problem that machine learning can solve in each of the main subfields of image processing, text and language processing, audio processing, and time series analysis. Some problems, such as navigation systems, need to combine many of these fields, but the fundamental concepts remain very similar. We'll start by looking at image processing: we do not fully understand all the complexities behind how humans see, so helping computers 'see' is a particularly challenging task.

Image Processing – Detecting Cancer in Mammograms with Computer Vision

For classification tasks, the goal is to look at some data and decide which class it belongs to. In simple cases, there are only two classes: positive and negative. When a doctor looks at a mammogram to assess whether a patient has cancer, the doctor is looking for specific patterns and signs. The doctor then makes a diagnosis based on the patterns.

As a slight simplification, a doctor looks at a mammogram X-ray and classifies it into one of two classes: 'cancer' (positive) or 'healthy' (negative). Using image processing and machine learning, a computer can be trained to do the same thing. The computer is fed thousands or millions of X-rays in the form of digital images. Interpreting these as a set of matrices with associated labels, the computer uses a machine-learning algorithm to learn which patterns are indicative of cancer and which are not.
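The "matrices with labels" idea can be made concrete with a deliberately tiny sketch. The perceptron below is trained on synthetic 3x3 "images" where a bright centre pixel marks the positive class; the real system described above would instead use deep neural networks trained on millions of full-resolution mammograms, so everything here, including the data, is illustrative only.

```python
# Illustrative sketch only: a tiny perceptron on synthetic 3x3 "images".
# A real diagnostic system uses deep networks on millions of mammograms.

def flatten(image):
    """Turn a matrix (list of rows) into a flat feature vector."""
    return [pixel for row in image for pixel in row]

def predict(weights, bias, features):
    """Classify as positive (1) if the weighted sum crosses zero."""
    score = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

def train(examples, epochs=20, lr=0.1):
    """Perceptron rule: nudge weights toward misclassified labels."""
    n = len(flatten(examples[0][0]))
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for image, label in examples:
            features = flatten(image)
            error = label - predict(weights, bias, features)
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Synthetic training set: a bright centre pixel marks the positive class.
examples = [
    ([[0, 0, 0], [0, 9, 0], [0, 0, 0]], 1),
    ([[1, 0, 1], [0, 8, 0], [1, 0, 1]], 1),
    ([[0, 1, 0], [1, 0, 1], [0, 1, 0]], 0),
    ([[1, 1, 1], [1, 0, 1], [1, 1, 1]], 0),
]
weights, bias = train(examples)
print(predict(weights, bias, flatten([[0, 0, 0], [0, 7, 0], [0, 0, 0]])))  # 1
print(predict(weights, bias, flatten([[1, 0, 1], [0, 0, 0], [1, 0, 1]])))  # 0
```

The learning loop is the whole point: nobody hand-codes what a "positive" image looks like; the weights are adjusted from labeled examples until the pattern is captured.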

It's an emerging field, but a very promising one. In January 2020, Google published a paper titled "International evaluation of an AI system for breast cancer screening", reporting that its AI system identified cancer in mammograms not only faster but also more reliably than human doctors.

Although images and language may seem very different, many techniques from image processing can be used to help machines better understand and learn human languages. Let's take a look at how AI has advanced the field of Natural Language Processing (NLP).

Text and Language Processing – Google Translate

Computers excel at repetitive, mechanical tasks such as solving well-defined equations, while humans are better at creative tasks such as drawing. These skills are complementary, so if computers and humans could work together more closely, the combination would be more valuable than either alone. How can we help machines and humans work better together? One appealing approach is to make computers act more like humans, fostering closer collaboration.

To this end, we have tried to make computers take on traditionally 'human' characteristics, such as the following:

  • Look like us: In 2016, David Hanson created a human-like 'social' robot named Sophia, which, apart from having a transparent skull, looks like a human female, and is partially modeled on Audrey Hepburn:
    Figure 1.1: Sophia – The first robot citizen at the AI for Good Global Summit 2018 (ITU pictures)

  • Walk like us: Shortly after Sophia was shown to the world, Agility Robotics released 'Cassie' – a robot that looks far less human than Sophia but can walk on two legs in a very similar way to humans:
    Figure 1.2: Cassie, a walking robot, photo by Oregon State University (CCSA)

  • Play a wide variety of games: Computers can now beat even the best humans at rock, paper, scissors; chess; Go; and Super Mario Bros:
    Figure 1.3: Rock, paper, scissors (OpenClipart-Vectors)

But making computers talk like us is hard, and making them understand us is still an unsolved problem and an area of active research.

That said, there is strong progress, especially in the field of machine translation, which is one form of language understanding. Machine translation algorithms can take a written text in one language and output the equivalent text in another language – for example, if you want to read a news article in French but you can only speak English, you can simply paste the article into Google Translate and it will spit out almost perfect English.

As with other machine learning systems, a vital ingredient for machine translation is a huge dataset. And hand-in-hand with a huge dataset, we need optimized data structures and storage methods to successfully create such a system. There are thousands of reasons why you might want to read a text in a language that you do not understand, from ordering at a foreign restaurant to studying old literature to conducting business with people in other countries.
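To see why the dataset and its storage matter, it helps to strip machine translation down to its crudest possible form: a lookup table from one language's words to another's. The toy dictionary below is hand-written and covers four French words; a real system learns millions of such mappings, plus sentence structure, from parallel corpora.

```python
# A toy "translation memory". Real systems learn millions of mappings
# (and full sentence structure) from parallel corpora; this hand-written
# dictionary only illustrates the lookup idea.

FR_TO_EN = {
    "bonjour": "hello",
    "le": "the",
    "monde": "world",
    "chat": "cat",
}

def translate_word_by_word(sentence, table):
    """Replace each known word; keep unknown words unchanged."""
    return " ".join(table.get(word, word) for word in sentence.lower().split())

print(translate_word_by_word("Bonjour le monde", FR_TO_EN))  # hello the world
```

The sketch also shows exactly where word-by-word lookup breaks down: it ignores word order, grammar, and idioms, which is why modern systems model whole sentences in context, and why they need such enormous training datasets in the first place.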

A good example of machine translation in action is eBay, which improved its automatic translation capabilities in 2014. Imagine being a native Spanish speaker based in Latin America buying goods online from a native English speaker based in the US. You'd want to search for products in the language you are most comfortable with, and you would want to read the details about the product, its condition, and shipping options in Spanish too. Given the large volume of eBay sales between Latin America and the USA, eBay set out to solve exactly this problem using AI. After it improved its machine translation systems, the study "Does Machine Translation Affect International Trade? Evidence from a Large Digital Platform" found a 10.9 percent increase in purchases where the seller and buyer spoke different languages.

The translation of text is complicated, but at least writing is consistent. Spoken language can be even more complicated due to the complexities of sound waves, different accents, and different voice pitches: let's take a look at audio processing.

Audio Processing – Automatically Generated Subtitles

Subtitles on videos are very useful. They help deaf people access video content and also allow video content to be shared across language barriers. The problem with subtitles is that they are difficult to create. Traditionally, to create subtitles, a person with specialized knowledge had to watch an entire video, potentially multiple times, typing out every audible word. Then each word had to be carefully aligned to the correct timestamp in the video file. How could we create subtitles for every video on YouTube? Once again, AI can come to our aid.
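The transcription itself is the hard, AI-powered part; the final alignment step described above is mechanical once word timings exist. The sketch below assumes a speech-recognition model has already produced timed cues (the timings here are made up) and formats them into the standard SubRip (.srt) subtitle format.

```python
# Sketch: formatting recognized (start, end, text) cues into the SubRip
# (.srt) subtitle format. The timings below are made up; a real pipeline
# would get them from a speech-recognition model.

def srt_timestamp(seconds):
    """Format seconds in the HH:MM:SS,mmm style that SRT expects."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(cues):
    """cues: list of (start_seconds, end_seconds, text) tuples."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, start=1):
        blocks.append(
            f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}"
        )
    return "\n\n".join(blocks)

cues = [(0.0, 2.5, "Welcome to the video."), (2.5, 5.0, "Let's get started.")]
print(to_srt(cues))
```

This division of labor is typical of AI applications: the model handles the perception problem (recognizing speech), while ordinary code handles the plumbing around it.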

Years ago, Google introduced YouTube videos with automatically generated captions, and these have steadily improved in quality. Being able to read what people are saying as they talk is useful for millions of hard-of-hearing people and billions of people listening to audio or video content in their second or third language.

Similarly, California State University has used automatic captions to make its content accessible to deaf people.

We have now seen how AI can help computers act more like humans, but AI can also help computers be more efficient at other tasks, such as mathematics and analysis, including time series analysis, which is used across many fields. Let's study it in the next section.

Time Series Analysis

Seeing how machines can help us with health, communication, and disabilities might already make AI seem almost magical, but another area where AI shines is predicting the future. A common method for forecasting is time series analysis, which involves studying historical data, looking for trends, and assuming that these will hold in the future as well.
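The "find a trend and assume it holds" idea can be sketched in a few lines: fit a straight line to a toy series with ordinary least squares and extrapolate one step ahead. The price series below is invented for illustration, and real forecasting models (ARIMA, recurrent networks, and so on) capture far more than a linear trend.

```python
# Sketch: fit a straight-line trend to a toy series with ordinary least
# squares, then extrapolate one step ahead. Real forecasting models
# capture seasonality, noise, and nonlinear structure as well.

def fit_trend(values):
    """Least-squares slope and intercept for y over t = 0, 1, 2, ..."""
    n = len(values)
    t_mean = (n - 1) / 2
    y_mean = sum(values) / n
    cov = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(values))
    var = sum((t - t_mean) ** 2 for t in range(n))
    slope = cov / var
    return slope, y_mean - slope * t_mean

prices = [100, 102, 101, 104, 106, 105, 108]  # made-up daily closes
slope, intercept = fit_trend(prices)
next_value = slope * len(prices) + intercept
print(f"trend: {slope:+.2f} per day, next forecast: {next_value:.2f}")
# trend: +1.25 per day, next forecast: 108.71
```

Even this toy version shows the core assumption, and the core risk, of time series analysis: the forecast is only as good as the premise that the historical trend continues.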

In an arguably less noble pursuit than medical advances, one of the most popular applications for time series analysis is in financial trading. If we can predict the rise and fall of stock prices, then we can be rich (as long as we don't share our knowledge too widely).

Despite decades of research and many attempts, it is not completely clear whether machines can reliably turn data directly into money by trading on global stock markets. Nonetheless, billions and potentially trillions of dollars change hands automatically every day, powered by AI predicting which assets will be valuable in the future.