The Applied TensorFlow and Keras Workshop

By: Harveen Singh Chadha, Luis Capelo

Overview of this book

Machine learning gives computers the ability to learn like humans. It is becoming increasingly transformational to businesses in many forms, and a key skill for the future digital economy. As a beginner, you'll unlock a world of opportunities by learning the techniques you need to contribute to the domains of machine learning, deep learning, and modern data analysis using the latest cutting-edge tools. The Applied TensorFlow and Keras Workshop begins by showing you how neural networks work. After you've understood the basics, you will train a few networks by altering their hyperparameters. To build on your skills, you'll learn how to select the most appropriate model to solve the problem at hand. While tackling advanced concepts, you'll discover how to assemble a deep learning system by bringing together the essential elements of any basic deep learning system: data, model, and prediction. Finally, you'll explore ways to evaluate the performance of your model and improve it using techniques such as model evaluation and hyperparameter optimization. By the end of this book, you'll have learned how to build a Bitcoin app that predicts future prices, and be able to build your own models for other projects.
Table of Contents (6 chapters)

What are Neural Networks?

A neural network is a network of neurons. In our brain, we have a network of billions of interconnected neurons. The neuron is one of the basic elements of the nervous system. Its primary function is to perform actions in response to an event and to transmit messages to other neurons. In this case, the action is simply either activating or deactivating itself. Taking inspiration from the brain's design, artificial neural networks were first proposed in the 1940s by Warren McCulloch and Walter Pitts.

Note

For more information on neural networks, refer to Explained: Neural networks. MIT News Office, April 14, 2017, available at http://news.mit.edu/2017/explained-neural-networks-deep-learning-0414.

Inspired by advancements in neuroscience, McCulloch and Pitts proposed to create a computer system that reproduced how the brain (human or otherwise) works. At its core was the idea of a computer system that worked as an interconnected network, that is, a system made of many simple components. These components interpret data and influence one another in how they interpret that data. The same core idea remains today.
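The activate-or-deactivate behavior described above can be illustrated in a few lines of Python as a McCulloch-Pitts-style unit: it fires only when the weighted sum of its inputs reaches a threshold. This is an illustrative sketch, not code from the book; the function name and values are made up for the example.

```python
def neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted input sum meets the threshold."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# An AND-like unit: both inputs must be active for the neuron to fire.
print(neuron([1, 1], [1.0, 1.0], threshold=2.0))  # fires: 1
print(neuron([1, 0], [1.0, 1.0], threshold=2.0))  # stays off: 0
```

With a threshold of 2.0 and unit weights, the neuron behaves like a logical AND; lowering the threshold to 1.0 would turn it into an OR. Networks of such simple units, connected together, are the conceptual ancestors of today's neural networks.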

Deep learning is largely considered the contemporary study of neural networks; think of it as a contemporary name given to neural networks. The main difference is that the neural networks used in deep learning are typically far greater in size, meaning they have many more nodes and layers than earlier neural networks. Deep learning algorithms and applications typically require substantial data and computing resources to achieve success, and the word deep emphasizes the large number of layers and interconnected components in these networks.
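The notion of depth can be sketched numerically: a "deep" network is simply a chain of layers, each one a weight matrix followed by a nonlinearity. The NumPy sketch below uses made-up layer sizes purely to illustrate that structure; it is not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common nonlinearity: pass positive values, zero out the rest.
    return np.maximum(0, x)

# Illustrative layer sizes: 4 inputs -> two hidden layers of 8 -> 1 output.
sizes = [4, 8, 8, 1]
weights = [rng.standard_normal((m, n)) for m, n in zip(sizes, sizes[1:])]

def forward(x):
    # "Depth" is just this loop: each extra layer adds one more
    # linear map + nonlinearity to the chain.
    for w in weights[:-1]:
        x = relu(x @ w)
    return x @ weights[-1]  # final layer: linear output

out = forward(rng.standard_normal(4))
print(out.shape)  # (1,)
```

Adding more entries to `sizes` makes the network deeper without changing any other code, which is essentially what distinguishes modern deep networks from their smaller predecessors.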

Successful Applications of Neural Networks

Neural networks have been under research in one form or another since their inception in the 1940s. It is only recently that deep learning systems have been used successfully in large-scale industry applications.

Contemporary proponents of neural networks have demonstrated great success in speech recognition, language translation, image classification, and other fields. This current prominence is backed by a significant increase in available computing power, by the emergence of Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs), which can perform many more simultaneous mathematical operations than regular CPUs, and by the much greater availability of data. Compared to CPUs, GPUs are designed to execute specialized tasks (in the "single instruction, multiple threads" model) whose execution can be parallelized.

One such success story is AlphaGo, an initiative by DeepMind to develop a series of algorithms to master the game of Go. It is considered a prime example of the power of deep learning. The team at DeepMind achieved this using reinforcement learning, in which AlphaGo becomes its own teacher.

The neural network, which initially knows nothing, plays against itself to learn which moves lead to victory. The algorithm used TPUs for training. TPUs are a type of chipset developed by Google that is specialized for use in deep learning programs. The article AlphaGo Zero: Starting from scratch, https://deepmind.com/blog/alphago-zero-learning-scratch/, details the number of GPUs and TPUs used to train different versions of the AlphaGo algorithm.

Note

In this book, we will not be using GPUs to complete our activities. GPUs are not required to work with neural networks. In simple examples, like the ones provided in this book, all computations can be performed using a laptop's CPU. However, when dealing with very large datasets, GPUs can be of great help, because training a neural network would otherwise take an impractically long time.
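If you want to confirm whether your environment can see a GPU, TensorFlow 2.x exposes a simple query for it. This snippet is a convenience sketch that assumes TensorFlow is installed; a result of zero GPUs is perfectly fine for the examples in this book.

```python
# Check for visible GPUs, assuming TensorFlow 2.x is available.
try:
    import tensorflow as tf
    gpus = tf.config.list_physical_devices("GPU")
    print(f"GPUs available: {len(gpus)}")  # 0 is fine for this book
except ImportError:
    gpus = []
    print("TensorFlow is not installed in this environment.")
```

If the list is empty, TensorFlow silently falls back to the CPU, so the same code runs either way.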

Here are a few examples where neural networks have had a significant impact:

Translating text: In 2017, Google announced a new model architecture for its translation service, called Transformer. Earlier neural translation systems were built on recurrent neural networks such as Long Short-Term Memory (LSTM), a form of neural network commonly applied to text data; the Transformer instead relies on an attention mechanism and dispenses with recurrence entirely. Google showed that its model outperformed the previous state of the art as measured by the Bilingual Evaluation Understudy (BLEU) score, an algorithm for evaluating the quality of machine-translated text, while also being more computationally efficient. For more information on this, refer to the Google Research Blog, Transformer: A Novel Neural Network Architecture for Language Understanding, August 31, 2017, available at https://research.googleblog.com/2017/08/transformer-novel-neural-network.html.

Self-driving vehicles: Uber, NVIDIA, and Waymo are believed to be using deep learning models to control different vehicle functions related to driving. Each company is researching several possibilities, including training networks with data from human drivers, simulating vehicles driving in virtual environments, and even creating small city-like environments in which vehicles can be trained on expected and unexpected events.

Note

To know more about each of these achievements, refer to the following references.

Uber: Uber's new AI team is looking for the shortest route to self-driving cars, Dave Gershgorn, Quartz, December 5, 2016, available at https://qz.com/853236/ubers-new-ai-team-is-looking-for-the-shortest-route-to-self-driving-cars/.

NVIDIA: End-to-End Deep Learning for Self-Driving Cars, August 17, 2016, available at https://devblogs.nvidia.com/deep-learning-self-driving-cars/.

Waymo: Inside Waymo's Secret World for Training Self-Driving Cars, Alexis C. Madrigal, The Atlantic, August 23, 2017, available at https://www.theatlantic.com/technology/archive/2017/08/inside-waymos-secret-testing-and-simulation-facilities/537648/.

Image recognition: Facebook and Google use deep learning models to identify entities in images and to automatically tag these entities as people from a set of contacts. In both cases, the networks are trained with previously tagged images, as well as with images of the target friend or contact. Both companies report that their models can suggest a friend or contact with a high level of accuracy in most cases.

While there are many more examples in other industries, the application of deep learning models is still in its infancy. Many successful applications are yet to come, including the ones that you create.