The Deep Learning with PyTorch Workshop

By: Hyatt Saleh

Overview of this book

Want to get to grips with one of the most popular machine learning libraries for deep learning? The Deep Learning with PyTorch Workshop will help you do just that, jumpstarting your knowledge of using PyTorch for deep learning even if you're starting from scratch. It's no surprise that deep learning's popularity has risen steeply in the past few years, thanks to intelligent applications such as self-driving vehicles, chatbots, and voice-activated assistants that are making our lives easier. This book will take you inside the world of deep learning, where you'll use PyTorch to understand the complexity of neural network architectures.

The Deep Learning with PyTorch Workshop starts with an introduction to deep learning and its applications. You'll explore the syntax of PyTorch and learn how to define a network architecture and train a model. Next, you'll learn about three main neural network architectures (convolutional, artificial, and recurrent) and even solve real-world data problems using these networks. Later chapters will show you how to create a style transfer model to develop a new image from two images, before finally taking you through how RNNs store memory to solve key data issues.

By the end of this book, you'll have mastered the essential concepts, tools, and libraries of PyTorch to develop your own deep neural networks and intelligent apps.

Summary

Deep learning is a subset of machine learning that was inspired by the biological structure of the human brain. It uses deep neural networks to solve complex data problems by learning from vast amounts of data. Although the underlying theory was developed decades ago, it has only recently come into widespread use thanks to advances in hardware and software that allow us to collect and process millions of data points.

With the popularity of deep learning solutions, many deep learning libraries have been developed. One of the most recent is PyTorch. PyTorch uses a C++ backend, which helps speed up computation, while providing a Python frontend that keeps the library easy to use.

PyTorch stores data in tensors, which are n-dimensional, matrix-like structures that can be run on GPUs to speed up processing, as in the short example below. The library also offers three main elements that are highly useful for creating complex neural network architectures with little effort.
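As a minimal sketch of this idea (the tensor values and shapes here are arbitrary examples, not taken from the book), the following snippet creates a rank-2 tensor and moves it to a GPU when one is available:

import torch

# Create a rank-2 tensor (a 2 x 3 matrix) from nested Python lists
data = torch.tensor([[1.0, 2.0, 3.0],
                     [4.0, 5.0, 6.0]])
print(data.shape)    # torch.Size([2, 3])

# Move the tensor to a GPU if one is available; otherwise keep it on the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
data = data.to(device)
print(data.device)   # cuda:0 or cpu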

The autograd library can compute the derivatives of a function, which are used as the gradients to optimize the weights and biases of a model. Moreover, the nn module helps you easily define the model's architecture as a sequence of predefined modules, as well as determine the loss function used to measure the model's performance. Finally, the optim package is used to select the optimization algorithm that updates the parameters based on the gradients computed previously.
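The toy regression example below sketches how these three elements fit together; the random data, layer sizes, learning rate, and number of epochs are illustrative assumptions rather than values from the book:

import torch
import torch.nn as nn
import torch.optim as optim

# Illustrative data: 100 samples with 10 features each and one target value
x = torch.randn(100, 10)
y = torch.randn(100, 1)

# nn: define the architecture as a sequence of predefined modules
model = nn.Sequential(
    nn.Linear(10, 5),
    nn.ReLU(),
    nn.Linear(5, 1)
)
loss_function = nn.MSELoss()                         # loss used to measure the model

# optim: choose the algorithm that updates the parameters
optimizer = optim.SGD(model.parameters(), lr=0.01)

for epoch in range(20):
    optimizer.zero_grad()            # clear gradients from the previous step
    loss = loss_function(model(x), y)
    loss.backward()                  # autograd computes the gradients
    optimizer.step()                 # optim updates the weights and biases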

In the next chapter, we will learn about the building blocks of a neural network. We will cover the three types of learning processes, as well as the three most common types of neural networks. For each type of network, we will learn how its architecture is structured and how the training process works. Finally, we will learn about the importance of data preparation and solve a regression data problem.