The Deep Learning Workshop

By: Mirza Rahim Baig, Thomas V. Joseph, Nipun Sadvilkar, Mohan Kumar Silaparasetty, Anthony So

Overview of this book

Are you fascinated by how deep learning powers intelligent applications such as self-driving cars, virtual assistants, facial recognition devices, and chatbots, enabling them to process data and solve complex problems? Whether you are familiar with machine learning or are new to this domain, The Deep Learning Workshop will make it easy for you to understand deep learning with the help of interesting examples and exercises throughout. The book starts by highlighting the relationship between deep learning, machine learning, and artificial intelligence and helps you get comfortable with the TensorFlow 2.0 programming structure using hands-on exercises. You’ll understand neural networks, the structure of a perceptron, and how to use TensorFlow to create and train models. The book will then let you explore the fundamentals of computer vision by performing image recognition exercises with convolutional neural networks (CNNs) using Keras. As you advance, you’ll be able to make your models more powerful by implementing text embedding and sequencing the data using popular deep learning solutions. Finally, you’ll get to grips with bidirectional recurrent neural networks (RNNs) and build generative adversarial networks (GANs) for image synthesis. By the end of this deep learning book, you’ll have learned the skills essential for building deep learning models with TensorFlow and Keras.

Training a Perceptron

To train a perceptron, we need the following components (a minimal code sketch assembling them follows the list):

  • Data representation
  • Layers
  • Neural network representation
  • Loss function
  • Optimizer
  • Training loop
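Here is a hedged sketch of how the first five components might fit together in TensorFlow 2. The toy AND-gate dataset, the variable names, and the exact wiring are illustrative assumptions rather than the book's precise code; the sixth component, the training loop, appears a little later:

    import tensorflow as tf

    # Data representation: a toy AND-gate dataset (assumed for illustration).
    X = tf.constant([[0., 0.], [0., 1.], [1., 0.], [1., 1.]], dtype=tf.float32)
    y = tf.constant([[0.], [0.], [0.], [1.]], dtype=tf.float32)

    # Layers: trainable weights and a bias for the linear (net input) layer.
    W = tf.Variable(tf.random.normal([2, 1]), name="weights")
    b = tf.Variable(tf.zeros([1]), name="bias")

    # Neural network representation: the net input function followed by
    # the sigmoid activation, mirroring the perceptron() function.
    def perceptron(x):
        z = tf.matmul(x, W) + b   # linear layer (net input)
        return tf.sigmoid(z)      # activation function

    # Loss function: binary cross-entropy pairs naturally with a sigmoid output.
    loss_fn = tf.keras.losses.BinaryCrossentropy()

    # Optimizer: plain stochastic gradient descent.
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)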

In the previous section, we covered most of these components: the data representation of the input data and the true labels in TensorFlow. For the layers, we have the linear layer and the activation function, which we saw in the form of the net input function and the sigmoid function, respectively. For the neural network representation, we wrote a function called perceptron(), which chains a linear layer and a sigmoid layer to make predictions. What we did in the previous section, passing the input data through the initial weights and biases, is called forward propagation. Actual neural network training involves two stages: forward propagation and backward propagation. We will explore them in detail in the next few steps.
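Putting the two stages together, here is a hedged sketch of the training loop, the sixth component from our list, reusing the pieces defined in the earlier sketch. The use of tf.GradientTape for the backward pass is an assumption for illustration, not necessarily the book's exact implementation:

    # Training loop: forward propagation computes predictions and the loss;
    # backward propagation computes gradients and updates the parameters.
    for epoch in range(1000):
        with tf.GradientTape() as tape:
            predictions = perceptron(X)      # forward propagation
            loss = loss_fn(y, predictions)
        grads = tape.gradient(loss, [W, b])  # backward propagation
        optimizer.apply_gradients(zip(grads, [W, b]))

Let's look at the training process at a higher level: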