Deep Learning for Beginners

By: Dr. Pablo Rivas

Overview of this book

With the amount of information on the web increasing exponentially, it has become harder than ever to navigate through everything and find reliable content that will help you get started with deep learning. This book is designed for beginners who want to build deep learning models from scratch and who already have the basic mathematical and programming knowledge required to get started. The book begins with an overview of machine learning and guides you through setting up popular Python frameworks. You will also learn how to prepare data by cleaning and preprocessing it for deep learning, and gradually go on to explore neural networks. A dedicated section gives you insights into the workings of neural networks by helping you get hands-on with training single and multiple layers of neurons. Later, you will cover popular neural network architectures such as CNNs, RNNs, AEs, VAEs, and GANs with the help of simple examples, and learn how to build these models from scratch. At the end of each chapter, you will find a questions and answers section to test what you've learned through the course of the book. By the end of this book, you'll be well versed in deep learning concepts and have the knowledge you need to apply specific algorithms with various tools to different tasks.
Table of Contents (20 chapters)

Section 1: Getting Up to Speed
Section 2: Unsupervised Deep Learning
Section 3: Supervised Deep Learning

Questions and answers

  1. Why is the MLP better than the perceptron model?

An MLP has more neurons arranged in multiple layers, which gives it the capacity to model non-linear decision boundaries and to solve much more complicated pattern recognition problems than a single perceptron can.
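As a quick illustration (a minimal sketch of my own, not code from the book), consider the classic XOR problem: it is not linearly separable, so a single perceptron cannot solve it, while an MLP with one hidden layer can. This assumes scikit-learn is available; the layer size and solver are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.neural_network import MLPClassifier

# XOR: a linearly inseparable pattern recognition problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# A single perceptron cannot separate XOR; accuracy stays near chance
perceptron = Perceptron(max_iter=1000)
perceptron.fit(X, y)
print("Perceptron accuracy:", perceptron.score(X, y))

# An MLP with one hidden layer can learn the non-linear boundary
# (lbfgs is a reliable solver choice for a tiny dataset like this)
mlp = MLPClassifier(hidden_layer_sizes=(4,), solver="lbfgs", random_state=0)
mlp.fit(X, y)
print("MLP accuracy:", mlp.score(X, y))
```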

  2. Why is backpropagation so important to know about?

Because backpropagation is what makes neural networks learn in the era of big data: it uses the chain rule to efficiently compute the gradient of the loss with respect to every weight in the network, and gradient descent then uses those gradients to update the weights.
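To make this concrete, here is a minimal sketch (my own illustration, with made-up weights and inputs) of the chain rule that backpropagation applies at scale: one sigmoid neuron with a squared-error loss, differentiated by hand and updated by gradient descent.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative input, weights, bias, and target
x = np.array([0.5, -1.2])
w = np.array([0.1, 0.4])
b = 0.0
t = 1.0

# Forward pass: z = w.x + b, a = sigmoid(z), loss L = 0.5 * (a - t)^2
z = w @ x + b
a = sigmoid(z)
loss = 0.5 * (a - t) ** 2

# Backward pass (chain rule): dL/dw = (dL/da) * (da/dz) * (dz/dw)
dL_da = a - t
da_dz = a * (1.0 - a)
grad_w = dL_da * da_dz * x
grad_b = dL_da * da_dz

# Gradient descent step; repeating this loop is "learning"
learning_rate = 0.1
w = w - learning_rate * grad_w
b = b - learning_rate * grad_b
print("loss:", loss, "grad_w:", grad_w)
```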

  3. Does the MLP always converge?

Yes and no. It does always converge to a local minimum of the loss function; however, it is not guaranteed to converge to a global minimum, because most loss functions used in practice are non-convex and non-smooth.
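A small experiment (my own sketch, not from the book) makes the local-minimum point concrete: training the same MLP on the same data from different random initializations typically ends at different final loss values, each a different local minimum.

```python
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# A simple non-linear dataset
X, y = make_moons(n_samples=200, noise=0.2, random_state=42)

# Same architecture, different random initializations: the optimizer
# usually settles into a different local minimum for each seed
for seed in range(3):
    mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                        random_state=seed)
    mlp.fit(X, y)
    print(f"seed={seed}  final training loss={mlp.loss_:.4f}")
```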

  4. Why should we try to optimize the hyperparameters of our models?

Because anyone can train a simple neural network; however, not everyone knows which things to change to make it better. The success of your model depends heavily on you trying different things and proving to yourself (and others) that your model is the best it can be. This is what will make you a better...
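One common way to try different things systematically is a grid search over hyperparameters. The sketch below is my own illustration (the dataset, grid values, and cross-validation settings are arbitrary choices): it compares hidden layer sizes and learning rates and reports the best combination found.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=300, noise=0.25, random_state=0)

# Candidate hyperparameters: network capacity and learning rate
param_grid = {
    "hidden_layer_sizes": [(4,), (16,), (16, 16)],
    "learning_rate_init": [0.001, 0.01, 0.1],
}

# 3-fold cross-validated grid search over all 9 combinations
search = GridSearchCV(MLPClassifier(max_iter=2000, random_state=0),
                      param_grid, cv=3)
search.fit(X, y)
print("best params:", search.best_params_)
print("best CV accuracy:", round(search.best_score_, 3))
```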