The Deep Learning Architect's Handbook

By Ee Kin Chin
Overview of this book

Deep learning enables previously unattainable feats in automation, but extracting real-world business value from it is a daunting task. This book will teach you how to build complex deep learning models and gain intuition for structuring your data to accomplish your deep learning objectives. This deep learning book explores every aspect of the deep learning life cycle, from planning and data preparation to model deployment and governance, using real-world scenarios that will take you through creating, deploying, and managing advanced solutions. You’ll also learn how to work with image, audio, text, and video data using deep learning architectures, as well as optimize and evaluate your deep learning models objectively to address issues such as bias, fairness, adversarial attacks, and model transparency. As you progress, you’ll harness the power of AI platforms to streamline the deep learning life cycle and leverage Python libraries and frameworks such as PyTorch, ONNX, Catalyst, MLFlow, Captum, Nvidia Triton, Prometheus, and Grafana to execute efficient deep learning architectures, optimize model performance, and streamline the deployment processes. You’ll also discover the transformative potential of large language models (LLMs) for a wide array of applications. By the end of this book, you'll have mastered deep learning techniques to unlock its full potential for your endeavors.
Table of Contents (25 chapters)

Part 1 – Foundational Methods
Part 2 – Multimodal Model Insights
Part 3 – DLOps

Exploring the foundations of neural networks using an MLP

A deep learning architecture is created when at least three perceptron layers are used, excluding the input layer. A perceptron is a single-layer network consisting of neuron units. Each neuron holds a bias term and acts as a node (vertex) in the network graph; the connections (edges) between neurons in adjacent layers carry the weights that are applied during computation. A perceptron layer is also known as a fully connected layer or dense layer, and MLPs are also known as feedforward neural networks or fully connected neural networks.
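To make the idea of stacked fully connected layers concrete, here is a minimal sketch of an MLP in PyTorch. The layer sizes and activation choices are illustrative assumptions, not values taken from the book's examples:

import torch.nn as nn

# A minimal MLP: each nn.Linear is a fully connected (dense) layer,
# holding a weight matrix for its connections and a bias per neuron.
mlp = nn.Sequential(
    nn.Linear(3, 8),   # input layer (3 features) -> first hidden layer
    nn.ReLU(),
    nn.Linear(8, 8),   # second hidden layer
    nn.ReLU(),
    nn.Linear(8, 1),   # output layer (single prediction)
)

print(mlp)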

Let’s refer back to the MLP figure from the previous chapter to get a better idea.

Figure 2.1 – Simple deep learning architecture, also called an MLP

The figure shows how three input data columns are passed into the input layer, propagated to the hidden layer, and finally through the output layer. Although not...
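To connect the figure to code, the following sketch shows how a batch of rows with three data columns would propagate through a network of that shape. The batch size, hidden width, and activation are assumptions made for illustration:

import torch
import torch.nn as nn

# Hypothetical batch of 4 rows, each with 3 data columns (input features)
x = torch.randn(4, 3)

hidden = nn.Linear(3, 8)   # input layer -> hidden layer
output = nn.Linear(8, 1)   # hidden layer -> output layer

h = torch.relu(hidden(x))  # propagate through the hidden layer
y = output(h)              # propagate through the output layer

# Shapes: [4, 3] -> [4, 8] -> [4, 1]
print(x.shape, h.shape, y.shape)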