The Deep Learning Architect's Handbook

By: Ee Kin Chin
Overview of this book

Deep learning enables previously unattainable feats in automation, but extracting real-world business value from it is a daunting task. This book will teach you how to build complex deep learning models and gain intuition for structuring your data to accomplish your deep learning objectives. It explores every aspect of the deep learning life cycle, from planning and data preparation to model deployment and governance, using real-world scenarios that will take you through creating, deploying, and managing advanced solutions. You'll also learn how to work with image, audio, text, and video data using deep learning architectures, as well as optimize and evaluate your deep learning models objectively to address issues such as bias, fairness, adversarial attacks, and model transparency. As you progress, you'll harness the power of AI platforms to streamline the deep learning life cycle and leverage Python libraries and frameworks such as PyTorch, ONNX, Catalyst, MLflow, Captum, NVIDIA Triton, Prometheus, and Grafana to implement efficient deep learning architectures, optimize model performance, and streamline deployment processes. You'll also discover the transformative potential of large language models (LLMs) for a wide array of applications. By the end of this book, you'll have mastered deep learning techniques to unlock their full potential for your endeavors.
Table of Contents (25 chapters)

Part 1 – Foundational Methods
Part 2 – Multimodal Model Insights
Part 3 – DLOps

Creating general representations through unsupervised deep learning

The representations learned through unsupervised deep learning can be used as-is by downstream supervised predictive models or consumed directly by end users. A handful of broadly impactful unsupervised methods use neural networks primarily as feature extractors. Let's take a look at a couple of these unsupervised feature extractors:

  • Unsupervised pre-trained word tokenizers: These are used heavily by variants of the transformer architecture and were introduced in Chapter 8, Exploring Supervised Deep Learning, in the Representing text data for supervised deep learning section. A minimal training sketch is shown after this list.
  • Unsupervised pre-trained word embeddings: These methods leverage unsupervised learning to perform a form of language modeling, similar to masked language modeling in transformers. However, word embedding-based methods have largely been overtaken by transformer-based approaches. A sketch of training embeddings and reusing them as downstream features follows the tokenizer example below.
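
The first extractor type can be illustrated with a minimal sketch using the Hugging Face tokenizers library: a byte-pair encoding (BPE) subword vocabulary is learned purely from raw, unlabeled text. The corpus file name (corpus.txt) and the vocabulary size here are illustrative assumptions, not values from the book:

from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# Build a BPE tokenizer and train it on a hypothetical plain-text corpus.
# No labels are needed: the subword vocabulary is learned from raw text alone.
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(vocab_size=5000, special_tokens=["[UNK]", "[PAD]"])
tokenizer.train(files=["corpus.txt"], trainer=trainer)  # corpus.txt is a placeholder

# The trained tokenizer maps text to subword tokens and integer IDs that a
# transformer-based model can consume downstream.
encoding = tokenizer.encode("Unsupervised tokenizers learn subwords from raw text.")
print(encoding.tokens)
print(encoding.ids)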
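
The second extractor type, along with the broader idea of feeding unsupervised representations into a supervised model, can be sketched with gensim's skip-gram Word2Vec. The tiny corpus and the sentence_vector helper below are illustrative assumptions for demonstration only:

import numpy as np
from gensim.models import Word2Vec

# A tiny illustrative corpus of pre-tokenized sentences; in practice you would
# train on a large unlabeled text collection.
sentences = [
    ["deep", "learning", "models", "learn", "general", "representations"],
    ["word", "embeddings", "capture", "semantic", "similarity"],
    ["unsupervised", "training", "requires", "no", "labels"],
]

# Skip-gram (sg=1) Word2Vec predicts surrounding context words, a simple form
# of language modeling learned without any supervision.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=50)

# One common way to reuse the embeddings downstream: average a sentence's word
# vectors into a fixed-size feature vector for a supervised classifier.
def sentence_vector(tokens):
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

features = sentence_vector(["deep", "learning", "representations"])
print(features.shape)  # (50,) - ready to feed into a downstream supervised model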