The Deep Learning Architect's Handbook

By: Ee Kin Chin
Overview of this book

Deep learning enables previously unattainable feats in automation, but extracting real-world business value from it is a daunting task. This book will teach you how to build complex deep learning models and gain intuition for structuring your data to accomplish your deep learning objectives. This deep learning book explores every aspect of the deep learning life cycle, from planning and data preparation to model deployment and governance, using real-world scenarios that will take you through creating, deploying, and managing advanced solutions. You’ll also learn how to work with image, audio, text, and video data using deep learning architectures, as well as optimize and evaluate your deep learning models objectively to address issues such as bias, fairness, adversarial attacks, and model transparency. As you progress, you’ll harness the power of AI platforms to streamline the deep learning life cycle and leverage Python libraries and frameworks such as PyTorch, ONNX, Catalyst, MLFlow, Captum, Nvidia Triton, Prometheus, and Grafana to execute efficient deep learning architectures, optimize model performance, and streamline the deployment processes. You’ll also discover the transformative potential of large language models (LLMs) for a wide array of applications. By the end of this book, you'll have mastered deep learning techniques to unlock its full potential for your endeavors.

Tailoring bias and fairness measures across use cases

Selecting bias and fairness metrics for your use case can follow much the same process as selecting general model performance evaluation metrics, which was introduced in Chapter 10, Exploring Model Evaluation Methods, in the Engineering the base model evaluation metric section. So, be sure to check that topic out! However, bias and fairness have unique aspects that call for additional heuristic recommendations. Earlier, recommendations for choosing among metrics within the same metric group were explored. Now, let’s explore general recommendations for the four metric groups:

  • Equal representation is always desired when sensitive or protected attributes are present. So, when you see these attributes, be sure to apply equal representation-based metrics to both your data and your model’s predictions (a minimal sketch of such a check follows). Examples of such attributes include race, gender, religion, sexual orientation, disability, age, socioeconomic status, political affiliations...
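
The following is a minimal sketch of what an equal representation-based check could look like, assuming your data lives in a pandas DataFrame with a hypothetical sensitive attribute column (here, a made-up "gender" column) and a binary prediction column. The column names, data values, and helper functions are illustrative assumptions, not part of any specific library:

```python
# Hypothetical sketch of equal representation-based checks on data and model outputs.
import pandas as pd

def representation_rates(df: pd.DataFrame, attribute: str) -> pd.Series:
    """Share of each group in the data; roughly uniform shares indicate
    equal representation of the sensitive attribute in the dataset."""
    return df[attribute].value_counts(normalize=True)

def demographic_parity_gap(df: pd.DataFrame, attribute: str,
                           prediction_col: str = "prediction") -> float:
    """Gap between the highest and lowest positive-prediction rate across
    groups; a value near 0.0 suggests the model treats groups similarly."""
    rates = df.groupby(attribute)[prediction_col].mean()
    return float(rates.max() - rates.min())

# Toy, entirely illustrative data
df = pd.DataFrame({
    "gender": ["A", "A", "B", "B", "B", "A"],
    "prediction": [1, 0, 1, 1, 1, 0],
})

print(representation_rates(df, "gender"))    # group shares in the data
print(demographic_parity_gap(df, "gender"))  # gap in positive-prediction rates
```

In practice, you would run checks like these on both the training data (representation) and the model's predictions (parity) for every sensitive or protected attribute you identify.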