The Machine Learning Solutions Architect Handbook

By: David Ping

Overview of this book

When equipped with a highly scalable machine learning (ML) platform, organizations can quickly scale the delivery of ML products for faster business value realization. There is a huge demand for skilled ML solutions architects in different industries, and this handbook will help you master the design patterns, architectural considerations, and the latest technology insights you’ll need to become one. You’ll start by understanding ML fundamentals and how ML can be applied to solve real-world business problems. Once you've explored a few leading problem-solving ML algorithms, this book will help you tackle data management and get the most out of ML libraries such as TensorFlow and PyTorch. Using open source technology such as Kubernetes/Kubeflow to build a data science environment and ML pipelines will be covered next, before moving on to building an enterprise ML architecture using Amazon Web Services (AWS). You’ll also learn about security and governance considerations, advanced ML engineering techniques, and how to apply bias detection, explainability, and privacy in ML model development. By the end of this book, you’ll be able to design and build an ML platform to support common use cases and architecture patterns like a true professional.
Table of Contents (17 chapters)

Section 1: Solving Business Challenges with Machine Learning Solution Architecture
Section 2: The Science, Tools, and Infrastructure Platform for Machine Learning
Section 3: Technical Architecture Design and Regulatory Considerations for Enterprise ML Platforms

Achieving low latency model inference

As ML models continue to grow in size and are deployed to a wider range of hardware devices, latency can become an issue for inference use cases that demand low latency and high throughput, such as real-time fraud detection. To reduce the overall model inference latency for a real-time application, we can apply several optimization techniques, including model optimization, graph optimization, hardware acceleration, and inference engine optimization. In this section, we will focus on model optimization, graph optimization, and hardware acceleration. But first, let's understand how model inference works, specifically for DL models, since that's where most inference optimization efforts are focused.
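To make one of these techniques concrete, the following is a minimal sketch of model optimization using dynamic quantization in PyTorch; the toy model and tensor shapes are illustrative assumptions, not an example from the book. Quantizing the weights of Linear layers to int8 shrinks the model and can reduce CPU inference latency, typically at the cost of a small amount of accuracy:

```python
import torch
import torch.nn as nn

# A small stand-in network; a real model would be loaded from training output
model = nn.Sequential(
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
model.eval()

# Dynamic quantization converts Linear weights to int8 ahead of time;
# activations are quantized on the fly during inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Run a single illustrative inference on the quantized model
x = torch.randn(1, 256)
with torch.no_grad():
    y = quantized(x)
print(y.shape)
```

Whether this trade-off is acceptable depends on the use case; latency-sensitive applications such as fraud detection often tolerate the minor accuracy loss in exchange for faster, cheaper inference.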

How model inference works and opportunities for optimization

As we discussed earlier in this book, DL models are constructed as computational graphs with nodes and edges, where the nodes represent mathematical operations and the edges represent the tensors that flow between them.
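As a brief illustration of this graph view (the two-layer model below is a made-up example, not one from the book), tracing a PyTorch model with TorchScript captures exactly such a graph, which downstream optimizers can then rewrite, for example by fusing adjacent operations:

```python
import torch
import torch.nn as nn

# Illustrative two-layer model used to show graph capture
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2)).eval()

# Tracing records the operations executed on an example input and
# produces a static computational graph (TorchScript IR).
traced = torch.jit.trace(model, torch.randn(1, 4))

# Each node in the printed graph is an operation (for example,
# aten::linear or aten::relu); the edges are the tensors between them.
print(traced.graph)
```

Having the full graph available ahead of execution is what makes optimizations such as operator fusion and constant folding possible, since the runtime can analyze and rewrite the graph before any input arrives.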