Python: Advanced Guide to Artificial Intelligence

By: Giuseppe Bonaccorso, Rajalingappaa Shanmugamani

Overview of this book

This Learning Path is your complete guide to quickly getting to grips with popular machine learning algorithms. You'll be introduced to the most widely used algorithms in supervised, unsupervised, and semi-supervised machine learning, and learn how to use them in the best possible manner. Ranging from Bayesian models to the MCMC algorithm to Hidden Markov models, this Learning Path will teach you how to extract features from your dataset and perform dimensionality reduction by making use of Python-based libraries. You'll use TensorFlow and Keras to build deep learning models, applying concepts such as transfer learning, generative adversarial networks, and deep reinforcement learning. Next, you'll learn the advanced features of TensorFlow 1.x, such as distributed TensorFlow with TF clusters and deploying production models with TensorFlow Serving. You'll implement different techniques related to object classification, object detection, image segmentation, and more. By the end of this Learning Path, you'll have obtained in-depth knowledge of TensorFlow, making you the go-to person for solving artificial intelligence problems.

This Learning Path includes content from the following Packt products:

  • Mastering Machine Learning Algorithms by Giuseppe Bonaccorso
  • Mastering TensorFlow 1.x by Armando Fandango
  • Deep Learning for Computer Vision by Rajalingappaa Shanmugamani

Strategies for distributed execution


To distribute the training of a single model across multiple devices or nodes, the following strategies can be used:

  • Model Parallel: Divide the model into multiple subgraphs and place each subgraph on a different node or device. The subgraphs perform their own computation and exchange variables as required, as sketched in the first code example below.
  • Data Parallel: Divide the data into batches and run the same model on multiple nodes or devices, combining the parameters on a master node. The worker nodes thus train the model on batches of data and send their parameter updates to the master node, also known as the parameter server, as sketched in the second code example below.
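As a minimal sketch of the model parallel strategy, the following places two halves of a small network on different devices; the device names and layer sizes are illustrative assumptions, not taken from the book:

    import tensorflow as tf

    x = tf.placeholder(tf.float32, shape=[None, 784])

    # First subgraph: the lower layers are placed on the first GPU.
    with tf.device('/device:GPU:0'):
        w1 = tf.Variable(tf.truncated_normal([784, 256]))
        h1 = tf.nn.relu(tf.matmul(x, w1))

    # Second subgraph: the upper layers are placed on the second GPU.
    # TensorFlow inserts the transfer of h1 between devices automatically.
    with tf.device('/device:GPU:1'):
        w2 = tf.Variable(tf.truncated_normal([256, 10]))
        logits = tf.matmul(h1, w2)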

The preceding diagram shows the data parallel approach: the model replicas read partitions of the data in batches and send their parameter updates to the parameter servers, and the parameter servers send the updated parameters back to the model replicas for the next batched computation of updates.
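As a sketch of this data parallel pattern in TensorFlow 1.x, a worker can place its variables on a parameter server with tf.train.replica_device_setter; the cluster addresses below are hypothetical placeholders:

    import tensorflow as tf

    # Hypothetical cluster definition: one parameter server, two workers.
    # The hostnames and ports are placeholders, not real addresses.
    cluster = tf.train.ClusterSpec({
        'ps': ['ps0.example.com:2222'],
        'worker': ['worker0.example.com:2222', 'worker1.example.com:2222'],
    })

    # Each process starts a server for its own role; in practice, job_name
    # and task_index come from command-line flags or the environment.
    server = tf.train.Server(cluster, job_name='worker', task_index=0)

    # replica_device_setter places the variables on the parameter server
    # and the remaining ops on this worker, so every worker reads and
    # updates the same shared parameters.
    with tf.device(tf.train.replica_device_setter(
            worker_device='/job:worker/task:0', cluster=cluster)):
        x = tf.placeholder(tf.float32, shape=[None, 784])
        w = tf.Variable(tf.zeros([784, 10]))  # stored on the ps job
        b = tf.Variable(tf.zeros([10]))       # stored on the ps job
        logits = tf.matmul(x, w) + b          # computed on this worker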

In TensorFlow, there are two ways to implement replication of the model...