Hands-On Machine Learning on Google Cloud Platform

By: Giuseppe Ciaburro, V Kishore Ayyadevara, Alexis Perrier

Overview of this book

Google Cloud Machine Learning Engine combines the services of Google Cloud Platform with the power and flexibility of TensorFlow. With this book, you will not only learn to build and train machine learning models of varying complexity at scale, but also host them in the cloud to make predictions. This book focuses on making the most of the Google Machine Learning Platform for large datasets and complex problems. You will learn from scratch how to create powerful machine learning-based applications for a wide variety of problems by leveraging different data services from the Google Cloud Platform. Applications include NLP, speech-to-text, reinforcement learning, time series, recommender systems, image classification, video content inference, and many others. We will implement a wide variety of deep learning use cases and also make extensive use of the data-related services that comprise the Google Cloud Platform ecosystem, such as Firebase, Storage APIs, Datalab, and so forth. This will enable you to integrate machine learning and data processing features into your web and mobile applications. By the end of this book, you will know the main difficulties that you may encounter and the appropriate strategies to overcome them and build efficient systems.

Google Cloud Dataflow

Google Cloud Dataflow is a fully managed service for creating data pipelines that transform, enrich, and analyze data in both batch and streaming modes. Google Cloud Dataflow extracts useful information from data while reducing operating costs, without the hassle of deploying, maintaining, or scaling the underlying data infrastructure.

A pipeline is a set of data processing elements connected in series, in which the output of one element is the input of the next. A data pipeline is implemented to increase throughput, that is, the amount of work completed in a given amount of time, by parallelizing the processing of multiple elements across the stages of the flow.
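To make the idea of chained processing elements concrete, here is a minimal sketch of a pipeline written with the Apache Beam Python SDK, which is the SDK used to author Dataflow jobs. Each step's output feeds the next step's input, and the runner can parallelize the work. The bucket paths and step names are hypothetical placeholders; running on Dataflow would additionally require your own --runner, --project, and --temp_location options.

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Parses runner/project flags from the command line; with no flags,
# the pipeline runs locally on the DirectRunner.
options = PipelineOptions()

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/input.txt")  # hypothetical path
        | "Split" >> beam.FlatMap(lambda line: line.split())          # one word per element
        | "PairWithOne" >> beam.Map(lambda word: (word, 1))
        | "CountPerWord" >> beam.CombinePerKey(sum)                   # aggregation runs in parallel
        | "Format" >> beam.Map(lambda kv: "{}: {}".format(kv[0], kv[1]))
        | "Write" >> beam.io.WriteToText("gs://my-bucket/output")     # hypothetical path
    )

Each `|` connects one processing element to the next, so the same pipeline definition runs unchanged whether the data is a small local file or a large dataset processed in parallel on Dataflow.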

By appropriately defining the process management flow, significant resources can be saved when extracting knowledge from data. Thanks to a serverless approach to provisioning and managing resources, Dataflow offers virtually unlimited...