
Journey to Become a Google Cloud Machine Learning Engineer

By: Dr. Logan Song

Overview of this book

This book is a study guide for learning and mastering machine learning on Google Cloud: building a broad and solid knowledge base, developing hands-on skills, and getting certified as a Google Cloud Machine Learning Engineer. It is written for readers who have basic Google Cloud Platform (GCP) knowledge and basic Python programming skills, and who want to learn machine learning on GCP as their next step toward becoming a Google Cloud Certified Machine Learning professional. The book starts by laying the foundations of GCP and Python programming, followed by the building blocks of machine learning, then focuses on machine learning in Google Cloud, and finally prepares you for the Google Cloud Machine Learning certification by integrating all of this knowledge and these skills. It is based on the graduate courses the author has taught at the University of Texas at Dallas. As you work through the chapters, you are expected to study the concepts, complete the exercises, understand and practice the labs in the appendices, and study each exam question thoroughly. At the end of this learning journey, you can expect to have gained the knowledge, the skills, and a certificate.
Table of Contents (23 chapters)

Part 1: Starting with GCP and Python
Part 2: Introducing Machine Learning
Part 3: Mastering ML in GCP
Part 4: Accomplishing GCP ML Certification
Part 5: Appendices

Appendix 2: Practicing Using the Python Data Libraries

Long Short-Term Memory Networks

An LSTM network is designed to overcome the vanishing gradient problem. LSTMs have feedback connections, and the key to an LSTM is the cell state: a horizontal line running through the entire chain with only minor linear interactions, which persists context information across time steps. An LSTM adds information to, or removes information from, the cell state through gates, which combine activation functions, such as sigmoid and tanh, with pointwise multiplication operations.
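The gate mechanics described above can be sketched as a single LSTM time step in plain NumPy. This is an illustrative sketch, not the book's code: the weight layout (forget, input, candidate, and output rows stacked in one matrix) and the function names are assumptions made here for compactness.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step.

    x: input vector (D,); h_prev, c_prev: previous hidden and cell states (H,).
    W: stacked gate weights (4*H, H+D); b: stacked gate biases (4*H,).
    """
    H = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x]) + b
    f = sigmoid(z[0:H])        # forget gate: what to drop from the cell state
    i = sigmoid(z[H:2 * H])    # input gate: how much new information to admit
    g = np.tanh(z[2 * H:3 * H])  # candidate values to add to the cell state
    o = sigmoid(z[3 * H:4 * H])  # output gate: what part of the state to expose
    c = f * c_prev + i * g     # cell state update: the horizontal line in Figure 5.9
    h = o * np.tanh(c)         # hidden state passed to the next time step
    return h, c
```

Note how the cell state `c` is only touched by a pointwise multiplication and an addition, which is the "minor linear interactions" property that lets gradients flow across many time steps.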

Figure 5.9 – An LSTM model (source: https://colah.github.io/posts/2015-08-Understanding-LSTMs/)

Figure 5.9 shows an LSTM with the gates that protect and control the cell state. Because the cell state mitigates the vanishing gradient problem, LSTMs are particularly good at processing sequential data, such as text and speech.
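As an illustration of applying an LSTM to sequential data such as text, here is a minimal Keras sketch, assuming TensorFlow 2.x. The vocabulary size, embedding dimension, and unit counts are arbitrary assumptions for the example, not values from the book.

```python
import tensorflow as tf  # assumes TensorFlow 2.x is installed

# A tiny binary text classifier: token IDs -> embeddings -> LSTM -> probability.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=64),  # 10k-token vocab (assumed)
    tf.keras.layers.LSTM(128),                 # 128 LSTM units; returns the final hidden state
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of the positive class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

A batch of shape `(batch_size, sequence_length)` of integer token IDs produces an output of shape `(batch_size, 1)` with values in (0, 1).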