Machine Learning Using TensorFlow Cookbook

By: Luca Massaron, Alexia Audevart, Konrad Banachewicz

Overview of this book

The independent recipes in Machine Learning Using TensorFlow Cookbook will teach you how to perform complex data computations and gain valuable insights into your data. Dive into recipes on training models, model evaluation, sentiment analysis, regression analysis, artificial neural networks, and deep learning, each using Google’s machine learning library, TensorFlow. This cookbook covers the fundamentals of the TensorFlow library, including variables, matrices, and various data sources. You’ll discover real-world implementations of Keras and TensorFlow and learn how to use estimators to train linear models and boosted trees for both classification and regression. Explore the practical applications of a variety of deep learning architectures, such as recurrent neural networks and Transformers, and see how they can be used to solve computer vision and natural language processing (NLP) problems. With the help of this book, you will be proficient in using TensorFlow, understand deep learning from the basics, and be able to implement machine learning algorithms in real-world scenarios.

Working with gates and activation functions

Now that we can link together operational gates, we want to run the computational graph output through an activation function. In this section, we will introduce common activation functions.
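As a quick illustration (the gate, constants, and input values here are hypothetical placeholders, not the recipe's code), TensorFlow's built-in activations can be applied directly to a gate's output:

    import tensorflow as tf

    # A toy multiply-and-add gate, f(x) = a * x + b (illustrative values)
    a = tf.constant(5.0)
    b = tf.constant(1.0)
    x = tf.constant([-2.0, -0.5, 0.0, 0.5, 2.0])
    gate_output = a * x + b

    # Sigmoid squashes values into (0, 1); ReLU zeroes out negatives
    print(tf.nn.sigmoid(gate_output).numpy())
    print(tf.nn.relu(gate_output).numpy())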

Getting ready

In this section, we will compare and contrast two different activation functions: sigmoid and rectified linear unit (ReLU). Recall that the two functions are given by the following equations:

    sigmoid(x) = 1 / (1 + exp(-x))

    ReLU(x) = max(0, x)

In this example, we will create two one-layer neural networks with the same structure, except that one will feed through the sigmoid activation and the other will feed through the ReLU activation. The loss function will be the L2 distance between the network output and the target value 0.75. We will randomly pull batch data and then optimize both outputs toward 0.75.
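Before walking through the steps, here is a minimal TensorFlow 2.x sketch of the whole setup. The batch size, learning rate, input distribution, and step count are illustrative assumptions, not necessarily the book's exact values:

    import numpy as np
    import tensorflow as tf

    # Fixing the seeds (an assumption) keeps the two networks comparable
    np.random.seed(42)
    tf.random.set_seed(5)

    batch_size = 50   # assumed batch size
    target = 0.75     # the target value from the recipe

    # Two one-layer "networks": a single weight and bias each, identical
    # in structure and differing only in the output activation.
    w_sig = tf.Variable(tf.random.normal([1, 1]))
    b_sig = tf.Variable(tf.random.uniform([1, 1]))
    w_relu = tf.Variable(tf.random.normal([1, 1]))
    b_relu = tf.Variable(tf.random.uniform([1, 1]))

    optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

    def train_step(activation, w, b, x):
        # Loss is the L2 distance between the activated output and the target
        with tf.GradientTape() as tape:
            output = activation(tf.matmul(x, w) + b)
            loss = tf.reduce_mean(tf.square(output - target))
        grads = tape.gradient(loss, [w, b])
        optimizer.apply_gradients(zip(grads, [w, b]))
        return loss

    for step in range(500):
        # Randomly pulled batch data; normal(2, 0.1) inputs are an assumption
        x = tf.constant(
            np.random.normal(2.0, 0.1, size=(batch_size, 1)).astype(np.float32))
        loss_sig = train_step(tf.nn.sigmoid, w_sig, b_sig, x)
        loss_relu = train_step(tf.nn.relu, w_relu, b_relu, x)

Because the sigmoid saturates for large inputs, you can expect its output to approach 0.75 more slowly than the ReLU network's.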

How to do it...

We proceed with the recipe as follows:

  1. We will start by loading the necessary libraries. This is also a good point at which we can...
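     As a sketch of this step, assuming TensorFlow 2.x with NumPy for batch generation (the seed values are illustrative), the imports might look like this:

         import numpy as np
         import tensorflow as tf

         # Setting seeds here (an assumption) makes the comparison reproducible
         np.random.seed(42)
         tf.random.set_seed(5)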