TensorFlow 1.x Deep Learning Cookbook

Overview of this book

Deep neural networks (DNNs) have achieved a lot of success in the fields of computer vision, speech recognition, and natural language processing. This exciting recipe-based guide will take you from the realm of DNN theory to implementing DNNs practically to solve real-life problems in the artificial intelligence domain. In this book, you will learn how to efficiently use TensorFlow, Google’s open source framework for deep learning. You will implement different deep learning networks, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Deep Q-learning Networks (DQNs), and Generative Adversarial Networks (GANs), with easy-to-follow standalone recipes. You will learn how to use TensorFlow with Keras as the backend. You will learn how different DNNs perform on popular datasets, such as MNIST, CIFAR-10, and YouTube-8M. You will not only learn about the different mobile and embedded platforms supported by TensorFlow, but also how to set up cloud platforms for deep learning applications. You will also get a sneak peek at TPU architecture and how it will affect the future of DNNs. Through crisp, no-nonsense recipes, you will become an expert at implementing deep learning techniques in growing real-world application and research areas such as reinforcement learning, GANs, and autoencoders.

Choosing loss functions

As discussed earlier, in regression we define a loss (or objective) function, and the aim is to find the coefficients that minimize it. In this recipe, you will learn how to define loss functions in TensorFlow and choose the one appropriate for the problem at hand.
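As a quick preview, here is a minimal sketch of a few common losses in TensorFlow 1.x. The placeholder names (Y, Y_hat, logits, labels) are illustrative, not part of the recipe's code:

import tensorflow as tf

# Hypothetical placeholders standing in for targets and predictions
Y = tf.placeholder(tf.float32, name='Y')          # ground truth
Y_hat = tf.placeholder(tf.float32, name='Y_hat')  # model output

# L2 (mean squared error) loss, the usual choice for regression
loss_mse = tf.reduce_mean(tf.square(Y - Y_hat))

# L1 (absolute error) loss, more robust to outliers
loss_l1 = tf.reduce_mean(tf.abs(Y - Y_hat))

# Cross-entropy loss, the usual choice for classification
logits = tf.placeholder(tf.float32, [None, 2], name='logits')
labels = tf.placeholder(tf.float32, [None, 2], name='labels')
loss_xent = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))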

Getting ready

Declaring a loss function requires defining the coefficients as variables and the dataset as placeholders. The learning rate and the regularization constant can either be kept fixed or varied during training. In the following code, let m be the number of samples, n the number of features, and P the number of classes. These global parameters should be defined before the code:

m = 1000  # number of training samples
n = 15    # number of features
P = 2     # number of classes
...
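Building on these globals, the sketch below shows one way the placeholders, variables, and an L2-regularized squared-error loss might be wired together in TensorFlow 1.x. The value of the regularization constant lamda is hypothetical:

import tensorflow as tf

# Placeholders for the training data (shapes use the globals above)
X = tf.placeholder(tf.float32, shape=[m, n], name='X')
Y = tf.placeholder(tf.float32, shape=[m, 1], name='Y')

# Coefficients declared as variables
b = tf.Variable(0.0, name='bias')
w = tf.Variable(tf.random_normal([n, 1]), name='weights')

# Linear model and L2-regularized squared-error loss
Y_hat = tf.matmul(X, w) + b
lamda = 0.8  # hypothetical regularization constant
loss = tf.reduce_mean(tf.square(Y - Y_hat)) + lamda * tf.nn.l2_loss(w)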