The TensorFlow Workshop

By: Matthew Moocarme, Abhranshu Bagchi, Anthony So, Anthony Maddalone

Overview of this book

Getting to grips with tensors, deep learning, and neural networks can be intimidating and confusing for anyone, no matter their experience level. The breadth of information out there, often written at a very high level and aimed at advanced practitioners, can make getting started even more challenging. If this sounds familiar to you, The TensorFlow Workshop is here to help. Combining clear explanations, realistic examples, and plenty of hands-on practice, it’ll quickly get you up and running.

You’ll start off with the basics – learning how to load data into TensorFlow, perform tensor operations, and utilize common optimizers and activation functions. As you progress, you’ll experiment with different TensorFlow development tools, including TensorBoard, TensorFlow Hub, and Google Colab, before moving on to solve regression and classification problems with sequential models.

Building on this solid foundation, you’ll learn how to tune models and work with different types of neural networks, getting hands-on with real-world deep learning applications such as text encoding, temperature forecasting, image augmentation, and audio processing. By the end of this deep learning book, you’ll have the skills, knowledge, and confidence to tackle your own ambitious deep learning projects with TensorFlow.

Activation functions

Activation functions are mathematical functions that are generally applied to the outputs of ANN layers to limit or bound their values. Bounding these values matters because, without activation functions, the values and their corresponding gradients can either explode or vanish, making the results unusable. This happens because the network's final output is the cumulative product of the values from each successive layer, so as the number of layers increases, the likelihood of values and gradients exploding to infinity or vanishing to zero also increases. This is known as the exploding and vanishing gradient problem. Activation functions are also used to decide whether a node in a layer should be activated, hence their name. Common activation functions, visualized in Figure 1.36, are as follows:

  • Step function: The value is non-zero if it is above a certain threshold; otherwise, it is zero. This is shown in Figure 1.36a...
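
To make the idea of bounding layer outputs concrete, the following is a minimal sketch (not taken from the book) that applies a hand-written step function and two of TensorFlow's built-in activations to a small tensor. The tensor values and the threshold of 0.0 are arbitrary assumptions chosen for illustration:

```python
import tensorflow as tf

# A small batch of raw layer outputs (logits); values chosen arbitrarily.
logits = tf.constant([-2.0, -0.5, 0.0, 0.5, 2.0])

# Step function: 1.0 above a chosen threshold, 0.0 otherwise.
# The threshold of 0.0 is an assumption for this example.
def step(x, threshold=0.0):
    return tf.cast(x > threshold, tf.float32)

print(step(logits).numpy())           # [0. 0. 0. 1. 1.]
print(tf.nn.sigmoid(logits).numpy())  # values bounded to the range (0, 1)
print(tf.nn.relu(logits).numpy())     # negative values clipped to 0
```

In practice, rather than applying these functions manually, you typically pass an activation to a layer via its activation argument, for example tf.keras.layers.Dense(8, activation='relu').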