TensorFlow 1.x Deep Learning Cookbook

Overview of this book

Deep neural networks (DNNs) have achieved great success in the fields of computer vision, speech recognition, and natural language processing. This recipe-based guide takes you from DNN theory to practical implementations that solve real-life problems in the artificial intelligence domain. In this book, you will learn how to use TensorFlow, Google's open source framework for deep learning, efficiently. You will implement different deep learning networks, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Deep Q-learning Networks (DQNs), and Generative Adversarial Networks (GANs), with easy-to-follow, standalone recipes. You will learn how to use Keras with TensorFlow as its backend, and see how different DNNs perform on popular datasets such as MNIST, CIFAR-10, and YouTube-8M. You will not only learn about the mobile and embedded platforms supported by TensorFlow, but also how to set up cloud platforms for deep learning applications. You will also get a sneak peek at TPU architecture and how it will affect the future of DNNs. Through crisp, no-nonsense recipes, you will become an expert at implementing deep learning techniques in growing real-world applications and research areas such as reinforcement learning, GANs, and autoencoders.

Sparse autoencoder

The autoencoder we saw in the previous recipe works more like an identity network: it simply reconstructs its input. The emphasis is on reconstructing the image at the pixel level, and the only constraint is the number of units in the bottleneck layer. While pixel-level reconstruction is interesting, it does not ensure that the network learns abstract features from the dataset. We can encourage the network to learn abstract features by adding further constraints, as the sketches below illustrate.
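To make the contrast concrete, here is a minimal sketch of such a vanilla autoencoder in TensorFlow 1.x. The layer sizes (784 inputs for 28x28 MNIST-style images, a 256-unit bottleneck) and the learning rate are illustrative assumptions, not values taken from this recipe:

import tensorflow as tf

# Illustrative sizes: 784 = 28x28 pixels, 256 bottleneck units
n_input, n_hidden = 784, 256

x = tf.placeholder(tf.float32, [None, n_input])

# Encoder: compress the input into the bottleneck layer
w_enc = tf.Variable(tf.random_normal([n_input, n_hidden], stddev=0.1))
b_enc = tf.Variable(tf.zeros([n_hidden]))
h = tf.nn.sigmoid(tf.matmul(x, w_enc) + b_enc)

# Decoder: reconstruct the input from the bottleneck activations
w_dec = tf.Variable(tf.random_normal([n_hidden, n_input], stddev=0.1))
b_dec = tf.Variable(tf.zeros([n_input]))
x_hat = tf.nn.sigmoid(tf.matmul(h, w_dec) + b_dec)

# Pixel-level reconstruction error is the only training signal here
loss = tf.reduce_mean(tf.square(x_hat - x))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)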

In sparse autoencoders, a sparsity penalty term is added to the reconstruction error; it tries to ensure that fewer units in the bottleneck layer fire at any given time. If m is the total number of input patterns, then we can define a quantity ρ_hat, the average activation of a hidden unit over the m input patterns (you can check the mathematical details in Andrew Ng's lecture at https://web.stanford.edu/class/cs294a/sparseAutoencoder_2011new...
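Following Ng's notes, the penalty is the KL divergence between a small target sparsity ρ and the average activation ρ_hat of each hidden unit. Below is a minimal sketch of how such a penalty could be added to the reconstruction loss from the previous sketch; ρ = 0.05 and the penalty weight β = 1.0 are illustrative assumptions, and the batch average is used as an estimate of the average over all m input patterns:

rho = 0.05   # target average activation (illustrative assumption)
beta = 1.0   # weight of the sparsity penalty (illustrative assumption)
eps = 1e-8   # small constant to avoid log(0)

# rho_hat: per-unit activation averaged over the batch, an estimate
# of the average activation over all m input patterns
rho_hat = tf.reduce_mean(h, axis=0)

# KL(rho || rho_hat_j) = rho*log(rho/rho_hat_j)
#                      + (1-rho)*log((1-rho)/(1-rho_hat_j))
kl = (rho * tf.log(rho / (rho_hat + eps)) +
      (1 - rho) * tf.log((1 - rho) / (1 - rho_hat + eps)))

# Total loss: reconstruction error plus the weighted sparsity penalty
sparse_loss = loss + beta * tf.reduce_sum(kl)
sparse_train_op = tf.train.AdamOptimizer(1e-3).minimize(sparse_loss)

Because the hidden units use a sigmoid activation, each ρ_hat lies in (0, 1), so the KL term is well defined; driving ρ_hat toward a small ρ forces most bottleneck units to stay near zero for any given input.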

Visually different images
CONTINUE READING
83
Tech Concepts
36
Programming languages
73
Tech Tools
Icon Unlimited access to the largest independent learning library in tech of over 8,000 expert-authored tech books and videos.
Icon Innovative learning tools, including AI book assistants, code context explainers, and text-to-speech.
Icon 50+ new titles added per month and exclusive early access to books as they are being written.