The Deep Learning Architect's Handbook

By: Ee Kin Chin
Overview of this book

Deep learning enables previously unattainable feats in automation, but extracting real-world business value from it is a daunting task. This book will teach you how to build complex deep learning models and gain intuition for structuring your data to accomplish your deep learning objectives. This deep learning book explores every aspect of the deep learning life cycle, from planning and data preparation to model deployment and governance, using real-world scenarios that will take you through creating, deploying, and managing advanced solutions. You’ll also learn how to work with image, audio, text, and video data using deep learning architectures, as well as optimize and evaluate your deep learning models objectively to address issues such as bias, fairness, adversarial attacks, and model transparency. As you progress, you’ll harness the power of AI platforms to streamline the deep learning life cycle and leverage Python libraries and frameworks such as PyTorch, ONNX, Catalyst, MLFlow, Captum, Nvidia Triton, Prometheus, and Grafana to execute efficient deep learning architectures, optimize model performance, and streamline the deployment processes. You’ll also discover the transformative potential of large language models (LLMs) for a wide array of applications. By the end of this book, you'll have mastered deep learning techniques to unlock its full potential for your endeavors.
Table of Contents (25 chapters)

  • Part 1 – Foundational Methods
  • Part 2 – Multimodal Model Insights
  • Part 3 – DLOps

Exploring autoencoder variations

For tabular data, the network structure can be fairly straightforward: an MLP whose encoder uses several fully connected layers that gradually shrink the number of features, and whose decoder uses several fully connected layers that gradually expand the representation back to the same dimensionality as the input.
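
A minimal PyTorch sketch of such a tabular autoencoder is shown below; the layer widths, latent size, and the `TabularAutoencoder` name are illustrative choices rather than values prescribed by the text:

```python
import torch
import torch.nn as nn

class TabularAutoencoder(nn.Module):
    """MLP autoencoder for tabular data: the encoder shrinks the feature
    count layer by layer, and the decoder mirrors it back to the original
    input dimension."""
    def __init__(self, input_dim: int, latent_dim: int = 8):
        super().__init__()
        # Encoder: fully connected layers that gradually reduce the features
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, latent_dim),
        )
        # Decoder: fully connected layers that expand back to the input size
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(),
            nn.Linear(32, 64), nn.ReLU(),
            nn.Linear(64, input_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Illustrative reconstruction step on a random batch
model = TabularAutoencoder(input_dim=20)
x = torch.randn(16, 20)                       # 16 rows with 20 features each
loss = nn.functional.mse_loss(model(x), x)    # reconstruction loss
loss.backward()
```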

For time-series or other sequential data, RNN-based autoencoders can be used. One of the most cited works on RNN-based autoencoders uses LSTM-based encoders and decoders: the research paper Sequence to Sequence Learning with Neural Networks by Ilya Sutskever, Oriol Vinyals, and Quoc V. Le (https://arxiv.org/abs/1409.3215). Instead of stacking the decoder LSTM vertically on top of the encoder LSTM and consuming its hidden-state output sequence, the decoder continues the sequential flow from the encoder's final state and outputs the reconstructed input in reversed order...
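
The following is a minimal PyTorch sketch of this idea, assuming a single-layer LSTM encoder and decoder; feeding the previous reconstruction back as the next decoder input, the `LSTMAutoencoder` name, and the hidden size are illustrative assumptions, not details fixed by the text:

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """LSTM autoencoder sketch: the decoder continues from the encoder's
    final hidden/cell state rather than being stacked on top of it, and the
    training target is the input sequence in reversed order."""
    def __init__(self, n_features: int, hidden_dim: int = 32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(n_features, hidden_dim, batch_first=True)
        self.output_layer = nn.Linear(hidden_dim, n_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Encode the whole sequence; keep only the final (hidden, cell) state
        _, state = self.encoder(x)
        batch, seq_len, n_features = x.shape
        # Decode step by step, starting from the encoder's final state;
        # here the previous reconstruction is fed back as the next input
        step_in = torch.zeros(batch, 1, n_features)
        outputs = []
        for _ in range(seq_len):
            dec_out, state = self.decoder(step_in, state)
            step_in = self.output_layer(dec_out)
            outputs.append(step_in)
        return torch.cat(outputs, dim=1)

# Train against the time-reversed input, as described above
model = LSTMAutoencoder(n_features=3)
x = torch.randn(8, 10, 3)                     # 8 sequences, 10 steps, 3 features
loss = nn.functional.mse_loss(model(x), torch.flip(x, dims=[1]))
loss.backward()
```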
