
Deep Learning for Genomics

By: Upendra Kumar Devisetty

Overview of this book

Deep learning has shown remarkable promise in the field of genomics; however, the discipline lacks a skilled deep learning workforce. This book will help researchers and data scientists stand out from the crowd and solve real-world problems in genomics by developing the necessary skill set. Starting with an introduction to the essential concepts, this book highlights the power of deep learning in handling big data in genomics. First, you’ll learn about conventional genomics analysis, then transition to state-of-the-art machine learning-based genomics applications, and finally dive into deep learning approaches for genomics. The book covers all of the important deep learning algorithms commonly used by the research community and goes into the details of what they are, how they work, and their practical applications in genomics. The book dedicates an entire section to operationalizing deep learning models, with hands-on tutorials that show researchers and deep learning practitioners how to build, tune, interpret, deploy, evaluate, and monitor deep learning models on big genomics datasets. By the end of this book, you’ll have learned about the challenges, best practices, and pitfalls of deep learning for genomics.
Table of Contents (18 chapters)
Part 1 – Machine Learning in Genomics
Part 2 – Deep Learning for Genomic Applications
Part 3 – Operationalizing models

Introduction to CNNs

Just to refresh your memory, FNNs are fully connected NNs in which every node in one layer is connected to every node in the next layer, and so on (Figure 5.1). Each edge or connection has a weight that is either initialized randomly or derived from domain knowledge and is ultimately learned by the algorithm during model training. Each weight is multiplied by the input value coming from its node, and the weighted sum across all nodes in the preceding layer, plus a bias term, is passed through an activation function that determines whether (and how strongly) the result is passed on to the next layer. The process repeats layer by layer until the final output layer, which has one or more neurons depending on the type of learning and whether the network produces a regression prediction or a classification. FNNs work well for structured data where you have features and samples as input:

Figure 5.1
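
To make the forward pass described above concrete, here is a minimal NumPy sketch of a tiny FNN. The layer sizes, random initialization, and placeholder input values are illustrative assumptions, not an example taken from the book:

    # Minimal sketch of one forward pass through a tiny FNN (illustrative sizes).
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(z):
        return np.maximum(0.0, z)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One sample with 4 structured input features (placeholder values).
    x = rng.random(4)

    # Hidden layer: every input node connects to every hidden node, so the
    # weights form a (4 x 3) matrix; each hidden node also has a bias term.
    W1 = rng.normal(size=(4, 3))   # randomly initialized weights
    b1 = np.zeros(3)
    h = relu(x @ W1 + b1)          # weighted sum plus bias, then activation

    # Output layer: a single neuron producing a probability-like score,
    # as you might use for binary classification.
    W2 = rng.normal(size=(3, 1))
    b2 = np.zeros(1)
    y_hat = sigmoid(h @ W2 + b2)
    print(y_hat)

During training, the weights and biases would be updated by backpropagation; here they are only initialized, to show how the weighted sums flow from one layer to the next.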