R Bioinformatics Cookbook - Second Edition

By: Dan MacLean
Overview of this book

The updated second edition of R Bioinformatics Cookbook takes a recipe-based approach to show you how to conduct practical research and analysis in computational biology with R. You’ll learn how to create a useful and modular R working environment, along with loading, cleaning, and analyzing data using the most up-to-date Bioconductor, ggplot2, and tidyverse tools. This book will walk you through the Bioconductor tools necessary for you to understand and carry out protocols in RNA-seq and ChIP-seq, phylogenetics, genomics, gene search, gene annotation, statistical analysis, and sequence analysis. As you advance, you’ll find out how to use Quarto to create data-rich reports, presentations, and websites, as well as get a clear understanding of how machine learning techniques can be applied in the bioinformatics domain. The concluding chapters will help you develop proficiency in key skills, such as gene annotation analysis and functional programming in purrr and base R. Finally, you’ll discover how to use the latest AI tools, including ChatGPT, to generate, edit, and understand R code and draft workflows for complex analyses. By the end of this book, you’ll have gained a solid understanding of the skills and techniques needed to become a bioinformatics specialist and efficiently work with large and complex bioinformatics datasets.

Testing the fit of the model using cross-validation

Cross-validation provides a reliable estimate of a model’s performance on unseen data. By evaluating the model on multiple held-out subsets of the data, it reduces the effect of any single random train/test split and gives a more realistic assessment of the model’s generalizability.

K-fold cross-validation randomly partitions the dataset into K folds of approximately equal size, where K is a predefined number typically chosen between 5 and 10. A model is then trained on K-1 folds and evaluated on the fold left out, so K separate training and evaluation cycles are performed. The performance values from the K iterations are averaged to obtain a single metric that represents the overall performance.
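As a minimal sketch of the procedure, the base R loop below runs 5-fold cross-validation for a linear model on the built-in mtcars data; the model formula (mpg ~ wt + hp) and the RMSE metric are illustrative choices, not taken from this recipe.

set.seed(42)
k <- 5

# Randomly assign each row to one of k folds of roughly equal size
folds <- sample(rep(1:k, length.out = nrow(mtcars)))

rmse_per_fold <- sapply(1:k, function(i) {
  train <- mtcars[folds != i, ]  # train on the K-1 remaining folds
  test  <- mtcars[folds == i, ]  # evaluate on the held-out fold
  fit   <- lm(mpg ~ wt + hp, data = train)
  preds <- predict(fit, newdata = test)
  sqrt(mean((test$mpg - preds)^2))  # RMSE on the held-out fold
})

# Average the K per-fold values into a single performance estimate
mean(rmse_per_fold)

Because the final estimate is an average over all K folds, no single lucky or unlucky split dominates the result.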

Leave-one-out (LOO) cross-validation is a variant of cross-validation where the number of folds equals the number of observations in the dataset: each model is trained on all of the data except a single observation and evaluated on that one held-out point.
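Continuing the same illustrative setup, LOO cross-validation is simply K-fold with K equal to the number of rows, so each iteration holds out exactly one observation:

n <- nrow(mtcars)
loo_sq_errors <- sapply(1:n, function(i) {
  fit  <- lm(mpg ~ wt + hp, data = mtcars[-i, ])             # drop row i from training
  pred <- predict(fit, newdata = mtcars[i, , drop = FALSE])  # predict the held-out row
  (mtcars$mpg[i] - pred)^2                                   # squared error on row i
})
sqrt(mean(loo_sq_errors))  # LOO estimate of RMSE

Note that LOO requires fitting n separate models, so it is only practical when the dataset or the model-fitting cost is small.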