Mastering Predictive Analytics with R - Second Edition

By : James D. Miller, Rui Miguel Forte

Overview of this book

R offers a free and open source environment that is perfect for both learning and deploying predictive modeling solutions. With its constantly growing community and plethora of packages, R offers the functionality to deal with a truly vast array of problems. The book begins with a dedicated chapter on the language of models and the predictive modeling process. You will understand the learning curve and the process of tidying data. Each subsequent chapter tackles a particular type of model, such as neural networks, and focuses on the three important questions of how the model works, how to use R to train it, and how to measure and assess its performance using real-world datasets. How do you train models that can handle really large datasets? This book will also show you just that. Finally, you will tackle the really important topic of deep learning by implementing applications on word embedding and recurrent neural networks. By the end of this book, you will have explored and tested the most popular modeling techniques in use on real-world datasets and mastered a diverse range of techniques in predictive analytics using R.

Categorizing data quality


It is generally accepted that data quality issues can be categorized into one of the following areas:

  • Accuracy

  • Completeness

  • Update status

  • Relevance

  • Consistency (across sources)

  • Reliability

  • Appropriateness

  • Accessibility

The quality of your data can be affected by the way it is entered, stored, and managed. Addressing data quality (most often referred to as data quality assurance, or DQA) requires routine, regular review and evaluation of the data, along with ongoing processes termed profiling and scrubbing. This is vital even when the data is stored in multiple disparate systems, which makes these processes more difficult.
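
As a minimal illustration of profiling and scrubbing, the following R sketch runs the kind of routine checks a DQA review would repeat on a schedule. It assumes a hypothetical data frame loaded from a file named customers.csv; the file name and columns are placeholders, not part of the book's examples:

    # A minimal profiling sketch, assuming a hypothetical file
    # "customers.csv" with illustrative columns such as age and income
    customers <- read.csv("customers.csv", stringsAsFactors = FALSE)

    # Completeness: count missing values per column
    missing_per_column <- colSums(is.na(customers))
    print(missing_per_column)

    # Consistency and reliability: flag exact duplicate rows
    num_duplicates <- sum(duplicated(customers))
    cat("Duplicate rows:", num_duplicates, "\n")

    # Accuracy: basic summaries to spot implausible values or ranges
    summary(customers)

    # Example scrubbing step: drop duplicates and rows with missing values
    customers_clean <- unique(na.omit(customers))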

Here, tidying the data is much more project-centric: we are probably not concerned with creating a formal DQA process, only with making certain that the data is correct for our particular predictive project.
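
In practice, a project-centric tidy-up usually means keeping only the fields the model needs, coercing them to appropriate types, and deciding how to handle missing values. The short sketch below continues from the hypothetical customers_clean frame above; the column names (age, income, churned) are illustrative assumptions, not columns from the book's datasets:

    # Project-centric tidying sketch; column names are hypothetical
    # placeholders for the fields a particular model would need
    model_data <- customers_clean[, c("age", "income", "churned")]

    # Coerce columns to the types the modeling functions expect
    model_data$age     <- as.numeric(model_data$age)
    model_data$income  <- as.numeric(model_data$income)
    model_data$churned <- factor(model_data$churned)

    # Keep only complete cases for this particular predictive task
    model_data <- model_data[complete.cases(model_data), ]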

In statistics, data unobserved or not yet reviewed by the data scientist is...