Learning Predictive Analytics with R

By : Eric Mayor

Overview of this book

This book is packed with easy-to-follow guidelines that explain the workings of the many key data mining tools of R, which are used to discover knowledge from your data. You will learn how to perform key predictive analytics tasks using R, such as training and testing predictive models for classification and regression, and scoring new datasets. All chapters will guide you in acquiring the skills in a practical way. Most chapters also include a theoretical introduction that will sharpen your understanding of the subject matter and invite you to go further. The book familiarizes you with the most common data mining tools of R, such as k-means, hierarchical regression, linear regression, association rules, principal component analysis, multilevel modeling, k-NN, Naïve Bayes, decision trees, and text mining. It also provides a description of visualization techniques using the basic visualization tools of R, as well as lattice for visualizing patterns in data organized in groups. This book is invaluable for anyone fascinated by the data mining opportunities offered by GNU R and its packages.

Understanding k-NN


Remember that, in Chapter 4, Cluster Analysis, we discovered that distance matrices are used by k-means to cluster data into a user-specified number of groups of homogeneous cases. k-NN uses distances to select the user-defined number of observations that are closest (the neighbors) to each of the observations to classify. In k-NN, any attribute can be categorical or numeric, including the target. As we discuss classification in this chapter, I will limit the description to categorical targets (called class attributes).
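To make the neighbor-selection step concrete, here is a minimal base-R sketch (not the book's code) of k-NN classification: it computes Euclidean distances from a new observation to every training case, picks the k closest, and takes a majority vote among their classes. The function name `knn_predict` and the toy data are assumptions for illustration only.

```r
# Minimal base-R sketch of k-NN classification (illustrative only).
# Assumes numeric predictors, Euclidean distance, and a categorical class attribute.
knn_predict <- function(train_x, train_y, new_x, k = 3) {
  # Euclidean distances from the new observation to every training case
  d <- sqrt(rowSums(sweep(train_x, 2, new_x)^2))
  # Indices of the k closest training observations (the neighbors)
  neighbors <- order(d)[seq_len(k)]
  # Majority vote among the neighbors' classes
  votes <- table(train_y[neighbors])
  names(votes)[which.max(votes)]
}

# Toy data: two classes in two dimensions
train_x <- matrix(c(1, 1,
                    1.2, 0.9,
                    5, 5,
                    5.1, 4.8), ncol = 2, byrow = TRUE)
train_y <- c("A", "A", "B", "B")

knn_predict(train_x, train_y, c(1.1, 1.0), k = 3)
```

With k = 3, the two nearest cases belong to class "A" and outvote the single "B" neighbor, so the new observation is classified as "A".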

The classification of a given observation is made by a majority vote among its neighbors, that is, by the most frequent class among the k closest observations. This means that the classification of an observation depends on the chosen number of neighbors. Let's have a look at this. The following figure represents the membership of gray-outlined circles in two class values: the plain gray-lined and the dotted gray-lined. Notice there is one filled gray circle as...
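The dependence on k can be seen in a short example (my own toy data, not the figure's): with `class::knn` (assuming the `class` package is installed), the same observation can receive different classes for different values of k.

```r
# Illustrative example: the predicted class can change with k.
# Requires the 'class' package (install.packages("class") if needed).
library(class)

train_x  <- data.frame(x = c(0, 0.2, 0.4, 3, 3.2, 1.1),
                       y = c(0, 0.1, 0.3, 3, 3.1, 1.0))
train_cl <- factor(c("dotted", "dotted", "dotted",
                     "plain", "plain", "plain"))
new_obs  <- data.frame(x = 1, y = 1)

knn(train_x, new_obs, train_cl, k = 1)  # the single nearest neighbor decides
knn(train_x, new_obs, train_cl, k = 5)  # majority vote among five neighbors
```

Here the single nearest case is a "plain" observation, so k = 1 yields "plain"; widening the vote to five neighbors brings in three "dotted" cases, and the prediction flips to "dotted".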