#### Overview of this book

This book will teach you advanced techniques in machine learning with the latest code in R 3.3.2. You will delve into statistical learning theory and supervised learning; design efficient algorithms; learn about creating recommendation engines; and apply multi-class classification and deep learning. You will explore, in depth, topics such as data mining, classification, clustering, regression, predictive modeling, anomaly detection, and boosted trees with XGBoost. More than just knowing the outcome, you'll understand how these concepts work and what they do. With a gentle learning curve on topics such as neural networks, you will explore deep learning. By the end of this book, you will be able to perform machine learning with R in the cloud, using AWS in various scenarios with different datasets.
- Title Page
- Credits
- Preface
- A Process for Success
- Linear Regression - The Blocking and Tackling of Machine Learning
- Logistic Regression and Discriminant Analysis
- Advanced Feature Selection in Linear Models
- More Classification Techniques - K-Nearest Neighbors and Support Vector Machines
- Classification and Regression Trees
- Neural Networks and Deep Learning
- Cluster Analysis
- Principal Components Analysis
- Market Basket Analysis, Recommendation Engines, and Sequential Analysis
- Creating Ensembles and Multiclass Classification
- Time Series and Causality
- Text Mining
- R on the Cloud
- R Fundamentals
- Sources

## Regularization and classification

The regularization techniques applied above also work for classification problems, both binomial and multinomial. Before concluding this chapter, let's apply some sample code to a logistic regression problem, specifically the breast cancer data from the prior chapter. As with a quantitative response, regularization can be an important technique for data sets with high dimensionality.

### Logistic regression example

Recall that, in the breast cancer data we analyzed, the probability of a tumor being malignant can be denoted as follows in a logistic function:

P(malignant) = 1 / (1 + e^-(B0 + B1X1 + ... + BnXn))
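Regularization enters exactly as in the linear case: a penalty on the coefficients is added to the negative log-likelihood being minimized. In elastic-net form (the parameterization used by, for example, the glmnet package), the objective is:

-logL(B0, B1, ..., Bn) + lambda * [(1 - alpha)/2 * sum(Bj^2) + alpha * sum(|Bj|)]

Here lambda sets the overall penalty strength, and alpha mixes the two penalties: alpha = 0 gives ridge (L2) regression and alpha = 1 gives the lasso (L1).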

Since we have a linear component in the function, L1 and L2 regularization can be applied. To demonstrate this, let's load and prepare the breast cancer data like we did in the previous chapter:

```
> library(MASS)
> biopsy$ID = NULL
> names(biopsy) = c("thick", "u.size", "u.shape", "adhsn",
    "s.size", "nucl", "chrom...
```