#### Overview of this book

This book will teach you advanced techniques in machine learning with the latest code in R 3.3.2. You will delve into statistical learning theory and supervised learning; design efficient algorithms; learn about creating recommendation engines; and use multi-class classification and deep learning. You will explore, in depth, topics such as data mining, classification, clustering, regression, predictive modeling, anomaly detection, and boosted trees with XGBoost. More than just knowing the outcome, you'll understand how these concepts work and what they do. With a gentle learning curve on topics such as neural networks, you will move into deep learning. By the end of this book, you will be able to perform machine learning with R in the cloud, using AWS in various scenarios with different datasets.
- Title Page
- Credits
- Packt Upsell
- Customer Feedback
- Preface
- A Process for Success
- Linear Regression - The Blocking and Tackling of Machine Learning
- Logistic Regression and Discriminant Analysis
- Advanced Feature Selection in Linear Models
- More Classification Techniques - K-Nearest Neighbors and Support Vector Machines
- Classification and Regression Trees
- Neural Networks and Deep Learning
- Cluster Analysis
- Principal Components Analysis
- Market Basket Analysis, Recommendation Engines, and Sequential Analysis
- Creating Ensembles and Multiclass Classification
- Time Series and Causality
- Text Mining
- R on the Cloud
- R Fundamentals
- Sources

## Univariate linear regression

We begin by looking at a simple way to predict a quantitative response, Y, from one predictor variable, x, assuming that Y has a linear relationship with x. The model can be written as Y = B0 + B1x + e: the expected value of Y is the intercept B0 plus the slope B1 times x, plus an error term e. The least squares approach chooses the parameter estimates that minimize the Residual Sum of Squares (RSS), that is, the sum of the squared differences between the actual Y values and the predicted y values. For a simple example, let's say the actual values Y1 and Y2 are 10 and 20 respectively, and the predictions y1 and y2 are 12 and 18. To calculate RSS, we add the squared differences: RSS = (Y1 - y1)² + (Y2 - y2)², which, with simple substitution, yields (10 - 12)² + (20 - 18)² = 4 + 4 = 8.
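The two-point RSS calculation above can be verified directly in R, and base R's `lm()` function performs the full least squares fit for us. The `x` and `y` vectors in the second half are made-up illustrative data, not from the book:

```r
# RSS by hand for the two-point example from the text
Y     <- c(10, 20)   # actual values Y1, Y2
y_hat <- c(12, 18)   # predicted values y1, y2
rss   <- sum((Y - y_hat)^2)
rss   # 8

# A minimal least squares fit on toy data (x and y are assumptions for illustration)
x   <- c(1, 2, 3, 4, 5)
y   <- c(2.1, 3.9, 6.2, 8.1, 9.8)
fit <- lm(y ~ x)
coef(fit)          # estimated intercept (B0) and slope (B1)
sum(resid(fit)^2)  # RSS of the fitted model
```

`lm()` returns the B0 and B1 that minimize the RSS; any other choice of intercept and slope on the same data would give a larger `sum(resid(fit)^2)`.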

I once remarked to a peer during our Lean Six Sigma Black Belt training that it's all about the sum of squares; understand the sum of squares and the...