#### Overview of this book

Most of us have heard of the term Machine Learning, but the question developers across the globe most frequently ask is, “How do I get started in Machine Learning?”. One reason could be the vastness of the subject area: people often get overwhelmed by the abstractness of ML and by terms such as regression, supervised learning, and probability density function. This book is a systematic guide that teaches you how to implement various Machine Learning techniques and apply them in day-to-day development. You will start with the very basics of data and mathematical models, explained in easy-to-follow language, so you will feel at home while implementing the examples. The book introduces you to the libraries and frameworks used in the world of Machine Learning, and then gets straight to the point, implementing regression, clustering, classification, neural networks, and more with fun examples. As you get to grips with the techniques, you'll learn to apply those concepts to real-world scenarios such as image analysis, Natural Language Processing, and anomaly detection in time series data. By the end of the book, you will have learned various ML techniques for developing more efficient and intelligent applications.
Preface
Introduction - Machine Learning and Statistical Science
The Learning Process
Clustering
Linear and Logistic Regression
Neural Networks
Convolutional Neural Networks
Recurrent Neural Networks
Recent Models and Developments
Software Installation and Configuration

# Linear regression

So, it's time to start with the simplest, yet still very useful, abstraction for our data: the linear regression function.

In linear regression, we try to find a linear equation that minimizes the distance between the data points and the modeled line. The model function takes the following form:

yᵢ = βxᵢ + α + εᵢ

Here, α is the intercept and β is the slope of the modeled line. The variable x is normally called the independent variable and y the dependent one, but they can also be called the regressor and the response variable, respectively.

The εᵢ term is a very interesting element: it is the error, or vertical distance, from sample i to the regressed line.
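To make the equation concrete, here is a minimal sketch of fitting yᵢ = βxᵢ + α by ordinary least squares, using only the standard library. The closed-form expressions (β as the covariance of x and y divided by the variance of x) are the standard textbook solution; the function names and sample data are illustrative, not from the book.

```python
def fit_line(xs, ys):
    """Return (beta, alpha): the slope and intercept that minimize
    the sum of squared errors over the samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # beta = covariance(x, y) / variance(x)
    beta = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
            / sum((x - mean_x) ** 2 for x in xs))
    alpha = mean_y - beta * mean_x
    return beta, alpha


def residuals(xs, ys, beta, alpha):
    """The errors epsilon_i = y_i - (beta * x_i + alpha), one per sample."""
    return [y - (beta * x + alpha) for x, y in zip(xs, ys)]


# Illustrative data: roughly y = 2x, with some noise.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.8]

beta, alpha = fit_line(xs, ys)
eps = residuals(xs, ys, beta, alpha)
```

Note that with a least-squares fit the residuals εᵢ always sum to zero when an intercept α is included in the model, which is a quick sanity check on any implementation.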

*Depiction of the components of a regression line, including the original elements, the estimated ones (in red), and the error (ε)*

The set of all those distances, calculated...