
Machine Learning for Time-Series with Python

By: Ben Auffarth

Overview of this book

The Python time-series ecosystem is huge and often hard to get a good grasp on, with so many libraries and new models appearing. This book aims to deepen your understanding of time series by providing a comprehensive overview of popular Python time-series packages and to help you build better predictive systems. Machine Learning for Time-Series with Python starts by reintroducing the basics of time series and then builds your understanding of traditional autoregressive models as well as modern non-parametric models. By working through practical examples and the theory behind them, you will become confident with loading time-series datasets from any source, deep learning models such as recurrent neural networks and causal convolutional networks, and gradient boosting with feature engineering. This book will also guide you in matching the right model to the right problem by explaining the theory behind several useful models. You'll also look at real-world case studies covering weather, traffic, biking, and stock market data. By the end of this book, you should feel at home with effectively analyzing and applying machine learning methods to time series.

K-nearest neighbors with dynamic time warping

K-nearest neighbors (kNN) is a well-known machine learning method, sometimes also referred to as case-based reasoning. In kNN, we use a distance measure to find the data points most similar to a new point. We then take the known labels of these nearest neighbors and combine them into an output, for example by majority vote for classification or by averaging for regression.

Figure 7.3 illustrates the basic idea of kNN for classification (source – WikiMedia Commons: https://commons.wikimedia.org/wiki/File:KnnClassification.svg):


Figure 7.3: K-nearest neighbor for classification

We know a few data points already. In the preceding illustration, these points are indicated as squares and triangles, representing data points of two different classes. Given a new data point, indicated by a circle, we find the closest known data points to it. In this example, the new point lies closest to triangles, so we might assume that it belongs to the triangle class as well.
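The idea extends naturally to time series once we replace the Euclidean distance with dynamic time warping (DTW), which aligns two series before measuring their difference. The following is a minimal sketch of this combination, with a textbook dynamic-programming DTW and a simple majority-vote classifier; the function names `dtw_distance` and `knn_classify` are illustrative, not from any particular library:

```python
import numpy as np


def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences.

    Fills a (len(a)+1) x (len(b)+1) cumulative-cost matrix, where each
    cell adds the local difference to the cheapest of the three
    predecessor alignments (match, insertion, deletion).
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]


def knn_classify(query, train_series, train_labels, k=1):
    """Label a query series by majority vote over its k DTW-nearest neighbors."""
    dists = [dtw_distance(query, s) for s in train_series]
    nearest = np.argsort(dists)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)


# Toy example: classify a series as rising or falling.
train = [[0, 1, 2, 3], [0, 1, 2, 4], [3, 2, 1, 0], [4, 2, 1, 0]]
labels = ["up", "up", "down", "down"]
print(knn_classify([0, 1, 3, 4], train, labels, k=3))
```

The naive DTW above is O(nm) per pair, so in practice libraries such as tslearn speed this up with warping-window constraints and vectorized implementations.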