Practical Time Series Analysis

By: Avishek Pal, PKS Prakash
Overview of this book

Time series analysis allows us to analyze data that is generated over a period of time and has sequential interdependencies between the observations. This book describes special mathematical techniques geared towards exploring the internal structures of time series data and generating powerful descriptive and predictive insights. The book is full of real-life examples of time series and their analyses using cutting-edge solutions developed in Python. It starts with descriptive analysis to create insightful visualizations of internal structures such as trend, seasonality, and autocorrelation. Next, statistical methods for dealing with autocorrelation and non-stationary time series are described. This is followed by exponential smoothing to produce meaningful insights from noisy time series data. At this point, we shift focus towards predictive analysis and introduce autoregressive models such as ARMA and ARIMA for time series forecasting. Later, powerful deep learning methods are presented to develop accurate forecasting models for complex time series, even when little domain knowledge is available. All the topics are illustrated with real-life problem scenarios and their solutions using best-practice implementations in Python. The book concludes with an Appendix, which briefly discusses programming and solving data science problems using Python.

Convolutional neural networks


This section describes Convolutional Neural Networks (CNNs) that are primarily applied to develop supervised and unsupervised models when the input data are images. In general, two-dimensional (2D) convolutions are applied to images but one-dimensional (1D) convolutions can be used on a sequential input to capture time dependencies. This approach is explored in this section to develop time series forecasting models.
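To make the idea concrete before the formal treatment, here is a minimal NumPy sketch (not from the book) of what a 1D convolution does to a sequential input: a kernel slides along the series, and each output value is a weighted sum over a local window of observations, which is how local time dependencies are captured. The function name `conv1d_valid` and the moving-average kernel are illustrative choices, not the book's code.

```python
import numpy as np

def conv1d_valid(series, kernel):
    """Slide a kernel over a 1D series ('valid' mode: no padding).

    Each output value is the dot product of the kernel with one
    window of consecutive observations, so the output length is
    len(series) - len(kernel) + 1.
    """
    n, k = len(series), len(kernel)
    return np.array([
        np.dot(series[i:i + k], kernel)
        for i in range(n - k + 1)
    ])

# A 3-step moving-average kernel smooths the series: each output
# is the mean of three consecutive observations.
series = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
kernel = np.array([1 / 3, 1 / 3, 1 / 3])
smoothed = conv1d_valid(series, kernel)
```

In a CNN forecasting model, the kernel weights are not fixed like this moving average; they are learned from the training data, and many kernels (filters) run in parallel to extract different local patterns.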

2D convolutions

Let's start by describing 2D CNNs; 1D CNNs will then follow as a special case. CNNs take advantage of the 2D structure of images. An image has a rectangular dimension of h x w, where h is the height and w is the width. The color value of every pixel would be an input feature to the model. For the 28 x 28 images of handwritten digits from the MNIST dataset, connecting every pixel to a fully-connected dense layer having 100 neurons would require 28 x 28 x 100 = 78,400 trainable weights. For larger images, the number of trainable weights in the...
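As a quick sanity check of the weight count above, the arithmetic can be written out directly. This small helper (`dense_weight_count` is an illustrative name, not an API from the book) counts only the pixel-to-neuron weights; bias terms would add one more parameter per neuron.

```python
def dense_weight_count(height, width, neurons):
    """Weights in a fully-connected layer over a flattened image:
    every one of the height*width pixels connects to every neuron
    (bias terms not counted here)."""
    return height * width * neurons

# 28 x 28 MNIST digits feeding a 100-neuron dense layer:
mnist_weights = dense_weight_count(28, 28, 100)  # 78,400 weights
```

This quadratic-style growth with image size is exactly what motivates convolutional layers, whose small shared kernels keep the parameter count independent of the image dimensions.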