
Practical Time Series Analysis

By: Avishek Pal, PKS Prakash

Overview of this book

Time series analysis allows us to analyze data that is generated over a period of time and has sequential interdependencies between the observations. This book describes special mathematical techniques geared towards exploring the internal structures of time series data and generating powerful descriptive and predictive insights. It is also full of real-life examples of time series and their analyses using cutting-edge solutions developed in Python. The book starts with descriptive analysis to create insightful visualizations of internal structures such as trend, seasonality, and autocorrelation. Next, statistical methods for dealing with autocorrelation and non-stationary time series are described. This is followed by exponential smoothing to produce meaningful insights from noisy time series data. At this point, we shift focus towards predictive analysis and introduce autoregressive models such as ARMA and ARIMA for time series forecasting. Later, powerful deep learning methods are presented for developing accurate forecasting models for complex time series, even when little domain knowledge is available. All the topics are illustrated with real-life problem scenarios and their solutions, with best-practice implementations in Python. The book concludes with an Appendix that briefly discusses programming and solving data science problems using Python.

Multi-layer perceptrons


Multi-layer perceptrons (MLPs) are the most basic form of neural network. An MLP consists of three components: an input layer, one or more hidden layers, and an output layer. The input layer represents a vector of regressors or input features, for example, observations from the preceding p points in time, [x_{t-1}, x_{t-2}, ..., x_{t-p}]. The input features are fed to a hidden layer of n neurons, each of which applies a linear transformation followed by a nonlinear activation. The output of the i-th neuron is g_i = h(w_i x + b_i), where w_i and b_i are the weights and bias of the linear transformation and h is a nonlinear activation function. The nonlinear activation function enables the neural network to model complex nonlinearities in the underlying relationship between the regressors and the target variable. A popular choice for h is the sigmoid function, h(z) = 1 / (1 + e^{-z}), which squashes any real number into the interval [0,1]. Due to this property, the sigmoid function is used to generate binary class probabilities.
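To make the architecture concrete, the following is a minimal sketch, not the book's own implementation, of an MLP for one-step-ahead time series forecasting, assuming the Keras library is available; the toy series, the lag order p = 4, and the hidden layer of 32 sigmoid neurons are illustrative assumptions.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Frame a univariate series into (x_{t-1}, ..., x_{t-p}) -> x_t training pairs
p = 4                                      # number of lagged observations (assumed)
series = np.sin(np.linspace(0, 20, 200))   # toy series standing in for real data

X = np.array([series[t - p:t] for t in range(p, len(series))])
y = series[p:]

# One hidden layer of n = 32 neurons, each computing g_i = h(w_i x + b_i)
# with h the sigmoid activation; a single linear output neuron gives the forecast.
model = Sequential()
model.add(Dense(32, activation='sigmoid', input_shape=(p,)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=50, verbose=0)

# Forecast the next point from the last p observations
next_value = model.predict(series[-p:].reshape(1, p))

Note that the sigmoid is used here as the hidden-layer activation; for forecasting a real-valued target, the output neuron is left linear rather than squashed into [0,1].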