A very simple predictive model takes the current value of a variable and extrapolates it to the next period. To extrapolate, we can use a simple mathematical function. Since a wide variety of functions can be approximated by polynomials, as in the Taylor series, polynomials of low degree might do the trick. What this boils down to is regressing previous values onto subsequent values. The corresponding models are therefore called autoregressive.
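As a minimal sketch of this idea (using synthetic data rather than any dataset from this book), we can regress each value of a series on its predecessor and fit a low-degree polynomial with NumPy, then extrapolate one step ahead:

```python
import numpy as np

# Synthetic stand-in series: a seeded random walk
rng = np.random.default_rng(42)
series = np.cumsum(rng.normal(size=200))

prev_vals = series[:-1]  # value at time t
next_vals = series[1:]   # value at time t + 1

# Linear (degree-1) autoregression: fit next value as a
# polynomial function of the previous value
coeffs = np.polyfit(prev_vals, next_vals, deg=1)

# Extrapolate one period beyond the last observed value
prediction = np.polyval(coeffs, series[-1])
print(coeffs, prediction)
```

Raising `deg` trades flexibility against the risk of overfitting, which is exactly the concern addressed next.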
We have to be careful about overfitting. A common safeguard is to split the data into train and test sets: we fit the model using the train set and evaluate the fit on the held-out test set. This should reduce the bias of our error estimate (see the autoregressive.py file in this book's code bundle):
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt

data = np.load('cbk12.npy')

# Load average pressure
meanp = .1 * data[:,1]

# Split point for test and train data
cutoff = 0.9 * len(meanp)
...
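Since the listing above is truncated, here is an illustrative continuation of the same train/test workflow, not the book's exact code. The `meanp` series is replaced by a synthetic stand-in, and a degree-3 polynomial is an assumed choice:

```python
import numpy as np

# Synthetic stand-in for the average-pressure series
rng = np.random.default_rng(0)
meanp = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.normal(size=500)

# Split point for test and train data (90% train)
cutoff = int(0.9 * len(meanp))
train, test = meanp[:cutoff], meanp[cutoff:]

# Autoregress: each value predicts the next, via a low-degree polynomial
coeffs = np.polyfit(train[:-1], train[1:], deg=3)

# Evaluate on the held-out test set with root-mean-square error
pred = np.polyval(coeffs, test[:-1])
rmse = np.sqrt(np.mean((pred - test[1:]) ** 2))
print(rmse)
```

A small test-set RMSE relative to the train-set RMSE suggests the polynomial degree is not overfitting the data.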