Codeless Time Series Analysis with KNIME

By: KNIME AG, Corey Weisinger, Maarit Widmann, Daniele Tonini

Overview of this book

This book takes you on a practical journey, teaching you how to implement solutions for many use cases involving time series analysis techniques. The learning journey is organized in order of increasing difficulty: it starts with simple yet effective techniques applied to weather forecasting, then introduces ARIMA and its variations, moves on to machine learning for audio signal classification, trains deep learning architectures to predict glucose levels and electrical energy demand, and ends with an approach to anomaly detection in IoT. No time series analysis book would be complete without a solution for stock price prediction; you'll find this use case at the end of the book, together with a few more demand prediction use cases that rely on the integration of KNIME Analytics Platform with external tools. By the end of this time series book, you'll have learned about popular time series analysis techniques and algorithms, KNIME Analytics Platform and its time series extension, and how to apply both to common use cases.
Table of Contents (20 chapters)

Part 1: Time Series Basics and KNIME Analytics Platform
Part 2: Building and Deploying a Forecasting Model
Part 3: Forecasting on Mixed Platforms

Training a random forest model on Spark

In this section, we will explore and preprocess the historical taxi trip data, then train and evaluate a random forest model for taxi demand prediction on Spark. We will introduce these steps in the following subsections (a short code sketch of step 1 follows the list):

  1. Exploring the seasonalities via line plots and auto-correlation plots
  2. Preprocessing the data
  3. Training and testing the Spark random forest model
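In the book, all of these steps are built codelessly with KNIME nodes. Purely as an orientation for readers coming from code, step 1 could be sketched in Python roughly as follows; the file name taxi_demand.parquet and the timestamp and demand column names are hypothetical placeholders, not the book's actual data.

```python
# Hypothetical Python sketch of step 1 (seasonality exploration).
# The book performs this with KNIME nodes; file and column names are placeholders.
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf

# Load an hourly demand series (placeholder file and columns)
demand = (
    pd.read_parquet("taxi_demand.parquet")
      .set_index("timestamp")["demand"]
      .asfreq("h")
)

# Line plot to inspect trend and obvious seasonal patterns
demand.plot(title="Hourly taxi demand")

# Autocorrelation plot over one week of hourly lags; peaks at lag 24 and
# lag 168 would indicate daily and weekly seasonality
plot_acf(demand.dropna(), lags=7 * 24)
plt.show()
```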

The steps in the application are also depicted in the training workflow in Figure 12.6 (accessible on the KNIME Hub under https://kni.me/w/13wY0Bz-2wUAxffc):

Figure 12.6 – The workflow training a Spark random forest model for demand prediction

The first part of the workflow loads the Parquet files into Spark, as introduced in the Accessing the data and loading it into Spark subsection. The downstream parts of the workflow – data exploration, preprocessing, model training and testing, and model evaluation – are introduced...
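For orientation only, a rough PySpark equivalent of these load, preprocess, train, and evaluate steps is sketched below; the book builds the same pipeline with KNIME nodes and its Spark integration. The input path, the pickup_datetime column, the choice of lag features, and the tree count are illustrative assumptions rather than the book's actual settings.

```python
# Rough PySpark sketch of the pipeline (illustrative only; the book uses
# KNIME nodes). Paths, column names, and parameters are assumptions.
from pyspark.sql import SparkSession, functions as F, Window
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.evaluation import RegressionEvaluator

spark = SparkSession.builder.appName("taxi-demand-rf").getOrCreate()

# Load the raw trips from Parquet (placeholder path)
trips = spark.read.parquet("/data/taxi_trips/*.parquet")

# Aggregate trips to an hourly demand count and add simple lag features
hourly = (
    trips.groupBy(F.date_trunc("hour", F.col("pickup_datetime")).alias("hour"))
         .agg(F.count("*").alias("demand"))
)
w = Window.orderBy("hour")
hourly = (
    hourly.withColumn("lag_1", F.lag("demand", 1).over(w))      # previous hour
          .withColumn("lag_24", F.lag("demand", 24).over(w))    # same hour, previous day
          .withColumn("lag_168", F.lag("demand", 168).over(w))  # same hour, previous week
          .dropna()
)

# Assemble features and split by time (first 80% train, last 20% test)
assembler = VectorAssembler(inputCols=["lag_1", "lag_24", "lag_168"],
                            outputCol="features")
data = assembler.transform(hourly).withColumn(
    "pct", F.percent_rank().over(Window.orderBy("hour")))
train, test = data.filter("pct <= 0.8"), data.filter("pct > 0.8")

# Train and evaluate the random forest regressor
rf = RandomForestRegressor(featuresCol="features", labelCol="demand", numTrees=100)
model = rf.fit(train)
rmse = RegressionEvaluator(labelCol="demand", predictionCol="prediction",
                           metricName="rmse").evaluate(model.transform(test))
print(f"Test RMSE: {rmse:.2f}")
```

The sketch uses a time-ordered split rather than a random one so that the test period lies strictly after the training period, which matches how a forecasting model is used in practice.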