Codeless Time Series Analysis with KNIME

By: KNIME AG, Corey Weisinger, Maarit Widmann, Daniele Tonini

Overview of this book

This book will take you on a practical journey, teaching you how to implement solutions for many use cases involving time series analysis techniques. This learning journey is organized in a crescendo of difficulty, starting with simple yet effective techniques applied to weather forecasting, then introducing ARIMA and its variations, moving on to machine learning for audio signal classification, training deep learning architectures to predict glucose levels and electrical energy demand, and ending with an approach to anomaly detection in IoT. There's no time series analysis book without a solution for stock price prediction, and you'll find this use case at the end of the book, together with a few more demand prediction use cases that rely on the integration of KNIME Analytics Platform with other external tools. By the end of this time series book, you'll have learned about popular time series analysis techniques and algorithms, KNIME Analytics Platform, its time series extension, and how to apply both to common use cases.
Table of Contents (20 chapters)
Part 1: Time Series Basics and KNIME Analytics Platform
Part 2: Building and Deploying a Forecasting Model
Part 3: Forecasting on Mixed Platforms

Training and deployment

Training and deployment in KNIME Analytics Platform can be set up at varying levels of complexity; in this chapter, we'll look at some of the simpler options. No matter how you plan to train your model, it's good practice to partition your data first. To partition data in KNIME, we use the Partitioning node:

Figure 6.10 – The Partitioning node and configuration dialog

In Figure 6.10, we see the Partitioning node and its configuration dialog. You'll notice that the node has one input port on the left and two output ports on the right. The input port receives our full dataset, and the two output ports deliver the two splits defined by our configuration choices. Note that the top output port corresponds to the first partition, the one specified by the options in the configuration dialog.
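
If it helps to see the same idea outside of KNIME, the short Python sketch below (using pandas and scikit-learn, neither of which is needed for the codeless workflow) mirrors what the Partitioning node does conceptually: one table goes in, and two disjoint partitions come out. The column names and values here are invented purely for illustration, and the 70/30 split is just one common choice.

# Illustration only: not part of the KNIME workflow. This sketches, in Python,
# what the Partitioning node does: one full dataset in, two partitions out.
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical dataset; in KNIME, the data would arrive through an input node.
data = pd.DataFrame({
    "temperature": [21.3, 19.8, 22.5, 18.1, 20.4, 23.0, 17.6, 19.2],
    "demand":      [310,  295,  330,  280,  305,  340,  270,  290],
})

# 70/30 split, with 70% going to the training set.
# shuffle=False keeps rows in their original order, which matters for time
# series, where the test period should come after the training period.
train_set, test_set = train_test_split(data, train_size=0.7, shuffle=False)

print(len(train_set), "training rows,", len(test_set), "test rows")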

There are two things to configure for this node: the size of the first partition and how it is created. A 70/30 split, with 70% going to the training set, is common practice, but this can vary by use case. For...