
Codeless Deep Learning with KNIME

By: Kathrin Melcher and Rosaria Silipo, KNIME AG

Overview of this book

KNIME Analytics Platform is open source software used to create and design data science workflows. This book is a comprehensive guide to the KNIME GUI and the KNIME deep learning integration, helping you build neural network models without writing any code. It'll guide you in building simple and complex neural networks through practical and creative solutions for solving real-world data problems. Starting with an introduction to KNIME Analytics Platform, you'll get an overview of simple feedforward networks for solving classification problems on relatively small datasets. You'll then move on to build, train, test, and deploy more complex networks, such as autoencoders, recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and convolutional neural networks (CNNs). In each chapter, depending on the network and use case, you'll learn how to prepare data, encode incoming data, and apply best practices. By the end of this book, you'll have learned how to design a variety of different neural architectures and will be able to train, test, and deploy the final network.
Table of Contents (16 chapters)

Section 1: Feedforward Neural Networks and KNIME Deep Learning Extension
Section 2: Deep Learning Networks
Section 3: Deployment and Productionizing

Chapter 6: Recurrent Neural Networks for Demand Prediction

We have gathered some experience, by now, with fully connected feedforward neural networks in two variants: implementing a classification task by assigning an input sample to a class in a set of predefined classes, or trying to reproduce the shape of an input vector via an autoencoder architecture. In both cases, the output response depends only on the values of the current input vector. At time t, the output response, y(t), depends on, and only on, the input vector, x(t), at time t. The network has no memory of what came before and produces y(t) based only on input x(t).
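The book builds these networks codelessly in KNIME, so no code is needed to follow along; still, a tiny NumPy sketch (purely illustrative, with made-up layer sizes and random weights) can make the statement concrete: a feedforward network is a fixed, stateless mapping, so the same input vector always produces the same output, no matter what came before.

```python
# Illustrative sketch only (not part of the book's KNIME workflows):
# a feedforward network is a stateless function of its current input.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # hypothetical hidden layer
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # hypothetical output layer

def feedforward(x):
    """Map a single input vector x(t) to an output y(t); no state is kept."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

x_t = np.array([0.5, -1.0, 2.0])
print(feedforward(x_t))  # same output every time,
print(feedforward(x_t))  # regardless of earlier inputs
```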

With Recurrent Neural Networks (RNNs), we introduce the time component t. We are going to discover networks where the output response, y(t), at time t depends on the current input sample, x(t), as well as on previous input samples, x(t-1), x(t-2), ..., where how far back the network remembers past samples depends on the network architecture.
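Again as an illustration only (this is not the KNIME workflow built in this chapter; the weight matrices and sizes below are hypothetical), the following sketch shows how a recurrent cell keeps a hidden state h that carries information from earlier time steps forward, so the same input vector can yield different outputs at different positions in the sequence.

```python
# Illustrative sketch of a simple recurrent cell: the hidden state h lets
# y(t) depend on x(t) and, indirectly, on x(t-1), x(t-2), ...
import numpy as np

rng = np.random.default_rng(0)
Wx = rng.normal(size=(4, 3))   # hypothetical input-to-hidden weights
Wh = rng.normal(size=(4, 4))   # hypothetical recurrent (hidden-to-hidden) weights
Wy = rng.normal(size=(1, 4))   # hypothetical hidden-to-output weights

def rnn_forward(inputs):
    """Process a sequence of input vectors; h summarizes the history so far."""
    h = np.zeros(4)
    outputs = []
    for x_t in inputs:
        h = np.tanh(Wx @ x_t + Wh @ h)   # h now encodes x(1), ..., x(t)
        outputs.append(Wy @ h)           # y(t) depends on the whole history
    return outputs

sequence = [np.array([0.5, -1.0, 2.0])] * 2
print(rnn_forward(sequence))  # identical inputs, different outputs at t=1 and t=2
```

The recurrent weights Wh are what give the network its memory: dropping them (or setting h to zero at every step) would collapse the cell back into the stateless feedforward mapping described above.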

We will first introduce the general concept...