
Codeless Deep Learning with KNIME

By: Kathrin Melcher, KNIME AG, Rosaria Silipo

Overview of this book

KNIME Analytics Platform is an open-source software platform used to create and design data science workflows. This book is a comprehensive guide to the KNIME GUI and the KNIME deep learning integration, helping you build neural network models without writing any code. It guides you in building simple and complex neural networks through practical and creative solutions for real-world data problems. Starting with an introduction to KNIME Analytics Platform, you'll get an overview of feed-forward networks for solving classification problems on relatively small datasets. You'll then move on to build, train, test, and deploy more complex networks, such as autoencoders, recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and convolutional neural networks (CNNs). In each chapter, depending on the network and use case, you'll learn how to prepare and encode incoming data and apply best practices. By the end of this book, you'll have learned how to design a variety of different neural architectures and will be able to train, test, and deploy the final network.
Table of Contents (16 chapters)

Section 1: Feedforward Neural Networks and KNIME Deep Learning Extension
Section 2: Deep Learning Networks
Section 3: Deployment and Productionizing

Building and Training the Encoder-Decoder Architecture

Now that the three sequences are available, we can start defining the network structure within a workflow. In this section, you will learn how to define and train an encoder-decoder structure in KNIME Analytics Platform. Once the network is trained, you will learn how the encoder and the decoder can be extracted as two separate networks. In the last section, we will discuss how the extracted networks can be used in a deployment workflow to translate English sentences into German.
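The three sequences referred to here are the encoder input (the English sentence), the decoder input (the German sentence shifted right by a start token), and the decoder target (the German sentence ending with an end token). The book prepares these inside KNIME workflows; the following numpy sketch, using a tiny hypothetical vocabulary, only illustrates what the three one-hot-encoded sequences look like.

```python
import numpy as np

# Hypothetical toy vocabularies; the book builds these sequences with
# KNIME nodes, so this sketch only illustrates shapes and encoding.
eng_vocab = {"<pad>": 0, "hello": 1, "world": 2}
ger_vocab = {"<pad>": 0, "<start>": 1, "<end>": 2, "hallo": 3, "welt": 4}

def one_hot(indices, vocab_size):
    """Turn a list of token indices into a (timesteps, vocab_size) matrix."""
    out = np.zeros((len(indices), vocab_size))
    out[np.arange(len(indices)), indices] = 1.0
    return out

# Encoder input: the English sentence "hello world".
encoder_in = one_hot([1, 2], len(eng_vocab))
# Decoder input: the German sentence shifted right, "<start> hallo welt".
decoder_in = one_hot([1, 3, 4], len(ger_vocab))
# Decoder target: the German sentence ending with <end>, "hallo welt <end>".
decoder_target = one_hot([3, 4, 2], len(ger_vocab))

print(encoder_in.shape, decoder_in.shape, decoder_target.shape)
# (2, 3) (3, 5) (3, 5)
```

Note that the decoder input and target are the same sentence offset by one time step, which is what lets the decoder learn to predict the next German token during training.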

Defining the Network Structure

In the encoder-decoder architecture, we want both the encoder and the decoder to be LSTM networks. The encoder and the decoder take different input sequences: the one-hot-encoded English sentences are the input for the encoder, and the one-hot-encoded German sentences are the input for the decoder. This means two input layers are needed: one for the encoder and one for the decoder.
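KNIME defines this structure codelessly with Keras layer nodes, but the equivalent graph can be sketched directly in Keras. The vocabulary sizes and the number of LSTM units below are illustrative assumptions, not the book's exact settings; the sketch shows the two input layers and how the encoder's final states initialize the decoder.

```python
# A minimal Keras sketch of the encoder-decoder structure described above.
# Layer sizes and vocabulary sizes are assumed for illustration.
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model

eng_vocab_size, ger_vocab_size, units = 70, 85, 256  # assumed sizes

# Two input layers: one for the encoder, one for the decoder.
encoder_inputs = Input(shape=(None, eng_vocab_size))
decoder_inputs = Input(shape=(None, ger_vocab_size))

# Encoder LSTM: only its final hidden and cell states are kept;
# they summarize the English sentence.
_, state_h, state_c = LSTM(units, return_state=True)(encoder_inputs)

# Decoder LSTM: initialized with the encoder states, returns the
# full output sequence so every German time step gets a prediction.
decoder_lstm = LSTM(units, return_sequences=True, return_state=True)
decoder_out, _, _ = decoder_lstm(decoder_inputs,
                                 initial_state=[state_h, state_c])

# Softmax over the German vocabulary at every time step.
outputs = Dense(ger_vocab_size, activation="softmax")(decoder_out)

model = Model([encoder_inputs, decoder_inputs], outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

Passing the encoder's hidden and cell states as `initial_state` of the decoder is what connects the two halves: the decoder starts generating German conditioned on the encoder's summary of the English sentence.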

The encoder network is made up of two...