Codeless Deep Learning with KNIME

By: Kathrin Melcher and Rosaria Silipo, KNIME AG

Overview of this book

KNIME Analytics Platform is open source software for creating and designing data science workflows. This book is a comprehensive guide to the KNIME GUI and the KNIME deep learning integration, helping you build neural network models without writing any code. It guides you through building simple and complex neural networks, with practical and creative solutions to real-world data problems. Starting with an introduction to KNIME Analytics Platform, you'll get an overview of simple feedforward networks for solving classification problems on relatively small datasets. You'll then move on to build, train, test, and deploy more complex networks, such as autoencoders, recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and convolutional neural networks (CNNs). In each chapter, depending on the network and use case, you'll learn how to prepare and encode the incoming data and apply best practices. By the end of this book, you'll have learned how to design a variety of neural architectures and will be able to train, test, and deploy the final network.
Table of Contents (16 chapters)

Section 1: Feedforward Neural Networks and KNIME Deep Learning Extension
Section 2: Deep Learning Networks
Section 3: Deployment and Productionizing

Building and Training the Autoencoder

Let's go into detail about the application we will build to tackle fraud detection with a neural autoencoder. Like all data science projects, it consists of two separate applications: one to train and optimize the whole strategy on dedicated datasets, and one to put that strategy into action on real-world credit card transactions. The first application is implemented in the training workflow; the second in the deployment workflow.
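The KNIME workflows themselves are codeless, but it can help to see the kind of Keras network that the training workflow assembles behind the scenes. The following is only a minimal sketch under assumptions of our own: the layer sizes, file name, and synthetic stand-in data are illustrative, not the book's exact settings.

```python
# Minimal sketch of an autoencoder trained on legitimate transactions only.
# Layer sizes, the file name, and the synthetic data are assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 30        # number of numeric transaction features (assumed)
bottleneck = 8         # size of the compressed representation (assumed)

# Encoder compresses each transaction; decoder tries to reconstruct it.
inputs = keras.Input(shape=(n_features,))
encoded = layers.Dense(bottleneck, activation="tanh")(inputs)
decoded = layers.Dense(n_features, activation="linear")(encoded)
autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

# Stand-in for the lab dataset: normalized legitimate transactions only,
# so that frauds later stand out through a high reconstruction error.
legit_transactions = np.random.rand(1000, n_features).astype("float32")
autoencoder.fit(legit_transactions, legit_transactions,
                epochs=20, batch_size=64, verbose=0)
autoencoder.save("fraud_autoencoder.h5")  # hypothetical file name
```

Because the network only ever sees legitimate transactions during training, it learns to reconstruct them well; transactions it reconstructs poorly become fraud candidates in deployment.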

Tip

Often, training and deployment are separate applications since they work on different data and have different goals.

The training workflow uses a lab dataset to produce an acceptable model for the task, sometimes after a few different trials. The deployment workflow no longer changes the model or the strategy; it simply applies them to real-world transactions to raise fraud alarms.
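To make that concrete, here is a rough sketch of the scoring rule the deployment step applies, assuming the network saved by the training sketch above and a reconstruction-error threshold fixed during training; the threshold value and model path are assumptions for illustration.

```python
# Sketch of the deployment scoring logic: reconstruct each incoming
# transaction and raise an alarm when the reconstruction error exceeds
# a threshold chosen during training. Path and threshold are assumed.
import numpy as np
from tensorflow import keras

autoencoder = keras.models.load_model("fraud_autoencoder.h5")
threshold = 0.02  # reconstruction-error cutoff fixed during training (assumed)

def fraud_alarms(transactions: np.ndarray) -> np.ndarray:
    """Return a boolean alarm per transaction (True = suspected fraud)."""
    reconstructed = autoencoder.predict(transactions, verbose=0)
    errors = np.mean((transactions - reconstructed) ** 2, axis=1)
    return errors > threshold

# Example: score a small batch of (already normalized) transactions.
batch = np.random.rand(5, autoencoder.input_shape[1]).astype("float32")
print(fraud_alarms(batch))
```

Note that the model and the threshold are both frozen at this point; the deployment workflow only reads them and applies them to new data.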

In this section, we will focus on the training phase, including the following...