Codeless Deep Learning with KNIME

By: Kathrin Melcher, KNIME AG, Rosaria Silipo

Overview of this book

KNIME Analytics Platform is open-source software for creating and designing data science workflows. This book is a comprehensive guide to the KNIME GUI and the KNIME deep learning integration, helping you build neural network models without writing any code. It guides you through building both simple and complex neural networks, with practical and creative solutions to real-world data problems. Starting with an introduction to KNIME Analytics Platform, you’ll get an overview of feedforward networks for solving simple classification problems on relatively small datasets. You’ll then move on to building, training, testing, and deploying more complex networks, such as autoencoders, recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and convolutional neural networks (CNNs). In each chapter, depending on the network and use case, you’ll learn how to prepare and encode incoming data and apply best practices. By the end of this book, you’ll have learned how to design a variety of neural architectures and will be able to train, test, and deploy the final network.
Table of Contents (16 chapters)

Section 1: Feedforward Neural Networks and KNIME Deep Learning Extension
Section 2: Deep Learning Networks
Section 3: Deployment and Productionizing

Summary

In this chapter, we explored the topic of neural machine translation and trained a network to produce English-to-German translations.

We started with an introduction to automatic machine translation, covering its history from rule-based machine translation to neural machine translation. Next, we introduced encoder-decoder RNN-based architectures, which can be used for neural machine translation. More generally, encoder-decoder architectures can be applied to sequence-to-sequence prediction tasks and question-answering systems.
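To make the encoder-decoder idea concrete, here is a minimal NumPy sketch of the two-part architecture: the encoder compresses the source sequence into a fixed-size state, and the decoder unrolls from that state to produce one output distribution per target step. All weights, dimensions, and the simple tanh RNN cell are illustrative assumptions; the book builds the equivalent (LSTM-based) network codelessly with KNIME's Keras integration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out = 5, 8, 5          # source vocab, hidden state, target vocab sizes

def rnn_step(x, h, Wx, Wh, b):
    """One recurrent step: combine the current input with the previous state."""
    return np.tanh(x @ Wx + h @ Wh + b)

# Randomly initialized (untrained) encoder and decoder parameters.
Wx_e = rng.normal(scale=0.1, size=(d_in, d_hid))
Wh_e = rng.normal(scale=0.1, size=(d_hid, d_hid))
b_e = np.zeros(d_hid)
Wx_d = rng.normal(scale=0.1, size=(d_out, d_hid))
Wh_d = rng.normal(scale=0.1, size=(d_hid, d_hid))
b_d = np.zeros(d_hid)
Wo = rng.normal(scale=0.1, size=(d_hid, d_out))   # projection to target vocab

def encode(source_seq):
    """Compress the whole source sequence into the final hidden state."""
    h = np.zeros(d_hid)
    for x in source_seq:
        h = rnn_step(x, h, Wx_e, Wh_e, b_e)
    return h                           # the "context" handed to the decoder

def decode(context, target_inputs):
    """Unroll the decoder from the encoder's context, emitting a
    probability distribution over target characters at every step."""
    h, outputs = context, []
    for x in target_inputs:
        h = rnn_step(x, h, Wx_d, Wh_d, b_d)
        logits = h @ Wo
        outputs.append(np.exp(logits) / np.exp(logits).sum())  # softmax
    return np.array(outputs)

source = np.eye(d_in)[[0, 2, 1]]       # toy one-hot source sequence (3 characters)
targets = np.eye(d_out)[[3, 4]]        # toy decoder inputs (2 characters)
probs = decode(encode(source), targets)
```

Note that the source sequence can be any length: only its final hidden state is passed to the decoder, which is what lets the same architecture handle variable-length inputs and outputs.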

After that, we covered all the steps needed to train and apply a neural machine translation model at the character level, using a simple network structure with a single LSTM layer in both the encoder and the decoder. The joint network, formed by combining the encoder and decoder, was trained using the teacher forcing paradigm.
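Under teacher forcing, the decoder is fed the true previous target character at each training step rather than its own prediction, so the decoder target is simply the decoder input shifted left by one position. The following sketch shows how the three training sequences could be derived from one source/target pair; the function name and the start/end tokens ("\t", "\n") are illustrative assumptions, not the book's KNIME workflow.

```python
def build_teacher_forcing_pair(source, target, start="\t", end="\n"):
    """Return (encoder_input, decoder_input, decoder_target) as character
    lists for character-level teacher forcing. decoder_target is
    decoder_input shifted left by one, so at step i the decoder sees the
    true characters up to i and must predict the character at i+1."""
    enc_in = list(source)              # characters seen by the encoder
    dec_in = list(start + target)      # decoder input starts with the start token
    dec_out = list(target + end)       # decoder target ends with the end token
    return enc_in, dec_in, dec_out

enc_in, dec_in, dec_out = build_teacher_forcing_pair("cat", "Katze")
```

At prediction time no target is available, which is why the trained decoder must instead be fed its own previous output character, one step at a time.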

At the end of the training phase and before deployment, a lambda layer was inserted in the decoder part to...