Machine Learning for Finance

By: Jannes Klaas
Overview of this book

Machine Learning for Finance explores new advances in machine learning and shows how they can be applied across the financial sector, including insurance, transactions, and lending. This book explains the concepts and algorithms behind the main machine learning techniques and provides example Python code for implementing the models yourself. The book is based on Jannes Klaas’ experience of running machine learning training courses for financial professionals. Rather than providing ready-made financial algorithms, the book focuses on advanced machine learning concepts and ideas that can be applied in a wide variety of ways. The book systematically explains how machine learning works on structured data, text, images, and time series. You'll cover generative adversarial learning, reinforcement learning, debugging, and launching machine learning products. Later chapters will discuss how to fight bias in machine learning. The book ends with an exploration of Bayesian inference and probabilistic programming.

Understanding autoencoders


Technically, autoencoders are not generative models since they cannot create completely new kinds of data. Yet, variational autoencoders, a minor tweak to vanilla autoencoders, can. So, it makes sense to first understand autoencoders by themselves, before adding the generative element.

Autoencoders by themselves have some interesting properties that can be exploited for applications such as detecting credit card fraud, which fits this book's focus on finance.
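To make the fraud-detection idea concrete, here is a minimal sketch of how an autoencoder is typically used for anomaly detection: the model is trained on mostly legitimate transactions, so unusual (fraud-like) transactions reconstruct poorly and can be flagged by thresholding the reconstruction error. The error values below are synthetic stand-ins for illustration, not output from a real model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-transaction reconstruction errors: most transactions
# reconstruct well (small errors), a few fraud-like ones do not.
# These numbers are made up purely to illustrate the thresholding step.
errors = np.concatenate([
    rng.gamma(2.0, 0.05, size=990),   # "normal" transactions
    rng.gamma(2.0, 1.0, size=10),     # "anomalous" transactions
])

# Flag the transactions whose reconstruction error is in the top 1%
threshold = np.quantile(errors, 0.99)
flagged = np.flatnonzero(errors > threshold)
```

In practice the threshold would be tuned on a validation set against the business cost of false positives versus missed fraud.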

Given an input, x, an autoencoder learns how to output x. It aims to find a function, f, so that the following is true:

f(x) = x

This might sound trivial at first, but the trick here is that autoencoders have a bottleneck. The middle hidden layer's size is smaller than the size of the input, x. Therefore, the model has to learn a compressed representation that captures all of the important elements of x in a smaller vector.
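As a hedged illustration of the bottleneck idea, the following sketch trains a linear autoencoder with plain gradient descent in NumPy. The layer sizes (8 input features, a 2-unit bottleneck) and the training settings are illustrative choices, not from the book; the point is that the model must squeeze the input through the smaller middle representation and still reconstruct it:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))           # toy data: 200 samples, 8 features

W_enc = rng.normal(scale=0.1, size=(8, 2))   # encoder: 8 -> 2 (bottleneck)
W_dec = rng.normal(scale=0.1, size=(2, 8))   # decoder: 2 -> 8

lr = 0.1
for _ in range(1000):
    Z = X @ W_enc                       # compressed representation
    X_hat = Z @ W_dec                   # reconstruction of the input
    err = X_hat - X                     # reconstruction error
    # Gradient descent on the mean squared reconstruction error
    W_dec -= lr * Z.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

mse = np.mean((X @ W_enc @ W_dec - X) ** 2)
```

Because the bottleneck has only two units, the model cannot copy all eight dimensions through; it is forced to keep the directions that explain most of the variance, which is why the reconstruction error drops but does not reach zero on random data. Real autoencoders add nonlinear activations between the layers, but the bottleneck principle is the same.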

This can best be shown in the following diagram, where we can see a compressed representation of...