Machine Learning for Streaming Data with Python

By: Joos Korstanje

Overview of this book

Streaming data is the new top technology to watch out for in the field of data science and machine learning. As business needs become more demanding, many use cases require real-time analysis as well as real-time machine learning. This book will help you to get up to speed with data analytics for streaming data and focus strongly on adapting machine learning and other analytics to the case of streaming data. You will first learn about the architecture for streaming and real-time machine learning. Next, you will look at the state-of-the-art frameworks for streaming data like River. Later chapters will focus on various industrial use cases for streaming data like Online Anomaly Detection and others. As you progress, you will discover various challenges and learn how to mitigate them. In addition to this, you will learn best practices that will help you use streaming data to generate real-time insights. By the end of this book, you will have gained the confidence you need to stream data in your machine learning models.
Table of Contents (17 chapters)

Part 1: Introduction and Core Concepts of Streaming Data
Part 2: Exploring Use Cases for Data Streaming
Part 3: Advanced Concepts and Best Practices around Streaming Data

Chapter 12: Conclusion and Best Practices

Catastrophic forgetting in online models

Although catastrophic forgetting was initially identified as a problem for neural networks, online machine learning runs the same risk, since the model keeps re-learning on every new observation. The problem of catastrophic forgetting, also known as catastrophic interference, is therefore present here as well and needs to be managed.

If models are updated at every new data point, coefficients are expected to change over time. Yet modern machine learning algorithms are complex and can have huge numbers of coefficients or trees, so keeping a close eye on them is a fairly difficult task.
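As a rough sketch of what such monitoring could look like, the following code trains a River linear_model.LinearRegression on a synthetic stream and periodically snapshots its coefficients. It assumes the model exposes its coefficients through the weights and intercept attributes; the feature names, the synthetic relationship, and the snapshot interval are purely illustrative.

```python
import random
from river import linear_model

# Online linear model whose coefficients we want to watch over time
model = linear_model.LinearRegression()

coef_history = []  # periodic snapshots of the learned coefficients

for i in range(1000):
    # Illustrative synthetic stream: y = 3*f1 - 2*f2 + noise
    x = {"f1": random.gauss(0, 1), "f2": random.gauss(0, 1)}
    y = 3 * x["f1"] - 2 * x["f2"] + random.gauss(0, 0.1)

    model.learn_one(x, y)  # update the model on every new data point

    # Every 100 observations, store a copy of the current coefficients
    if i % 100 == 0:
        coef_history.append((i, dict(model.weights), model.intercept))

# Inspect how the coefficients drifted while the model kept learning
for step, weights, intercept in coef_history:
    print(step, weights, round(intercept, 3))
```

In a real pipeline, these snapshots would be logged or plotted so that a sudden jump in the coefficients, as opposed to gradual drift, can be spotted and investigated.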

In an ideal world, the most beneficial goal would probably be to avoid any wrong learning in your machine learning altogether. One way to do this is to keep a close eye on model performance and to keep a tight versioning system in place, so that even if your model does learn something wrong, that version never gets deployed to a production system. We will go into this...
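As a hedged sketch of this idea, the following code keeps a continuously learning candidate model separate from the version that would actually be served, and only "releases" a deep copy of the candidate when its accuracy on the stream stays above a threshold. The dataset, the promotion interval, and the accuracy threshold are illustrative choices, not prescriptions.

```python
import copy
from river import datasets, linear_model, metrics

candidate = linear_model.LogisticRegression()   # learns on every new data point
production = copy.deepcopy(candidate)           # frozen version that would be served
metric = metrics.Accuracy()

PROMOTE_EVERY = 200     # how often to consider releasing a new version (illustrative)
MIN_ACCURACY = 0.8      # never promote a model that has "learned wrong" (illustrative)

for i, (x, y) in enumerate(datasets.Phishing()):
    # Prequential evaluation: predict first, then learn from the same point
    y_pred = candidate.predict_one(x)
    metric.update(y, y_pred)
    candidate.learn_one(x, y)

    # Versioning gate: only promote the candidate if performance holds up
    if i > 0 and i % PROMOTE_EVERY == 0:
        if metric.get() >= MIN_ACCURACY:
            production = copy.deepcopy(candidate)   # new "release" of the model
        # otherwise keep serving the last good version

print(f"Final accuracy of the candidate: {metric.get():.3f}")
```

The key point is that the continuously learning candidate is never served directly: only a copied version that has passed the performance gate reaches production, so a stretch of wrong learning never overwrites the last known good model.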