Machine Learning Automation with TPOT

By: Dario Radečić

Overview of this book

The automation of machine learning tasks gives developers more time to focus on the usability and reactivity of the software powered by machine learning models. TPOT is a Python automated machine learning tool that optimizes machine learning pipelines using genetic programming. Automating machine learning with TPOT enables individuals and companies to develop production-ready machine learning models more cheaply and quickly than with traditional methods. With this practical guide to AutoML, developers working with Python on machine learning tasks will be able to put their knowledge to work and become productive quickly. You'll adopt a hands-on approach to learning the implementation of AutoML and associated methodologies. Complete with step-by-step explanations of essential concepts, practical examples, and self-assessment questions, this book will show you how to build automated classification and regression models and compare their performance to custom-built models. As you advance, you'll also develop state-of-the-art models using only a couple of lines of code and see how those models outperform all of your previous models on the same datasets. By the end of this book, you'll have gained the confidence to implement AutoML techniques in your organization on a production level.
Table of Contents (14 chapters)

Section 1: Introducing Machine Learning and the Idea of Automation
Section 2: TPOT – Practical Classification and Regression
Section 3: Advanced Examples and Neural Networks in TPOT

Deploying machine learning models to localhost

We'll have to train a model before we can deploy it. You already know everything about training with TPOT, so we won't spend too much time here. The goal is to train a simple Iris classifier and export the predictive functionality somehow. Let's go through the process step by step:

  1. As always, the first step is to load in the libraries and the dataset. You can use the following piece of code to do so:
    import pandas as pd

    # Load the Iris dataset and inspect the first few rows
    df = pd.read_csv('data/iris.csv')
    df.head()

    This is what the first few rows look like:

    Figure 8.10 – The first few rows of the Iris dataset

  2. The next step is to separate the features from the target variable. This time, we won't split the dataset into training and testing subsets, as we don't intend to evaluate the model's performance. In other words, we know the model performs well, and now we want to retrain it on the entire dataset. Also, string values in the target...
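The separation described in step 2 can be sketched as follows. This is a minimal illustration, not the book's exact listing: it rebuilds the Iris DataFrame from scikit-learn instead of `data/iris.csv`, and the target column name `species` is an assumption. It also shows one common way to encode the string target as integers, since TPOT expects numeric labels:

```python
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.preprocessing import LabelEncoder

# Rebuild an Iris DataFrame so the sketch is self-contained;
# the book loads 'data/iris.csv' instead
iris = load_iris(as_frame=True)
df = iris.frame.rename(columns={'target': 'species'})
df['species'] = df['species'].map(dict(enumerate(iris.target_names)))

# Separate the features from the target; no train/test split,
# because the model is retrained on the entire dataset
X = df.drop('species', axis=1)

# Encode the string labels ('setosa', 'versicolor', 'virginica') as 0/1/2
y = LabelEncoder().fit_transform(df['species'])
```

`X` and `y` can then be passed directly to a TPOT estimator's `fit` method.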