
Deep Learning with fastai Cookbook

By: Mark Ryan

Overview of this book

fastai is an easy-to-use deep learning framework built on top of PyTorch that lets you rapidly create complete deep learning solutions in as few as 10 lines of code. The two predominant low-level deep learning frameworks, TensorFlow and PyTorch, require a lot of code even for straightforward applications; fastai, in contrast, handles the messy details for you and lets you focus on applying deep learning to actually solve problems.

The book begins by summarizing the value of fastai and showing you how to create a simple 'hello world' deep learning application with it. You'll then learn how to use fastai for all four application areas that the framework explicitly supports: tabular data, text data (NLP), recommender systems, and vision data. As you advance, you'll work through a series of practical examples that illustrate how to create real-world applications of each type. Next, you'll learn how to deploy fastai models, including creating a simple web application that predicts what object is depicted in an image. The book wraps up with an overview of the advanced features of fastai.

By the end of this fastai book, you'll be able to create your own deep learning applications using fastai. You'll also have learned how to use fastai to prepare raw datasets, explore datasets, train deep learning models, and deploy trained models.

Training a deep learning classification model with a curated text dataset

In the previous section, we fine-tuned a language model on the curated IMDb text dataset; given a sequence of words, that model predicts the words most likely to follow. In this section, we will reuse that fine-tuned language model to train a text classification model that classifies text samples specific to the movie review use case.

Getting ready

This recipe makes use of the encoder you trained in the previous section, so ensure that you have followed the steps in that recipe and, in particular, that you have saved the encoder from the trained language model.

As mentioned in the previous section, you need to take some additional steps before you can run recipes in Gradient that use the language model pre-trained on the Wikipedia corpus. To ensure that you have access to the pre-trained language model that you...