Distributed Machine Learning with Python
Giant NLP models, such as BERT, are often hard to train on a single GPU (that is, a single node). The main reason is the limited on-device memory.
Here, we will first fine-tune a BERT model on a single GPU, using the SQuAD 2.0 dataset. This setup often throws an Out-of-Memory (OOM) error because of the model's size and the huge intermediate results it produces.
Second, we will use a state-of-the-art GPU and do our best to fit the relatively small BERT-base model on a single GPU. We will then carefully adjust the batch size to avoid OOM errors.
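The batch-size adjustment described above can be sketched as a simple retry loop: attempt a training step, and halve the batch size whenever the step raises a CUDA OOM error. The sketch below simulates this logic in plain Python; `fits_in_memory` and the `memory_limit` of 16 are hypothetical stand-ins for a real forward/backward pass on the GPU.

```python
# Sketch of the halve-on-OOM batch-size search described above.
# fits_in_memory() is a hypothetical stand-in: in a real run, the failing
# call would be the forward/backward pass, and PyTorch would raise a
# RuntimeError whose message contains "CUDA out of memory".

def fits_in_memory(batch_size, memory_limit=16):
    """Pretend each sample costs one unit of GPU memory (illustrative only)."""
    if batch_size > memory_limit:
        raise RuntimeError("CUDA out of memory")
    return True

def find_max_batch_size(start=64, memory_limit=16):
    """Halve the batch size until a training step succeeds."""
    batch_size = start
    while batch_size >= 1:
        try:
            fits_in_memory(batch_size, memory_limit)
            return batch_size
        except RuntimeError as err:
            if "out of memory" not in str(err):
                raise  # unrelated error: do not swallow it
            batch_size //= 2  # OOM: retry with half the batch
    raise RuntimeError("Even batch size 1 does not fit on this GPU")

print(find_max_batch_size())  # prints 16 with the simulated limit above
```

In practice, you would run this search once against the real model and dataset, then hard-code the largest batch size that fits for the rest of training.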
The first thing we need to do is install the transformers library, provided by Hugging Face, on our machine. The following command installs it on an Ubuntu machine running PyTorch:
$ pip install transformers
Please make sure you are installing the correct transformers version (...
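One way to confirm which version ended up installed is to print it from Python; you can also pin an exact version with pip (the version number below is illustrative, not the one the book requires):

```shell
# Print the installed transformers version to verify it matches the requirement
python -c "import transformers; print(transformers.__version__)"
# Pin a specific version if needed (version number is illustrative):
# pip install transformers==4.21.0
```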