Caffe2 Quick Start Guide

By : Ashwin Nanjappa
Overview of this book

Caffe2 is a popular deep learning library used for fast and scalable training and inference of deep learning models on different platforms. This book introduces you to the Caffe2 framework and demonstrates how you can leverage its power to build, train, and deploy efficient neural network models at scale. The Caffe2 Quick Start Guide will help you install Caffe2, compose networks using its operators, train models, and deploy models to different architectures. The book will also guide you in importing models from Caffe and other frameworks using the ONNX interchange format. You will then cover deep learning accelerators such as the CPU and GPU and learn how to deploy Caffe2 models for inference on accelerators using inference engines. Finally, you'll understand how to deploy Caffe2 to a diverse set of hardware, using containers on the cloud and on resource-constrained hardware such as the Raspberry Pi. By the end of this book, you will be able not only to compose and train popular neural network models with Caffe2, but also to deploy them on accelerators, to the cloud, and on resource-constrained platforms such as mobile and embedded hardware.
Table of Contents (9 chapters)

The relationship between Caffe and Caffe2

At the NIPS academic conference held in 2012, Alex Krizhevsky and his collaborators, one of whom was the neural network pioneer Geoffrey Hinton, presented a record-breaking result at the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), in which research teams competed on various image recognition tasks based on the ImageNet dataset. Krizhevsky's result on the image classification task was 10.8% better than the state of the art. He had, for the first time, used GPUs to train a CNN with many layers; this network structure would later come to be popularly known as AlexNet. The design of such deep neural networks with a large number of layers is the reason why this field came to be called deep learning. Krizhevsky shared the entire source code of his network, now called cuda-convnet, along with its highly GPU-optimized training code.

Soon...