Caffe2 Quick Start Guide

By: Ashwin Nanjappa
Intel OpenVINO

OpenVINO consists of libraries and tools created by Intel that enable you to optimize a trained DL model from a variety of frameworks and then deploy it using an inference engine on Intel hardware. Supported hardware includes Intel CPUs, the integrated graphics in Intel CPUs, the Intel Movidius Neural Compute Stick, and Intel FPGAs. OpenVINO is available free of charge from Intel.

OpenVINO includes the following components:

  • Model optimizer: A tool that imports a trained DL model from another framework, converts it, and optimizes it. Supported frameworks include Caffe, TensorFlow, MXNet, and ONNX. Note the absence of direct support for Caffe2 or PyTorch; models from those frameworks must first be exported to ONNX (see the sketch after this list).
  • Inference engine: Libraries that load the optimized model produced by the model optimizer and let your application run it on Intel hardware.
  • Demos and samples: These simple applications...
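
To make the two-step workflow concrete, here is a minimal sketch. It assumes a model already exported to ONNX (for a Caffe2 model, this is the ONNX export route covered elsewhere in this book), placeholder file names such as model.onnx, and the Python inference engine API (IECore) found in 2020-era OpenVINO releases; newer releases moved to the openvino.runtime module, so treat the exact names as assumptions to check against your installed version.

# Step 1: convert the ONNX model to OpenVINO IR (.xml + .bin) with the
# model optimizer; this is a shell command, shown here as a comment:
#
#   python mo.py --input_model model.onnx --output_dir ir/

import numpy as np
from openvino.inference_engine import IECore  # 2020-era API (assumption)

# Step 2: load the IR and run it on an Intel device.
ie = IECore()
net = ie.read_network(model="ir/model.xml", weights="ir/model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")  # or "GPU", "MYRIAD"

# Build a dummy input that matches the network's expected shape.
input_name = next(iter(net.input_info))
shape = net.input_info[input_name].input_data.shape  # e.g. [1, 3, 224, 224]
dummy = np.random.rand(*shape).astype(np.float32)

# Run inference and print the shape of each output blob.
result = exec_net.infer(inputs={input_name: dummy})
print({name: blob.shape for name, blob in result.items()})

The device_name argument is what selects among the Intel hardware targets listed earlier; switching from CPU to MYRIAD, for example, runs the same IR on a Movidius Neural Compute Stick without changing the model.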