Python Deep Learning Cookbook

By: Indra den Bakker

Overview of this book

Deep learning is revolutionizing a wide range of industries. For many applications, deep learning has proven to outperform humans by making faster and more accurate predictions. This book provides a top-down and bottom-up approach to demonstrate deep learning solutions to real-world problems in different areas. These applications include Computer Vision, Natural Language Processing, Time Series, and Robotics. The Python Deep Learning Cookbook presents technical solutions to the issues presented, along with a detailed explanation of the solutions. Furthermore, it discusses the pros and cons of implementing each proposed solution using one of the popular frameworks such as TensorFlow, PyTorch, Keras, and CNTK. The book includes recipes covering the basic concepts of neural networks, as well as classical network topologies. The main purpose of this book is to provide Python programmers with a detailed list of recipes to apply deep learning to common and not-so-common scenarios.

Launching an instance on Amazon Web Services (AWS)


Amazon Web Services (AWS) is the most popular cloud solution. If you don't have access to a local GPU or if you prefer to use a server, you can set up an EC2 instance on AWS. In this recipe, we provide steps to launch a GPU-enabled server.

Getting ready

Before we move on with this recipe, we assume that you already have an account on Amazon AWS and that you are familiar with its platform and the accompanying costs.

How to do it...

  1. Make sure the region you want to work in gives access to P2 or G3 instances. These instances include NVIDIA K80 GPUs and NVIDIA Tesla M60 GPUs, respectively. The K80 GPU is faster and has more GPU memory than the M60 GPU: 12 GB versus 8 GB. 

Note

While the NVIDIA K80 and M60 GPUs are powerful GPUs for running deep learning models, these should not be considered state-of-the-art. Other faster GPUs have already been launched by NVIDIA and it takes some time before these are added to cloud solutions. However, a big advantage of these cloud machines is that it is straightforward to scale the number of GPUs attached to a machine; for example, Amazon's p2.16xlarge instance has 16 GPUs.

  2. There are two options when launching an AWS instance. Option 1: you build everything from scratch. Option 2: you use a preconfigured Amazon Machine Image (AMI) from the marketplace. If you choose option 2, you will have to pay additional costs for the AMI. For an example, see this AMI at https://aws.amazon.com/marketplace/pp/B06VSPXKDX.
  3. Amazon provides a detailed and up-to-date overview of the steps to launch the deep learning AMI at https://aws.amazon.com/blogs/ai/get-started-with-deep-learning-using-the-aws-deep-learning-ami/.
  4. If you want to build the server from scratch, launch a P2 or G3 instance and follow the steps in the Installing CUDA and cuDNN and Installing Anaconda and Libraries recipes.
  5. Always make sure you stop running instances when you're done to prevent unnecessary costs; a programmatic way to launch and stop an instance is sketched after this list.
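The launch-and-stop workflow can also be scripted with boto3, the AWS SDK for Python. The following is a minimal sketch rather than the book's own code; the AMI ID, key pair name, security group, and region are placeholders you would replace with your own values:

    import boto3  # AWS SDK for Python

    # Placeholder values -- replace with your own AMI ID, key pair, and security group
    AMI_ID = 'ami-xxxxxxxx'            # for example, the Deep Learning AMI in your region
    KEY_NAME = 'my-key-pair'
    SECURITY_GROUP_ID = 'sg-xxxxxxxx'

    ec2 = boto3.client('ec2', region_name='us-east-1')

    # Launch a single GPU-enabled P2 instance
    response = ec2.run_instances(
        ImageId=AMI_ID,
        InstanceType='p2.xlarge',
        MinCount=1,
        MaxCount=1,
        KeyName=KEY_NAME,
        SecurityGroupIds=[SECURITY_GROUP_ID],
    )
    instance_id = response['Instances'][0]['InstanceId']
    print('Launched instance:', instance_id)

    # When you're done working, stop the instance to avoid unnecessary costs
    ec2.stop_instances(InstanceIds=[instance_id])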

Note

A good option to save costs is to use AWS Spot instances. This allows you to bid on spare Amazon EC2 computing capacity.
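As a rough illustration, a Spot request can be made programmatically with boto3 as well. The snippet below is a sketch with placeholder values; the maximum price, AMI ID, and key pair are assumptions, not values from the book:

    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')

    # Request a single Spot instance (all values below are placeholders)
    response = ec2.request_spot_instances(
        SpotPrice='0.30',                  # maximum price per hour you are willing to pay
        InstanceCount=1,
        LaunchSpecification={
            'ImageId': 'ami-xxxxxxxx',     # for example, the Deep Learning AMI in your region
            'InstanceType': 'p2.xlarge',
            'KeyName': 'my-key-pair',
        },
    )
    print(response['SpotInstanceRequests'][0]['SpotInstanceRequestId'])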