Heroku Cookbook

By: Mike Coutermarsh

Overview of this book

Heroku is a Platform as a Service that enables developers to rapidly deploy and scale their web applications. Heroku is designed for developer happiness, freeing developers from system administration tasks such as configuring servers and setting up load balancers. Developers are able to focus on what they do best, building web applications, while leaving the details of deployment and scaling to the experts at Heroku. This practical guide is packed with step-by-step solutions to problems faced by every production-level web application hosted on Heroku. You'll quickly get comfortable with managing your Heroku applications from the command line and then learn everything you need to know to deploy and administer production-level web applications.
Table of Contents (17 chapters)

Introducing dynos, workers, and scaling


Heroku's killer feature has always been its ability to easily scale up and scale out our applications as our user base grows. This frees us from the pains of setting up and managing load balancers and additional servers on our own. In this recipe, we will be introduced to Heroku's dynos and workers as well as learn how to scale them both up and out as our applications grow.

Note

Scaling up and scaling out are two common terms used when growing web applications:

  • Scaling up (vertical scaling) means that we are making our servers more powerful by adding more CPU/RAM

  • Scaling out (horizontal scaling) means that we are adding more servers to our application

What's a dyno?

Dyno is the term Heroku uses for its web servers. A dyno is simply a virtual private server that runs our application and responds to web requests.

Note

Heroku provides us with one free 1X dyno per month. This is useful for testing and development.

What's a worker?

Heroku has an additional class of servers known as workers. These are identical to dynos, with the exception that they do not serve web requests.

Process sizes

Both dynos and workers are available in three sizes: 1X, 2X, and PX. The default size is 1X; this is a small virtual server with 512 MB of RAM, which is large enough to run most web applications. However, if we find that our application is constrained by limited memory or CPU, we can scale our dynos up to 2X, which provides 1,024 MB of RAM and twice as much computing power.

Note

If our application has only a single 1X dyno running, it will shut down after an hour of inactivity. To avoid this, we need to have at least two dynos running or use a single 2X dyno.

The largest process size is the PX or performance dyno. These are dedicated virtual servers that do not share resources with any other Heroku customers. They have 6 GB of RAM and 40 times the compute resources of the standard 1X-sized dyno. Performance dynos should only be considered for applications that have high memory and CPU requirements.

Note

Heads up! Performance dynos are expensive, so don't accidentally leave one running.
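If we do decide a performance dyno is warranted, we can switch to it with the same resize command covered later in this recipe. A sketch of what that looks like (the `web=px` size name follows the CLI's lowercase size convention used elsewhere in this recipe):

```shell
# Resize the web process to a dedicated performance (PX) dyno
$ heroku ps:resize web=px

# When the extra capacity is no longer needed, dial it back down
# to 1X so we aren't billed for an idle performance dyno
$ heroku ps:resize web=1x
```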

How to do it...

We'll use the Heroku CLI for this recipe. Let's open up a terminal and navigate to a directory with one of our existing Heroku applications and perform the following steps:

  1. To view our currently running processes, we can use the ps command. It will show the type, the size, and exactly what's running:

    $ heroku ps
    === web (1X): `bundle exec unicorn -p $PORT -c ./config/unicorn.rb`
    web.1: up 2014/03/15 19:41:27 (~ 8s ago)
    
  2. We currently have only one dyno running for this application. Let's scale it up to two; this will effectively double our application's capacity. Processes are scaled with the ps:scale command:

    $ heroku ps:scale web=2
    Scaling dynos... done, now running web at 2:1X.
    
  3. The scale command is very flexible. If we want, we can scale both dynos and workers at the same time:

    $ heroku ps:scale web=2 worker=1
    Scaling dynos... done, now running worker at 1:1X, web at 2:1X.
    

    Note

    We can run these commands on any of our applications by including --app app_name at the end of the command.

  4. We can change the size of our dynos using ps:resize. Let's scale up our web dynos to 2X:

    $ heroku ps:resize web=2x
    Resizing and restarting the specified dynos... done
    web dynos now 2X ($0.10/dyno-hour)
    
  5. We can also scale and change the size in the same command. Let's dial our dynos back down to one and adjust the size to 1X:

    $ heroku ps:scale web=1:1x
    Scaling dynos... done, now running web at 1:1X.
    

    Note

    The name of the process we are scaling depends on what is in our application's Procfile. In these examples, our processes are named web and worker. Web processes are the only ones that Heroku will send web traffic to. We can name our other processes anything we like.

  6. To finish up, we can scale our workers back down to zero:

    $ heroku ps:scale worker=0
    Scaling dynos... done, now running worker at 0:1X.
    

How it works…

Now that we have learned how to scale our applications, let's go a little more in depth to learn about the different types of Heroku dynos.

Dynos

A dyno is simply a web server. When we create our application's Procfile, the web process that we define is what runs on our dynos. When a user visits our web application, their requests get sent to our dynos via Heroku's routing layer. The routing layer acts like a load balancer. It distributes our users' requests and monitors the health of our dynos. To handle more users, we can scale out our application by increasing the number of running dynos. This allows us to serve requests from more concurrent users. If we are currently running one dyno and add another, we have theoretically doubled the amount of web requests that our application can respond to.
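For example, the unicorn process shown in the ps output earlier in this recipe would come from a Procfile entry like the following (the config file path is illustrative; yours may differ):

```
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
```

Heroku reads this file on deploy and runs the web command on every web dyno we scale to.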

Workers

In our Procfile, any process other than web will run on a worker. Workers are used to process background tasks such as sending out e-mails or generating PDFs. Any task that a user should not have to wait for is a good candidate to run on a worker. For a Rails application, any background job (such as Resque or Sidekiq) will need to be run on a worker dyno. Workers can be scaled in exactly the same way as dynos. If our application has a large backlog of tasks that need to be completed, we can add additional workers to increase the number of tasks we can complete simultaneously.
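A worker is declared in the same Procfile, alongside the web process. Here is a hypothetical pair of entries for a Rails application using Sidekiq (the name worker is our own choice; only the name web is special to Heroku's router):

```
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
worker: bundle exec sidekiq
```

With this Procfile in place, heroku ps:scale worker=1 would start a dyno running the sidekiq command.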

One-time dynos

When we use heroku run to execute a command on our application, Heroku spins up a new dyno specifically to run the command. It's called a one-time dyno. Once the command is complete, it will shut itself down.
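For instance, running a database migration or opening a console both happen on one-time dynos (these commands assume a Rails application):

```shell
# Each command boots a fresh one-time dyno, runs, and then shuts down
$ heroku run rake db:migrate
$ heroku run rails console
```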

See also

  • To learn more about scaling, take a look at Chapter 6, Load Testing a Heroku Application