Thinking in terms of the cloud, and not infrastructure


We will now describe a real incident that took place in a datacenter in late December 2011, when dozens of alerts were received from our live monitoring system after connectivity to the datacenter was lost. In response, the administrators rushed to the Network Operations Center (NOC), hoping it was only a small glitch in the monitoring system. With so much redundancy, how could everything go offline? Unfortunately, the big monitoring screens in the NOC room were all red, which is never a good sign. This was the beginning of a very long nightmare.

As it happens, this was caused by an electrician working in the datacenter who mistakenly triggered the fire alarm. Within seconds, the fire suppression system activated and released its argonite gas on top of the server racks. Unfortunately, this kind of fire suppression system makes so much noise when it releases its gas that the sound waves instantly killed hundreds of hard drives, effectively shutting down the datacenter facility. It took months to recover from this.

Deploying your own hardware versus in the cloud

It wasn't long ago that tech companies, small and large, had to have a proper technical operations team capable of building out infrastructure. The process went a little bit like this:

  1. Fly to the location where you want to set up your infrastructure and take a tour of different datacenters and their facilities. Evaluate the floor space, power capacity, Heating, Ventilation, and Air Conditioning (HVAC), fire prevention systems, physical security, and so on.
  2. Shop for an internet service provider. Ultimately, you are dealing with servers and a lot more bandwidth, but the process is the same: you want to acquire internet connectivity for your servers.
  3. Once this is done, it's time to buy your hardware. Make the right decisions here, because you will probably spend a big portion of your company's money on selecting and buying servers, switches, routers, firewalls, storage, UPS units (for when you have a power outage), KVMs, network cables, labeling (which is dear to every system administrator's heart), and a set of spare parts: hard drives, RAID controllers, memory, power cables, and so on.
  4. Once the hardware has been purchased and shipped to the datacenter location, you can rack everything, wire all the servers, and power everything on. Your network team can then kick in and establish connectivity to the new datacenter using various links, configuring the edge routers, switches, top-of-rack switches, KVMs, and sometimes the firewalls. Your storage team is next, and will provide the much-needed Network Attached Storage (NAS) or Storage Area Network (SAN). Finally, your sysops team will image the servers, upgrade the BIOS (sometimes), configure the hardware RAID, and put an OS on the servers.

Not only is this a full-time job for a big team, but it also takes a lot of time and money just to get there. As you will see in this book, getting new servers up and running with AWS takes only a few minutes. In fact, you will soon see how to deploy and run multiple services in a few minutes, and only when you need them, with the pay-what-you-use model.
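To make that contrast concrete, here is a minimal sketch of what provisioning a single server looks like with AWS, using the boto3 Python SDK. The AMI ID, key pair, and security group below are placeholders that you would replace with your own values:

```python
import boto3

# A minimal sketch of launching a single virtual server (EC2 instance).
# The AMI ID, key pair name, and security group ID are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",             # placeholder AMI ID
    InstanceType="t2.micro",
    KeyName="my-key-pair",                        # placeholder key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],    # placeholder security group
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched instance {instance_id}")
```

Compare this with the weeks or months of lead time in the traditional process described above; an instance launched this way is typically running within a minute or two.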

Cost analysis

From the perspective of cost, deploying services and applications in a cloud infrastructure such as AWS usually ends up being a lot cheaper than buying your own hardware. If you want to deploy your own hardware, you have to pay upfront for all of the hardware mentioned previously (servers, network equipment, storage, and so on), and in some cases for licensed software as well. In a cloud environment, you pay as you go. You can add or remove servers in no time, and will only be charged for the duration in which the servers were running. Also, if you take advantage of PaaS and SaaS applications, you will usually end up saving even more money by lowering your operating costs, as you won't need as many administrators to administer your servers, databases, storage, and so on. Most cloud providers (AWS included) also offer tiered pricing and volume discounts, so as your service grows, you will end up paying less for each unit of storage, bandwidth, and so on.
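As a rough, purely illustrative comparison (every figure below is a made-up assumption, not real AWS pricing), the difference between an upfront hardware purchase and pay-as-you-go billing can be sketched like this:

```python
# Purely illustrative numbers: the hardware prices, lifetimes, and hourly
# rates below are assumptions for the sake of comparison, not AWS pricing.

# Owning the hardware: pay for peak capacity upfront, amortized over its lifetime.
servers_for_peak = 40
cost_per_server = 5_000            # USD, assumed purchase price
hardware_lifetime_years = 3
upfront_cost = servers_for_peak * cost_per_server
yearly_owned_cost = upfront_cost / hardware_lifetime_years

# Pay as you go: pay only for the servers you actually run each hour.
hourly_rate = 0.10                 # USD per server-hour, assumed on-demand rate
average_servers_running = 12       # scaled down outside of traffic spikes
hours_per_year = 24 * 365
yearly_cloud_cost = average_servers_running * hourly_rate * hours_per_year

print(f"Owned hardware, per year: ${yearly_owned_cost:,.0f}")
print(f"Pay as you go, per year:  ${yearly_cloud_cost:,.0f}")
```

The exact numbers will vary widely from one workload to another; the point is simply that paying only for the hours you actually use, rather than for peak capacity, changes the cost model.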

Just-in-time infrastructure

As you just saw, when deploying in the cloud, you only pay for the resources you actually use. Most companies running in the cloud use this to their advantage, scaling their infrastructure up or down as the traffic to their sites changes. This ability to add or remove new servers and services in no time and on demand is one of the main differentiators of an effective cloud infrastructure.

In the following example, you can see the amount of traffic at https://www.amazon.com/ during the month of November. Thanks to Black Friday and Cyber Monday, the traffic triples at the end of the month:

If the company were hosting their service in the old-fashioned way, they would need to provision enough servers to handle this peak traffic, meaning that, on average, only 24% of their infrastructure would be in use during the month:

However, thanks to being able to scale dynamically, they can provision only what they really need, and then dynamically absorb the spikes in traffic that Black Friday and Cyber Monday trigger:

You can also see the benefits of having fast auto-scaling capabilities on a very regular basis, across multiple organizations using the cloud. This is another real case study, this time taken from the company Medium. Very often, stories go viral, and the amount of traffic they receive changes drastically. On January 21, 2015, the White House posted the transcript of the State of the Union minutes before President Obama began his speech: http://bit.ly/2sDvseP. As you can see in the following graph, thanks to being in the cloud and having auto-scaling capabilities, the platform was able to absorb the instant fivefold spike in traffic that the announcement caused by doubling the number of servers that the frontend service used. Later, as the traffic drained naturally, Medium automatically removed some hosts from its fleet:
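This kind of elasticity is typically implemented with an Auto Scaling group and a scaling policy. The following is a minimal sketch using the boto3 Python SDK; the group name, launch template, and subnet IDs are placeholders, and the 50% CPU target is an arbitrary example value:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Create an Auto Scaling group that can grow from 2 to 20 instances.
# The launch template name and subnet IDs below are placeholders.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="frontend-asg",
    LaunchTemplate={"LaunchTemplateName": "frontend-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=20,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0123456789abcdef0,subnet-0fedcba9876543210",
)

# Target-tracking policy: add or remove instances automatically so that
# average CPU utilization stays around 50% (an arbitrary example target).
autoscaling.put_scaling_policy(
    AutoScalingGroupName="frontend-asg",
    PolicyName="keep-cpu-around-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```

With a policy like this in place, traffic spikes such as the one above are absorbed by adding instances automatically, and the fleet shrinks again as traffic drains, which is exactly the behavior described in the Medium example.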

The different layers of a cloud

Cloud computing is often broken down into three different types of services, generally called service models, as follows:

  • Infrastructure as a Service (IaaS): This is the fundamental building block, on top of which everything related to the cloud is built. IaaS usually provides computing resources in a virtualized environment, offering a combination of processing power, memory, storage, and network. The most common IaaS entities that you will find are Virtual Machines (VMs); network equipment, such as load balancers or virtual Ethernet interfaces; and storage, such as block devices. This layer is very close to the hardware, and offers the full flexibility that you would get when deploying your software outside of a cloud. If you have any experience with datacenters, that experience will mostly apply to this layer.
  • Platform as a Service (PaaS): This layer is where things start to get really interesting with the cloud. When building an application, you will likely need a certain number of common components, such as a data store and a queue. The PaaS layer provides a number of ready-to-use applications, to help you build your own services without worrying about administrating and operating third-party services, such as database servers.
  • Software as a Service (SaaS): This layer is the icing on the cake. Similar to the PaaS layer, you get access to managed services, but this time, these services are a complete solution dedicated to certain purposes, such as management or monitoring tools.

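To make the three layers more concrete, here is one rough mapping of familiar AWS services onto them. This classification is a judgment call rather than an official taxonomy, and the boundaries between the layers are blurry:

```python
# One rough, non-authoritative mapping of AWS services to service models.
# Classifications vary between sources; this only illustrates the three layers.
service_models = {
    "IaaS": ["EC2 (virtual machines)", "EBS (block storage)", "VPC (networking)"],
    "PaaS": ["RDS (managed databases)", "SQS (managed queues)", "Elastic Beanstalk"],
    "SaaS": ["CloudWatch (monitoring)", "Trusted Advisor (management recommendations)"],
}

for layer, services in service_models.items():
    print(f"{layer}: {', '.join(services)}")
```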
We would suggest that you go through the National Institute of Standards and Technology (NIST) Definition of Cloud Computing at https://nvlpubs.nist.gov/nistpubs/legacy/sp/nistspecialpublication800-145.pdf and the NIST Cloud Computing Standards Roadmap at https://www.nist.gov/sites/default/files/documents/itl/cloud/NIST_SP-500-291_Version-2_2013_June18_FINAL.pdf. This book covers a fair number of services of the PaaS and SaaS types. While building an application, relying on these services makes a big difference in comparison to the more traditional environment outside of the cloud. Another key element to success when deploying or migrating to a new infrastructure is adopting a DevOps mindset.