AWS Certified DevOps Engineer - Professional Certification and Beyond

By: Adam Book

Overview of this book

The AWS Certified DevOps Engineer certification is one of the highest AWS credentials, widely recognized in the cloud computing and software development industries. This book is an extensive guide to help you strengthen your DevOps skills as you work with your AWS workloads on a day-to-day basis. You'll begin by learning how to create and deploy a workload using the AWS code suite of tools, and then move on to adding monitoring and fault tolerance to your workload. You'll explore enterprise scenarios that'll help you understand various AWS tools and services. This book is packed with detailed explanations of essential concepts to help you get to grips with the domains needed to pass the DevOps professional exam. As you advance, you'll delve into AWS with the help of hands-on examples and practice questions to gain a holistic understanding of the services covered in the AWS DevOps professional exam. Throughout the book, you'll find real-world scenarios that you can easily incorporate into your daily activities when working with AWS, making you a valuable asset for any organization. By the end of this AWS certification book, you'll have gained the knowledge needed to pass the AWS Certified DevOps Engineer exam and be able to implement different techniques for delivering each service in real-world scenarios.
Table of Contents (31 chapters)

  • Section 1: Establishing the Fundamentals
  • Section 2: Developing, Deploying, and Using Infrastructure as Code
  • Section 3: Monitoring and Logging Your Environment and Workloads
  • Section 4: Enabling Highly Available Workloads, Fault Tolerance, and Implementing Standards and Policies
  • Section 5: Exam Tips and Tricks

Performance efficiency

If you and your architectural design team are coming from a data center infrastructure, where a provisioning process can take weeks or months to get the system you need, then the quickness and availability of cloud resources is certainly a breath of fresh air. You still need to understand how to select the correct instance type or compute option (that is, server-based, containerized, or function-based compute) based on the workload's requirements.

Once you have made an initial selection, a benchmarking process should be undertaken so that you can see whether you are utilizing all the CPU and memory resources you have allocated, as well as to confirm that the workload can handle the load it is required to serve. As you select your instance types, don't forget to factor in costs, and make a note of the cost differences that could either save you money or cost you more as you perform your baseline testing.

AWS provides native tools to create, deploy, and monitor benchmark tests, as shown in the following diagram:

Figure 1.5 – Baseline testing with AWS tooling

Using the tools provided by AWS, you can quickly spin up an environment for right-sizing, benchmarking, and load testing the initial value that you chose for your compute instance. You can also easily swap in other instance types to see how performant they are under the same test. Using CloudFormation to build the infrastructure, you can run the tests with CodeBuild in a quick and repeatable fashion, all while gathering the metrics with CloudWatch so that you can compare the results and make sure that you have made the best decision, with data to back up that decision. We will go into much more detail on how to use CodeBuild in Chapter 7, Using CloudFormation Templates to Deploy Workloads.
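As a rough illustration of the comparison step, here is a minimal sketch using boto3 that pulls CloudWatch CPU utilization for two candidate instance types after a load test has run, so the results can be compared side by side. The instance IDs are placeholders for whatever your CloudFormation stack actually created:

```python
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")

# Placeholder instance IDs for the two candidates under test
candidates = {"m5.large": "i-0123456789abcdef0", "c5.large": "i-0fedcba9876543210"}

# The window in which the load test ran (assumed here to be the last hour)
end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

for instance_type, instance_id in candidates.items():
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=start,
        EndTime=end,
        Period=300,
        Statistics=["Average", "Maximum"],
    )
    datapoints = stats["Datapoints"]
    if datapoints:
        avg = sum(d["Average"] for d in datapoints) / len(datapoints)
        peak = max(d["Maximum"] for d in datapoints)
        print(f"{instance_type}: avg CPU {avg:.1f}%, peak {peak:.1f}%")
```

The same pattern works for memory or custom application metrics, as long as the CloudWatch agent or your test harness publishes them.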

The performance efficiency pillar includes five design principles to help you maintain efficient workloads in the cloud:

  • Making advanced technologies easier for your team to implement
  • Being able to go global in minutes
  • Using serverless architectures
  • Allowing your teams to experiment
  • Using technology that aligns with your workload's goals

Making advanced technologies easier for your team to implement

Using advanced technologies has become much simpler in the cloud with the advent of managed services. You no longer need full-time DBAs on staff who specialize in each different flavor of database just to test whether Postgres or MariaDB will perform better for your workload. In the same way, if you need replication for that database, you simply check a box, and you instantly have a Highly Available setup.
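To make the "check a box" point concrete, here is a minimal sketch with boto3; the identifiers and password are placeholders, and in a real workload the credentials would come from something like Secrets Manager:

```python
import boto3

rds = boto3.client("rds")

# High availability on RDS is a single parameter rather than a hand-built
# replication setup: MultiAZ=True provisions a standby replica for you.
rds.create_db_instance(
    DBInstanceIdentifier="demo-postgres",        # placeholder name
    Engine="postgres",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=20,
    MasterUsername="dbadmin",
    MasterUserPassword="change-me-please",       # use Secrets Manager in practice
    MultiAZ=True,                                # the "checkbox" for a Highly Available setup
)
```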

Time that would otherwise be spent poring over documentation, trying to figure out how to install and configure particular systems, is now spent on the things that matter most to your customers and your business.

Being able to go global in minutes

Depending on the application or service you are running, your customers may be centralized in one regional area, or they may be spread out globally. Once you have converted your infrastructure into code, there are built-in capabilities, either through constructs in CloudFormation templates or the CDK, that allow you to use regional parameters to quickly reuse a previously built pattern or architecture and deploy it to a new AWS Region.
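As a simple sketch of that idea, assuming a region-agnostic template named template.yaml and a hypothetical stack name, the same CloudFormation template can be pushed to several Regions in a loop:

```python
import boto3

# Read the (assumed) region-agnostic template once
with open("template.yaml") as f:
    template_body = f.read()

# Deploy the same stack to each target Region
for region in ["us-east-1", "eu-west-1", "ap-southeast-2"]:
    cfn = boto3.client("cloudformation", region_name=region)
    cfn.create_stack(
        StackName="my-workload",
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],  # only needed if the template creates IAM resources
    )
    print(f"Stack creation started in {region}")
```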

Even without deploying your full set of applications and architecture to multiple regions, there are still capabilities that allow you to serve a global audience using the Content Delivery Network (CDN) known as CloudFront. Here, you can create a secure global presence while keeping the application or content deployed in the primary region, which acts as the origin.
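The following is a minimal sketch of that pattern with boto3; the origin bucket name is a placeholder, and a production distribution would normally use an origin access control and a managed cache policy rather than the legacy settings shown here:

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # must be unique per request
        "Comment": "Global edge distribution fronting the primary-region origin",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "primary-origin",
                    "DomainName": "my-app-content.s3.amazonaws.com",  # placeholder origin
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "primary-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "ForwardedValues": {
                "QueryString": False,
                "Cookies": {"Forward": "none"},
            },
            "MinTTL": 0,
        },
    }
)

# The distribution's domain name is what you would point your DNS record at
print(response["Distribution"]["DomainName"])
```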

Using serverless architectures

First and foremost, moving to serverless architectures means servers are off your to-do list. This means no more configuring servers with packages at startup, no more right-sizing servers, and no more patching servers.

Serverless architectures also mean that you have decoupled your application. Whether you are using functions, events, or microservices, each of these should be doing one specific task. With each component doing only its distinct task, you can fine-tune memory and CPU at the task level, as well as scale out at the level of a particular task.
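As a small illustration of task-level tuning (the function names and memory values here are hypothetical), each Lambda function can be given its own memory setting, which in turn controls the CPU share it receives:

```python
import boto3

lambda_client = boto3.client("lambda")

# Each task gets its own memory allocation: the CPU-heavy task is scaled up
# without touching the lightweight one.
memory_by_task = {
    "resize-image": 1024,        # hypothetical CPU-heavy task
    "record-audit-event": 128,   # hypothetical lightweight task
}

for function_name, memory_mb in memory_by_task.items():
    lambda_client.update_function_configuration(
        FunctionName=function_name,
        MemorySize=memory_mb,
    )
```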

This is not the best option for every workload, but don't allow a workload to be disqualified just because it would need a little refactoring. When an application can be moved to a serverless architecture, it can make life easier and the application itself more efficient, and there are usually cost savings to reap as a result, especially in the long run.

Allowing your teams to experiment

Once you move to the cloud, you can quickly and constantly refactor your workload to improve it for both performance and cost. If you have built your Infrastructure as Code, creating a new temporary environment just for testing can be a quick and cost-efficient way to try new modular pieces of your application, without having to worry about disrupting any customers or other parts of the organization.
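A minimal sketch of that workflow, reusing the hypothetical template.yaml from earlier: stand up a temporary copy of the stack, run the experiment, and then delete the stack so it stops costing money:

```python
import boto3

cfn = boto3.client("cloudformation")

with open("template.yaml") as f:
    template_body = f.read()

# Create a throwaway copy of the environment purely for the experiment
cfn.create_stack(StackName="my-workload-experiment", TemplateBody=template_body)
cfn.get_waiter("stack_create_complete").wait(StackName="my-workload-experiment")

# ... run the experiment or load tests against the temporary environment ...

# Tear it down once the results have been collected
cfn.delete_stack(StackName="my-workload-experiment")
```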

Many of the experiments may not work, but that is the nature of experimentation. Business is extremely competitive in this day and age, and finding an item that does work and makes your service faster, cheaper, and better can be a real game changer.

Using technology that aligns with your workload's goals

List your business goals and let the product owner help drive some of the product and service selections based on those goals. If a development team has previous familiarity with certain technologies, they may be inclined to lean toward the technologies they already feel confident using.

On the other hand, there are other teams that strive to use the latest and greatest technologies – but not necessarily because the technology solves a problem that has been identified. Rather, they are interested in constantly resume-building and making sure that they have both exposure to and experience with cutting-edge services as soon as they become available.