Optimizing Your Modernization Journey with AWS

By: Mridula Grandhi

Overview of this book

AWS cloud technologies help businesses scale and innovate; however, adopting modern architecture and applications can be a real challenge. This book is a comprehensive guide that ensures your switch to AWS services is smooth and hitch-free, enabling you to make optimal decisions and get the best ROI from AWS cloud adoption. Beginning with the nuances of cloud transformation on AWS, you’ll be able to plan and implement the migration steps. The book will facilitate your system modernization journey by getting you acquainted with various technical domains, namely applications, databases, big data, analytics, networking, and security. Once you’ve learned about the different operations, budgeting, and management best practices, such as the 6 Rs of migration approaches and the AWS Well-Architected Framework, you’ll be able to achieve operational excellence in cloud adoption. You’ll also learn how to deploy some of the important AWS tools and services, illustrated with real-life case studies and use cases. By the end of this book, you’ll be able to successfully implement cloud migration and modernization on AWS and make decisions that best suit your organization.
Table of Contents (20 chapters)

Part 1: Migrating to the Cloud
Part 2: Cloud Modernization – Application, Data, Analytics, and IT
Part 3: Security and Networking Transformation
Part 4: Cloud Economics, Compliance, and Governance

Key characteristics of cloud computing

The National Institute of Standards and Technology (NIST) defines five essential characteristics of cloud computing. We will cover them here under the following six headings:

  • On-demand self-service
  • Wide range of network access
  • Multi-tenant model and resource pooling
  • Rapid elasticity
  • PAYG model
  • Measured service and reporting

We will discuss each of these in the following subsections.

On-demand self-service

In traditional enterprise IT settings, companies built the infrastructure required to run their applications locally, that is, on-premises. This meant that enterprises had to set up server hardware, software licenses, and integration capabilities, and staff IT employees to support and manage these infrastructure components. Because the software resides within the organization’s premises, enterprises are responsible for the security of their data and for vulnerability management, which entails training IT staff to be aware of security vulnerabilities and installing updates regularly and promptly.

Cloud computing is different from traditional IT hosting services since consumers don’t have to own the required infrastructure to run their applications. With the cloud, a third-party provider hosts and maintains all of this for you. Provisioning, configuring, and managing the infrastructure is automated in the cloud, which streamlines activities and makes it possible to act on capacity and performance decisions in real time.

Cloud automation

Cloud automation is the process of automating tasks such as discovering, provisioning, configuring, scaling, deploying, monitoring, and backing up every component of the cloud infrastructure in real time. It streamlines these tasks without human interaction and caters to the changing needs of your business.

The on-demand aspect makes it possible for consumers to benefit from cloud resources as and when required. The cloud provider caters to the demand in real time, enabling consumers to decide when and how much to subscribe to these resources. Consumers have full control over this, helping them meet their evolving needs.

The self-service aspect allows customers to procure and access the services they want instantaneously. Cloud providers facilitate this via simple user portals that make it quick and easy. For example, a cloud consumer can request a new virtual machine and expect it to be provisioned and running within a few minutes. On-premises procurement of the same typically takes 90-120 days and also requires accurate forecasting to purchase the required RAM specifications and associated hardware for a given business use case.
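
The self-service flow described above can be sketched as a minimal, hypothetical portal. The class name, instance type string, and ID format below are illustrative only, not any provider's real API:

```python
import time
import uuid

class SelfServicePortal:
    """Hypothetical self-service portal: consumers request resources
    on demand and get them back in seconds, not after a 90-120 day
    procurement cycle."""

    def __init__(self):
        self._instances = {}

    def request_instance(self, instance_type: str) -> dict:
        # Provisioning is automated: no purchase orders, no hardware
        # forecasting; the provider draws on its pooled capacity.
        instance = {
            "id": f"i-{uuid.uuid4().hex[:8]}",
            "type": instance_type,
            "state": "running",
            "requested_at": time.time(),
        }
        self._instances[instance["id"]] = instance
        return instance

    def release_instance(self, instance_id: str) -> None:
        # Releasing is just as fast: capacity returns to the pool.
        self._instances.pop(instance_id, None)

portal = SelfServicePortal()
vm = portal.request_instance("small")
print(vm["state"])  # the instance is usable immediately
```

The key contrast with on-premises procurement is that both acquiring and releasing capacity are single, automated calls rather than multi-month projects.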

Wide range of network access

Global reach is an essential tenet that makes cloud computing accessible and convenient. Consumers can access the cloud resources they need from anywhere and from any device over the network, through standard mechanisms such as authentication and authorization. The availability of such resources from thin or thick client platforms such as tablets, PCs, smartphones, netbooks, personal digital assistants, laptops, and more helps the cloud reach every possible end user.

Multi-tenant model and resource pooling

Multi-tenancy is one of the foundational aspects that makes cloud services practical. To understand multi-tenancy, think of the safe-deposit boxes located in banks, which are used to store your valuable possessions and documents. These assets are kept in isolated and secure vaults, even though they’re stored in the same location. Bank customers don’t have access to other customers’ boxes and neither know about nor interact with one another. Customers rent these boxes for as long as they need them and use security mechanisms to identify themselves and access their boxes. In cloud computing, the term multi-tenancy has a broader meaning, where a single instance of a piece of software runs on a server and serves multiple tenants.

Multi-tenancy

Multi-tenancy is a software architecture in which one or more instances of a piece of software are created and executed on a server that serves multiple, distinct tenants. It also refers to shared hosting, where server resources are divided and leveraged by end users.

The following diagram shows single-tenant versus multi-tenant models, both of which can be used to design software applications:

Figure 1.3 – Single-tenancy versus multi-tenancy

As an example of a multi-tenancy model, imagine an end user uploading content to social media application(s) from multiple devices.
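
The safe-deposit-box analogy can be sketched in code: one shared store (the vault) serves many tenants, and every query is scoped so that tenants never see each other's data. The class and tenant names below are illustrative:

```python
class MultiTenantStore:
    """One software instance serving multiple tenants. All rows live
    in a single shared table, but each row carries a tenant_id, like
    individual safe-deposit boxes sharing one bank vault."""

    def __init__(self):
        self._rows = []  # one shared table for every tenant

    def put(self, tenant_id: str, record: dict) -> None:
        self._rows.append({"tenant_id": tenant_id, **record})

    def query(self, tenant_id: str) -> list:
        # Every read is scoped to the caller's tenant, so tenants can
        # neither see nor detect each other's data.
        return [r for r in self._rows if r["tenant_id"] == tenant_id]

store = MultiTenantStore()
store.put("acme", {"doc": "invoice-1"})
store.put("globex", {"doc": "contract-7"})
print(len(store.query("acme")))  # 1 -- acme sees only its own data
```

Real multi-tenant systems enforce this isolation at many layers (database, network, identity), but the principle is the same: shared infrastructure, strictly partitioned access.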

The multi-tenant model goes hand in hand with resource pooling. The intention behind resource pooling is to give consumers a seemingly infinite pool of resources to choose from on demand. This creates a sense of immediate availability for those resources, without consumers being bound by physical or virtual dependencies.

Resource pooling

Resource pooling is a strategy where cloud-based applications dynamically provision, scale, and control resource adjustments at the meta level.

Resource pooling can be used for services that support data, storage, compute, and many other processing technologies, thereby facilitating dynamic provisioning and scaling. This enables on-demand self-service, where consumers can use these services and change their level of usage as their needs evolve. Coupled with automation, resource pooling replaces traditional, labor-intensive provisioning mechanisms with strategies that rely on increasingly powerful virtual networks and data-handling technologies. This lets cloud providers abstract away resource administration, thereby enhancing the consumer experience of leveraging cloud resources.
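
A minimal sketch of the pooling idea: the provider holds one shared capacity pool, carves allocations out of it for whichever consumer asks, and returns released capacity to the pool. The pool size, consumer names, and vCPU units are illustrative assumptions:

```python
class ResourcePool:
    """Sketch of resource pooling: shared provider capacity is
    dynamically allocated to consumers and reclaimed on release."""

    def __init__(self, total_vcpus: int):
        self.free = total_vcpus
        self.allocations = {}

    def allocate(self, consumer: str, vcpus: int) -> bool:
        if vcpus > self.free:
            return False  # demand exceeds the pooled capacity
        self.free -= vcpus
        self.allocations[consumer] = self.allocations.get(consumer, 0) + vcpus
        return True

    def release(self, consumer: str) -> None:
        # Released capacity goes back into the pool for others.
        self.free += self.allocations.pop(consumer, 0)

pool = ResourcePool(total_vcpus=128)
pool.allocate("team-a", 16)
pool.allocate("team-b", 32)
print(pool.free)  # 80 -- the rest of the pool stays available
```

To a single consumer the pool looks effectively infinite; in reality the provider multiplexes many consumers' fluctuating demands over the same shared capacity.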

Rapid elasticity

Elasticity is one of the most important factors, and experts cite it as a major selling point for businesses migrating away from their local data centers. End users can take advantage of seamless provisioning because of this capability in the cloud.

What is cloud elasticity? What are the benefits?

Before we answer these questions, let’s take a look at the definition of elasticity.

Elasticity in the cloud refers to the end user’s ability to acquire or release resources automatically to serve the varying needs of a cloud-based application while remaining operational.

Another criterion that is used in the cloud is scalability. Let’s look at what it is and how it differs from cloud elasticity.

Scalability in the cloud refers to the ability to handle changing on-demand needs by adding or removing resources within the infrastructure’s boundaries.

Although the fundamental theme of these two concepts is adaptability, both of these differ in terms of their functions.

Scalability versus Elasticity

Scalability is a strategic resource allocation operation, whereas elasticity is a tactical resource allocation operation. Elasticity is a fundamental characteristic of cloud computing and involves taking advantage of the scalable nature of a specific system.

The inherent nature of dynamically adapting capacity helps businesses handle heavy workloads, as well as ensure that their operations go uninterrupted.

For example, take an online retail shopping website that is experiencing a sudden burst of popularity and whose volume of transactions is peaking. To handle the workload, the website can leverage the cloud’s rapid elasticity by adding resources to meet the transaction spikes. When the workloads no longer have such peaks to meet, the resources can be taken down just as quickly as they were added. You only pay for the services that you use at any given point.

Automatically commissioning and decommissioning resources is inherent to cloud elasticity and can be used to meet businesses’ scale-out and scale-in demands, thereby helping them manage and maintain their operating expenditure (OpEx) costs without having to put in any upfront capital expenditure (CapEx) costs or being locked into long-term contracts.
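
The scale-out/scale-in decision behind rapid elasticity can be sketched as a simple target-tracking rule: grow or shrink the fleet so that average utilization stays near a target. The thresholds, target, and fleet bounds below are illustrative, not any provider's actual policy:

```python
import math

def desired_capacity(current: int, avg_cpu_pct: float,
                     target_pct: float = 50.0,
                     min_size: int = 1, max_size: int = 20) -> int:
    """Target-tracking sketch: return the fleet size that would bring
    average CPU back to roughly target_pct, clamped to fleet bounds."""
    desired = math.ceil(current * avg_cpu_pct / target_pct)
    return max(min_size, min(max_size, desired))

# A transaction spike pushes 4 instances to 90% CPU: scale out to 8.
print(desired_capacity(4, 90.0))
# The spike passes and 8 instances idle at 20% CPU: scale in to 4.
print(desired_capacity(8, 20.0))
```

The symmetry is the point: the same rule that commissions resources during a spike decommissions them just as quickly afterward, so you pay only for what the current load requires.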

PAYG model

The pay-per-use or Pay As You Go (PAYG) pricing model is a major highlight, geared toward an economical model for organizations and end users. The per-second billing plans provided by cloud providers make it easy for businesses to witness a major shift from CapEx to OpEx. This frees businesses from worrying about the upfront capital they need to spend on on-premises infrastructure and the capacity planning required to meet ongoing demand. The traditional self-provisioning process is often prone to extreme inefficiency and waste due to its complex supply chain model, which requires close coordination between decision-makers and stakeholders.

However, cloud-based architectures and their inherent design models allow you to scale up your applications on the cloud during peak traffic and scale back down during periods where they’re not needed as much, without having to worry about annual contracts or long-term license termination fees.

What are CapEx and OpEx?

CapEx refers to the funds a business spends to acquire and upgrade its fixed assets. This includes expenditure toward setting up the technology, the hardware and software required to run its services, and more.

OpEx refers to the expenses a business incurs through the course of its normal operations. Such expenses include property maintenance, inventory costs, funds allocated for research and development, and more.

Businesses incur heavy OpEx when it comes to service and software procurement and management, tasks that are often expensive and inefficient. The traditional model also often leads to complex payment structures and makes it difficult for businesses to vary their usage. With the PAYG model, you pay resource charges only for the services you use, rather than for an entire infrastructure. Once you stop using a service, there is typically no termination fee, and billing for that service stops immediately.

Let’s look at an example of how the PAYG model is applied to cloud resources. A user provisioning a cloud compute instance is generally billed for the time the instance is used. You can add or remove compute capacity based on your application’s demands and, depending on the cloud provider you choose, pay by the second for only what you used.
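
Per-second PAYG billing boils down to simple arithmetic: seconds used times the hourly rate divided by 3,600. The hourly rate below is a made-up illustrative figure, not any provider's actual price:

```python
def payg_charge(seconds_used: int, rate_per_hour: float) -> float:
    """Per-second PAYG billing sketch: charge only for the seconds a
    resource actually ran. The rate is a hypothetical example."""
    return round(seconds_used * rate_per_hour / 3600, 6)

# An instance that ran for 10 minutes at a hypothetical $0.0416/hour
# costs a fraction of a cent; a full hour costs the full hourly rate.
print(payg_charge(600, 0.0416))
print(payg_charge(3600, 0.0416))
```

Contrast this with CapEx: the bill tracks actual usage second by second, and stopping the instance stops the charges immediately.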

Measured service and reporting

The ability to measure cloud service usage is an important characteristic to ensure optimum usage and resource spending. This characteristic is key for both cloud providers and end users as they can measure and report on what services have been used and their purpose.

NIST states the following:

“Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (for example, storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.”

The cloud provider’s billing component depends mainly on the capability to measure customers’ usage and calculate billing invoices accordingly. Cloud providers can also use this data to understand overall consumption and potentially improve their infrastructure’s and services’ processing speeds and bandwidth.

Businesses get the visibility and transparency they need to track their usage and costs across large enterprises, which is limited in traditional IT environments. This is especially helpful for usage accounting, reporting, chargebacks, and monitoring by key IT stakeholders. In addition to the billing aspect, rapid elasticity and resource pooling feed into this characteristic, where end users can leverage monitoring and trigger automation to scale their resources.
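
A chargeback report built on measured usage can be sketched as follows: metering events record who used which service and how much, and pricing them gives each internal team a transparent bill. The service names, team names, and rates are illustrative assumptions:

```python
from collections import defaultdict

def chargeback(usage_events, rates):
    """Metering sketch: aggregate measured usage per team and price
    it, giving the provider and the consumer the same transparent
    view of consumption. Names and rates are hypothetical."""
    totals = defaultdict(float)
    for team, service, units in usage_events:
        totals[team] += units * rates[service]
    return dict(totals)

# Hypothetical metering events: (team, service, units consumed)
events = [
    ("data-eng", "storage_gb_month", 500),
    ("data-eng", "compute_hours", 120),
    ("web", "compute_hours", 300),
]
rates = {"storage_gb_month": 0.023, "compute_hours": 0.05}
print(chargeback(events, rates))
```

This is the transparency NIST describes: the same metered records drive the provider's invoice, the consumer's cost reporting, and the automation that scales resources.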

In this section, we learned about the essential characteristics of cloud computing: on-demand self-service, elasticity, resource pooling, the PAYG model, measured services, CapEx/OpEx, and reporting abilities. In the next section, we’ll look at what makes businesses inclined to move to the cloud.