
Simplifying Hybrid Cloud Adoption with AWS

By: Frankie Costa Negro

Overview of this book

The hybrid edge specialty is often misunderstood because it began with an on-premises-focused view encompassing everything not running inside the traditional data center. If you too have workloads that need to live on premises and need a solution to bridge the gap between both worlds, this book will show you how AWS Outposts allows workloads to leverage the benefits of the cloud running on top of AWS technology. In this book, you’ll learn what the Edge space is, the capabilities to look for when selecting a solution to operate in this realm, and how AWS Outposts delivers. The use cases for Outposts are thoroughly explained and the physical characteristics are detailed alongside the service logical constructs and facility requirements. You’ll gain a comprehensive understanding of the sales process—from order placement to rack delivery to your location. As you advance, you’ll explore how AWS Outposts works in real life with step-by-step examples using AWS CLI and AWS Console before concluding your journey with an extensive overview of security and business continuity for maximizing the value delivered by the product. By the end of this book, you’ll be able to create compelling hybrid architectures, solve complex use cases for hybrid scenarios, and get ready for your way forward with the help of expert guidance.
Table of Contents (14 chapters)
Part 1: Understanding AWS Outposts – What It Is, Its Components, and How It Works
Part 2: Security, Monitoring, and Maintenance
Part 3: Maintenance, Architecture References, and Additional Information

Use cases for AWS Outposts

AWS has a method for product development called working backward. This is their approach to innovation and the stepping stones to creating a new solution. There is an excellent talk recorded during re:Invent 2020 about this mechanism, available at https://www.youtube.com/watch?v=aFdpBqmDpzM.

One step of this mechanism involves asking five questions about the customer, as follows:

  • Who is the customer?
  • What is the customer’s problem or opportunity?
  • What is the most important customer benefit?
  • How do you know what your customer needs or wants?
  • What does the experience look like?

The process is composed of several steps to assess the opportunity, propose solutions, validate with stakeholders, and finally, build the product roadmap. Permeating this entire process is the concept of a use case. Simply put, a use case exercises a hypothetical scenario to determine how a user interacting with the product can achieve a specific goal.

Use cases are so important because they are the North Star guiding product development. A product must be tailored to address the use case, meaning it will validate the scenario and effectively achieve the pursued goal. A very complex and elaborate product created without a clear purpose can be a display of technical prowess and craftsmanship, but it also carries the risk of failing because of the inability to describe what it is good for.

For those positioning AWS Outposts as a solution, this is one of the significant challenges. The first callout will undoubtedly be pictures of the products, either the rack or the server, shown with no detailed explanation beforehand. This unequivocally triggers, in the minds of IT professionals peeking at AWS Outposts for the first time, the thought that "AWS is now selling hardware!" This is not the case, and it is a very common pitfall.

This is the first opening to state that AWS is not selling hardware. Start by saying the hardware does not belong to the customer – owning it is not an option. Their minds will then switch to thinking it is a hardware rental or leasing contract. This is the time to play the fully managed service card: AWS takes care of absolutely everything, and the customer does not touch the hardware.

This is the part where customers switch to thinking about the legacy hosting model, believing that AWS is now supplying Hardware as a Service (HaaS). Certainly, there is a taste of this model in AWS Outposts, but the trump card to be played is the simple statement that you can't just run your platform of choice on it – you only run AWS services. There are no commercial hypervisors and no bare-metal servers on which to install your preferred operating system. It runs the AWS platform.

At this point, you have paved the way to go full throttle into what AWS Outposts aims for at its core: taking the AWS Outposts route effectively means you have decided to move to the cloud with AWS. You have opened the door for AWS to establish an embassy in your territory and work in close cooperation with you, and the crucial reason for this decision should be that you are already looking forward to using AWS services.

If, in the long term, AWS is not in your equation and you are looking at AWS Outposts only as a stopgap until the next business cycle – when you will re-evaluate cloud providers, look at their similar offerings, and build a price argument to justify migrating everything running on Outposts to another solution – then you are effectively treating IT infrastructure as an item in a reverse auction: the cloud provider with the lowest bid wins.

As natural as this may sound – it is ultimately market forces in action – it also suggests that, from the point of view of these IT departments, cloud providers are all the same, with no real differences, and so can be treated as commodities. That view could not be more naïve: choosing a cloud provider is a decision requiring thorough consideration and an extensive amount of work assessing and evaluating services and capabilities, combined with a long-term view.

In this respect, AWS does an excellent job of communicating the value proposition of Outposts and helping customers make an informed decision. AWS believes in working backward from the customer's requirements and wants to be absolutely sure it understands who its customers are, what their problem is, and what the benefit is for them. However, it goes both ways: AWS also wants to make sure the customer thoroughly understands what selecting AWS Outposts as their answer to the hybrid challenge means.

To outline the use cases for Outposts, let's break them down into customer problems and opportunities, following Amazon's customer-obsession method and the second question listed earlier.

Customer problems

These are the reasons and forces preventing or invalidating the cloud as an option for running the workload. In this category, we can cite the following:

  • Latency-sensitive applications
  • Local data processing
  • Data residency requirements

Let’s examine each one in detail.

Latency-sensitive applications

The term latency is defined as the time that elapses between a user request and the completion of that request. When a user, application, or system requests information from another system, data packets are sent over the network to a server or system for processing. Once it reaches the destination, it is processed and a response is formed and transmitted back, completing the reply. This process happens many times over, even for a simple operation such as loading a web page on a browser.

There might be several network components involved in completing this process, and each one adds a tiny delay while forwarding the data packet. Depending on the number of simultaneous transmissions and user requests, traffic mounts up to the point where these delays become perceptible to the user in the form of wait times. This effect is even worse when the data packets need to traverse long geographical distances.

For the end user requesting information from a website, this translates into a long wait until the web page finally loads. Some applications, however, simply rely on low-latency networks to work predictably and smoothly – for them, this characteristic becomes a requirement. Some may even require ultra-low latency (measured in nanoseconds, whereas low latency is measured in milliseconds). Other factors to take into consideration are latency jitter (the variation in latency) and network congestion.
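To make these terms concrete, here is a minimal, self-contained Python sketch that computes average latency and jitter from a series of round-trip time samples. The samples are simulated (not measured against a real network), and approximating jitter as the standard deviation of latency is one common convention among several:

```python
import random
import statistics

def latency_stats(samples_ms):
    """Return mean latency and jitter for round-trip times in milliseconds.

    Jitter is approximated here as the population standard deviation of
    the samples, i.e., how much latency varies around its average.
    """
    mean = statistics.mean(samples_ms)
    jitter = statistics.pstdev(samples_ms)
    return mean, jitter

# Simulated round-trip times for two links: one nearby, one geographically
# distant. The ranges are illustrative assumptions, not measurements.
random.seed(42)
local_link = [random.uniform(1, 3) for _ in range(100)]       # ~1-3 ms
distant_link = [random.uniform(80, 140) for _ in range(100)]  # ~80-140 ms

local_mean, local_jitter = latency_stats(local_link)
distant_mean, distant_jitter = latency_stats(distant_link)
print(f"local:   mean={local_mean:.1f} ms, jitter={local_jitter:.1f} ms")
print(f"distant: mean={distant_mean:.1f} ms, jitter={distant_jitter:.1f} ms")
```

The distant link shows both a higher average latency and a wider spread, which is exactly why latency-sensitive applications cannot simply be placed far from their users.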

Good examples of applications and use cases that require low latency can be found across various industries: life sciences and healthcare, manufacturing automation, and media and entertainment. Use cases encompass content creation, real-time gaming, financial trading platforms, electronic design automation, and machine learning inference at the edge. Let’s cite a few:

  • Healthcare: Surgical devices, Computerized Tomography (CT) scanners, and Linear Accelerators (LINACs)
  • Life sciences: Molecular modeling applications such as GROMACS (https://www.gromacs.org), and 3D analysis software for life sciences and biomedical data
  • Manufacturing: Medical device manufacturing, pharmaceutical and over-the-counter (OTC) manufacturing, integrations with IoT, a digital twin strategy (https://aws.amazon.com/iot-twinmaker/faqs/), Supervisory Control and Data Acquisition (SCADA), Distributed Control Systems (DCSs), Manufacturing Execution Systems (MESs), and engineering workstations
  • Media and entertainment: Content creation and media distribution (streaming)
  • Financial services: Next-generation trading and exchange platforms

Local data processing

Some use cases may end up generating large datasets that need to be processed locally. Because of their size, migrating them to the cloud may be unfeasible: the back-and-forth of pre- and post-processing data between the cloud and the site can generate significant egress charges and can also lead to packet loss, resulting in data integrity problems.

Moreover, the time it would take may be unrealistic for the use case and would effectively defeat its purpose. Additionally, customer requirements may dictate processing data on premises, with the ability to easily move data to the cloud for long-term archiving, or workloads may need to remain available during a network outage.

The same types of industries mentioned before have use cases with this requirement:

  • Healthcare: Remote surgery robots, computer vision (for medical image analysis), Picture Archiving and Communication Systems (PACS), Vendor Neutral Archiving (VNA) solutions, and taking emergency actions on patients carrying wearable devices capable of making decisions using inference at the edge
  • Life sciences: Cryo-electron microscopes, genomic sequencers, molecular modeling with 3D visualization (requires GPUs), and Research and Development (R&D)
  • Manufacturing: Smart manufacturing (https://aws.amazon.com/manufacturing/smart-factory/), site optimization, and predictive maintenance

Data residency requirements

Here, let’s briefly examine some of the terminology involved as well. Data residency is the requirement that all customer content must be processed and stored in an IT system that remains within a specific locality’s borders. Data sovereignty is the control and governance over who can and cannot have legal access to data, its location, and its usage.

Various forces are driving this requirement and they are present in many organizations across the public and private sectors. Data residency normally comes from the following:

  • The obligation to meet legal and regulatory demands, including data locality laws. This requirement can affect, for example, financial services, healthcare, oil and gas, and other highly regulated industries having to store all user and transaction data within the country’s borders, or public entities may be subject to a requirement that data produced by local and national government needs to be stored and processed in that country.
  • The organization’s business and operating model, where the majority of activities take place within a certain country’s geography. In this scenario, the company falls under the financial rules of a national entity, which may require storing or processing some or all of its data within that nation state.
  • There may be contractual requirements to store data in a particular country as well. Businesses may have to agree to keep the data of specific customers in a given jurisdiction to meet the data residency requirements of those clients.
  • Lastly, it could be mandated for business or public sector entities that certain data must be stored or processed in a specified location due to corporate policy. This mandate could be partially or fully derived from one of the previous drivers.

As some use cases for storing sensitive data on AWS Outposts, we can cite patient records, medical device intellectual property (IP) – as in copyrights, trademarks, and patents – government records, genomic data, and proprietary manufacturing information.

Customer opportunities

These are potential ways to use a product that can propel, expedite, or catalyze the cloud as an option for running a workload. They can help businesses strengthen their arguments for building a hybrid cloud by adding more strategic use cases. Let us look at some of them:

  • Application migration to the cloud
  • Application modernization
  • Data center extension
  • Edge computing

Let’s examine each one in detail.

Application migration to the cloud

This may not be immediately perceived as a potential use case, but it turns out to be a powerful one. Large migrations from on-premises data centers to AWS may involve a myriad of applications and can take several years. The risk involved is tremendous if the environments are significantly different, not to mention the operational burden to use multiple management tools, APIs, and interfaces.

AWS Outposts can significantly mitigate, if not eliminate, this problem. Because it is a portion of AWS, it provides a consistent operational environment across the hybrid cloud while applications are being migrated, ensuring business continuity. Your workloads will not need tweaks and adjustments – if they run on Outposts, they will run in the Region just as well. The only point of attention is the strength and sensitivity of their ties to on-premises services.

This is achieved by employing a strategy called two-step migration. Instead of migrating applications and critical dependencies all at once, AWS Outposts offers a safe haven where you can migrate in steps, keeping close contact with the on-premises applications. Customers can move individual components into Outposts one at a time and, once they are all together, easily move them to the Region.

Still in the migration realm, AWS offers a tool to expedite migrations called CloudEndure (https://www.cloudendure.com/). While it is also a disaster recovery tool, CloudEndure allows all migration paths: from on-premises servers (whether physical or virtual) to AWS Outposts, from AWS Regions to Outposts, from other clouds to Outposts, and even from Outposts to Outposts. Recently, AWS launched a new service for migrations called AWS Application Migration Service (https://aws.amazon.com/application-migration-service/), the next generation of CloudEndure Migration; CloudEndure will remain available until the end of 2022.

Moreover, there is an Outposts flavor that runs VMware Cloud on AWS. VMware customers can easily and seamlessly interoperate and migrate their existing VMware vSphere workloads while leveraging their investments in the VMware platform.

Application modernization

Modernizing while you are still on-premises may be the best approach for some workloads that are tightly coupled to the existing infrastructure. There are many opportunities in this area, such as moving legacy monolithic workloads to containers, modernizing mainframe applications, and enabling CI/CD and a DevSecOps approach. AWS Outposts offers the ability to run Amazon ECS or Amazon EKS on-premises to power this transformation.

Modernization with AWS Outposts can be the first step toward the bold objective of re-invention. At this stage, customers have AWS Lambda at their disposal and can explore serverless containers with AWS Fargate for both Amazon ECS and Amazon EKS.

Mainframe modernization stands out from the crowd because of the powerful driving forces behind it. Cost savings is the first and most obvious; the obsolescence of the platform and the business risk it represents come next; and the ever-growing shortage of skilled professionals to support this legacy is well known and the source of some amusing stories.

One particular driving force that normally falls off the radar is the constraints mainframes impose on businesses, preventing them from using modern technologies. Staying locked into the limitations of mainframes can be the poison keeping companies from unlocking their market potential.

Data center extension

In this realm, the infrastructure of your cloud provider is treated as an extension of your on-premises infrastructure. This gives you the ability to support applications that need to run at your data center. There are four broad use cases:

  • Cloud bursting: In this application deployment model, the workload runs primarily on on-premises infrastructure. If the demand for capacity increases, you branch out and AWS resources are utilized. There are two main variants of cloud bursting:
    • Bursting for compute resources: You consume burst compute capacity on AWS through Amazon EC2 and the managed container services Amazon ECS, Amazon EKS, and AWS Fargate.
    • Bursting for storage: In this case, you can integrate your applications with Amazon S3 APIs and leverage AWS Storage Gateway. This offering enables on-premises workloads to use AWS cloud storage, which is exposed to on-premises systems as network file shares (File Gateway) for file storage or iSCSI targets (Tape Gateway and Volume Gateway) for block storage.
  • Backup and disaster recovery: Customers can leverage the power of object storage with Amazon S3, use the data-bridging strategies presented by AWS Storage Gateway, back up their applications with AWS Backup, and move or synchronize data between sites and AWS with AWS DataSync. For disaster recovery strategies based on file data hosted on premises that needs to be transferred to the AWS cloud, you can leverage AWS Transfer for Secure File Transfer Protocol (SFTP).
  • Distributed data processing: Certain applications can be deployed with functionality split between on-premises data centers and the AWS cloud. In this scenario, we normally expect the low-latency or local data processing components to stay close to the local network on-premises and other components delivering additional functionality to reside on AWS. In the cloud portion, you can benefit from a myriad of services such as massive asynchronous data processing, analytics, compliance, long-term archiving, and machine learning-based inference. These capabilities are powered by services such as AWS Storage Gateway, AWS Backup, AWS DataSync, AWS Transfer Family, Amazon Kinesis Data Firehose, and Amazon Managed Streaming for Apache Kafka (Amazon MSK), which act as enablers to use the imported data as the source for analytics, machine learning, serverless, and containers.
  • Geographic expansion: AWS is constantly expanding and evaluating the feasibility of deploying new Regions across the globe, but it's unrealistic to expect Regions to be deployed to the tune of thousands or even hundreds of locations. You may need to deploy an application in a place where you are still unable to leverage an AWS Region. There might also be reasons why workloads need to stay close to your end users, such as low latency, data sovereignty, local data processing, or compliance. Traditional approaches such as deploying your own physical infrastructure can be challenging, costly, or constrained by legal requirements and local laws, but AWS Outposts can be instrumental in fulfilling this use case if it is available in that geography. This information is easily accessible on the product FAQ page (https://aws.amazon.com/outposts/rack/faqs/).
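The cloud-bursting model described above boils down to a threshold decision: keep the workload on premises until utilization crosses a comfort ceiling, then send the overflow to the cloud. The following Python sketch is purely illustrative – the function name, thresholds, and capacity units are hypothetical, not part of any AWS API:

```python
def plan_capacity(demand, on_prem_capacity, burst_threshold=0.8):
    """Split load between on-premises capacity and cloud burst capacity.

    Bursting kicks in once demand exceeds a utilization threshold
    (80% of on-premises capacity by default, an arbitrary example value).
    """
    trigger = on_prem_capacity * burst_threshold
    if demand <= trigger:
        # Demand fits comfortably on premises; no cloud resources needed.
        return {"on_prem": demand, "cloud": 0}
    # Hold on-premises infrastructure at its ceiling and send the
    # overflow to cloud resources (e.g., EC2 behind an Auto Scaling group).
    return {"on_prem": trigger, "cloud": demand - trigger}

print(plan_capacity(demand=50, on_prem_capacity=100))   # no bursting needed
print(plan_capacity(demand=120, on_prem_capacity=100))  # overflow bursts to cloud
```

In practice, the trigger would be driven by live metrics (CPU, queue depth, request latency) rather than a single static number, and the "cloud" share would translate into a scaling action on AWS.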

Edge computing

Certain environments, such as factories, mines, ships, and windmills, may have edge computing needs. Outposts addresses this use case with its smallest form factor, Outposts servers – these scenarios are unlikely to be addressed with the Outposts rack. However, the requirement of a connection to a parent Region remains. When the requirements specifically involve harsh conditions, disconnected operation, or air-gapped environments, customers can use AWS Snowball Edge devices. These ruggedized devices are capable of operating while fully disconnected and use Amazon EC2 compute resources to run analytics, machine learning, and traditional IT workloads at the edge. Data can be preprocessed locally and then transferred to the AWS cloud for subsequent advanced analysis and durable retention.

Another edge computing offering is AWS IoT Greengrass, which you can run on Outposts servers. Edge applications generate data that may need to be consumed locally to identify events and trigger a near real-time response from onsite equipment and devices. With AWS IoT Greengrass, you can deploy Lambda functions to core devices using resources such as cameras, serial ports, or GPUs. Applications on these devices can quickly retrieve and process local data while remaining operational, withstanding fluctuations in connectivity to the cloud. You can also optimize the cost of running apps deployed at the edge by using AWS IoT Greengrass to analyze data locally before forwarding it to the cloud.
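The local-processing pattern described above can be sketched as a small Lambda-style handler of the kind a Greengrass core device could run. Everything here is a hypothetical illustration – the event shape, threshold, and function name are assumptions, not an AWS-defined interface:

```python
# Illustrative threshold for an anomalous sensor reading (assumed value).
TEMP_ALERT_THRESHOLD_C = 85.0

def handler(event, context=None):
    """Inspect a local sensor reading at the edge.

    Anomalies trigger an immediate local response and are flagged for
    upload; normal readings are held back, reducing traffic to the cloud.
    """
    reading = event["temperature_c"]
    if reading >= TEMP_ALERT_THRESHOLD_C:
        # Near real-time local action: respond to onsite equipment without
        # waiting for, or depending on, cloud connectivity.
        return {"action": "local_alert", "forward_to_cloud": True}
    # Normal readings need no immediate action; they can be batched for
    # later, low-priority upload (or dropped entirely).
    return {"action": "none", "forward_to_cloud": False}

print(handler({"temperature_c": 91.2}))
print(handler({"temperature_c": 64.0}))
```

The design point is that the decision happens on the device: only the anomaly needs to reach the cloud promptly, which is how local analysis keeps both response times and data transfer costs down.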

To close this section on use cases, it is worth highlighting the uniqueness of AWS Outposts. This is a product designed with one clear statement in mind: it must be, as much as possible, a portion of an AWS data center stretching out to a customer facility. This paradigm drove product development and will drive product evolution. Anyone using AWS Outposts expects nothing other than AWS technology, with this expertise applied to the product so it becomes increasingly valuable.

If we look at AWS's pace of innovation, how innovative and visionary its teams are, and how resolute AWS is in advancing with speed and strength without being careless or resting on its laurels, we can safely say that the best is yet to come.