Simplifying Hybrid Cloud Adoption with AWS

By: Frankie Costa Negro

Overview of this book

The hybrid edge specialty is often misunderstood because it began with an on-premises-focused view encompassing everything not running inside the traditional data center. If you too have workloads that need to live on premises and need a solution to bridge the gap between both worlds, this book will show you how AWS Outposts allows workloads to leverage the benefits of the cloud running on top of AWS technology. In this book, you’ll learn what the Edge space is, the capabilities to look for when selecting a solution to operate in this realm, and how AWS Outposts delivers. The use cases for Outposts are thoroughly explained and the physical characteristics are detailed alongside the service logical constructs and facility requirements. You’ll gain a comprehensive understanding of the sales process—from order placement to rack delivery to your location. As you advance, you’ll explore how AWS Outposts works in real life with step-by-step examples using AWS CLI and AWS Console before concluding your journey with an extensive overview of security and business continuity for maximizing the value delivered by the product. By the end of this book, you’ll be able to create compelling hybrid architectures, solve complex use cases for hybrid scenarios, and get ready for your way forward with the help of expert guidance.
Table of Contents (14 chapters)
Part 1: Understanding AWS Outposts – What It Is, Its Components, and How It Works
Part 2: Security, Monitoring, and Maintenance
Part 3: Maintenance, Architecture References, and Additional Information

Defining hybrid, edge, and rugged edge IT spaces

As an enterprise, Amazon evolved rapidly by overcoming challenges of running large-scale applications that, at the time, could not be addressed either technically or economically. This history might have led us to conclude that AWS was unlikely to develop a product resembling a traditional server rack you could find in any regular data center.

As with any market or industry, things change. New technologies arise, paradigms shift, and new trends pose new challenges that require new solutions. It was no different for the way enterprises consume corporate IT services. In the past couple of years, the hybrid model gained enough momentum to become one of the preferred ways for enterprises to run their business.

This is not strange by any means. You have the start-up sector, which is cloud-native and certainly does not see any reason to have a physical infrastructure of its own. Start-up companies only need a solid connection to the internet and personal equipment to carry out the development work and the administrative work with cloud providers.

At the other end of the spectrum, we had companies over the past few decades doing IT the traditional way, operating fully on-premises. But in recent years, the market has developed the perception that it does not need to be one way or the other.

Back then, your options for running IT infrastructure outside your own local data center relied on offerings from third-party specialized data center providers. Offerings such as hosting and co-location were extremely popular at the time, and are still available today. If you could procure a good leased line to connect your site(s) with the provider’s site, any of these options was available to you.

At best, you could have one of these providers supplying and managing all the necessary equipment to run your business while leveraging the OPEX financial model. Your IT team would take care of services and Line-Of-Business (LOB) applications and you would be in business. For some companies, the CAPEX model made sense as purchased equipment became assets for the company and added value to the balance sheets.

Times change, and the advent of the cloud challenged the constraints and limitations of traditional data centers. Andrew Jassy, currently the president and CEO of Amazon, described in an interview with TechCrunch how AWS was conceived at its inception to be the operating system for the internet, designed to reliably run applications and services at massive scale.

When AWS came to life in 2006, it wasn’t at all clear that it would become what it is today. From humble beginnings, with little fanfare or marketing, to a cloud behemoth just 15 years later, AWS and the cloud started out as yet another technology trend that had yet to prove itself reliable and solid. The early adopters pioneered the new cloud paradigm and got their feet wet with infrastructure and services that existed beyond their reach, where they could not even schedule a visit to the data center.

Adopting cloud services was an exercise in dual IT landscape design. Any given service lived either on-premises or in the cloud. The connection between the two existed basically to exchange data for migration or backup and, occasionally, for very simple interaction between systems with multiple components. It was difficult to consider a three-tier architecture where one tier sat in the cloud and the others on-premises. Internet bandwidth was scarce, connections were not strongly reliable, and you often had to resort to VPNs for security because it was challenging to procure dedicated links to connect directly with cloud providers at the time.

As the cloud trend reached critical mass and established itself as a valid path, businesses faced a new reality: the cloud had to be considered within their technology plans, and a thorough assessment of the IT landscape was necessary to devise a strategy that seriously encompassed cloud offerings. A vague statement that the cloud was just hype was no longer acceptable to business owners – it was here to stay.

This new way of consuming corporate IT services was dubbed hybrid cloud and described as a combination of cloud services running alongside traditional on-premises data center solutions. Not surprisingly, this model’s point of view was oriented from the data center out toward the cloud, because it was primarily articulated by on-premises infrastructure providers whose vision centered on the traditional model.

The possibility of a business going all in with the cloud while shutting down all traditional data centers was somewhat far-fetched, but it was delineated as a real alternative. While it is clear that not all workloads will be a fit for the cloud and some may remain on-premises, a significant shift of IT infrastructure to the cloud can realistically be envisioned.

Further developments in this trend revealed that one piece of the puzzle was missing. If considered as a binary choice, an on-premises data center versus the cloud, any move could be a significant risk because there was no middle ground. IT teams were facing an all-or-nothing situation where systems with multiple components would have to be moved as a whole, likely in one go.

Evaluating how a system would perform when running in the cloud was complex because tests had to be carried out at production size and capacity without close contact with all the surrounding systems and services. Even with extensive tests, a cutover date was an event of high significance, full of anxiety, and likely to require a long maintenance window. Clearly, an intermediary infrastructure bridging both worlds would be beneficial.

Initial attempts to fill this gap came from traditional software providers, offering solutions that ran on-premises and used the kinds of technologies cloud providers offered. This was the private cloud – an attempt to bring the cloud operational model to customers’ on-premises data centers. Running on your own infrastructure at your data centers or co-location sites, the promise was to leverage cloud-like services and technologies at your facility, or closer to you.

It was a good approach and made good sense. IT teams could become familiar with cloud technologies and how system operations are carried out in the cloud while remaining relatively comfortable at home with their own equipment, learning at their own pace. As IT professionals became familiar with the cloud model, the transition to a cloud provider was facilitated, as its value and challenges became clearer.

Even with a good portion of the market leveraging the private cloud offering, there was still the inescapable fact that on-premises, you could not leverage the cloud-specific services and technologies. Moreover, you would never benefit from the scalability and economies of scale offered by cloud providers. It was you running cloud-like services and still managing the necessary infrastructure.

Cloud adoption has gained significant momentum in recent years, and we can now see how start-up companies are said to be born in the cloud, or cloud natives. These businesses would never have considered creating their products and applications on on-premises infrastructure. Such offerings would not be possible if they were conceived within the limitations and paradigms of traditional technologies.

Systems have become increasingly complex, made up of many moving parts as opposed to the monolithic approach of yesterday. Technologies favored distributed systems and highly specialized, smaller microservices. This movement greatly amplified the appeal of the cloud, built on pay-per-use, faster innovation, elasticity, and scale. For more information, refer to the video The Six Main Benefits of Cloud Computing.

Fast forward to today and, considering the latest world developments, the cloud has completely solidified its position and, to be fair, exploded in adoption, accelerated significantly by the challenges imposed by recent events such as the pandemic. The cloud model was battle-tested and made it through, to the point that it became the de facto standard foundation of technology.

While the future of the cloud seems to be clear skies, there is another fact that still holds: the vast majority of IT spending is still on traditional infrastructure and data centers. While this seems to be a wonderful opportunity to thrive in a market where the largest chunk of business is yet to be conquered, it also means that the missing key piece to act as the catalyst for the widespread adoption of the cloud is more crucial than ever.

As the next step toward blurring the boundaries between the cloud and the so-called physical world, the concept of hybrid has been redefined. Hybrid is considered to be this enabler: an indistinguishable middle ground where on-premises and the cloud live together in a harmonious symbiosis in which both parties benefit from each other. To amplify that notion, the term edge was added to the vernacular.

What we are now seeing is the original hybrid concept in reverse. Now, it originates in the cloud and branches out to the world in the form of edge nodes, where any given data center is considered to be one of these nodes. Effectively, the cloud aims to be everywhere, encompassing all kinds of businesses and places, powered by the recent advancements in high-speed wireless connectivity through 5G networks and IoT devices and sensors.

To make it clearer, an edge node is considered to be anywhere you could run some form of computing, be it large, small, or tiny. Naturally, a family house, a hospital, a restaurant, a crop field, an underground mine, and a cargo ship are radically different environments. Their suitability to accommodate electronic components and their connectivity conditions vary widely, and so will the longevity of the IT equipment running there.

To better describe these components when deployed in harmful and aggressive environments, this space is conceptualized as the rugged edge, where equipment must withstand harsh usage conditions and incorporate design characteristics that allow prolonged, normal operation under those circumstances. Equipment built for this purpose boasts specs rated for severe thermal, mechanical, and environmental conditions.

Today, cloud companies are challenging themselves to create technologies that will propel the ultra-connected world, where technology is pervasive, data is collected massively everywhere, and information is available in near real time. Hybrid solutions play a fundamental role in this game, paving the way for cloud providers to extend all over the world and become the infrastructure, not just one infrastructure.