Cloud Computing Demystified for Aspiring Professionals

By: David Santana

Overview of this book

If you want to upskill yourself in cloud computing domains to thrive in the IT industry, then you’ve come to the right place. Cloud Computing Demystified for Aspiring Professionals helps you master the cloud computing essentials and the key technologies offered by cloud service providers that you need to succeed in a cloud-centric job role. This book begins with an overview of the transformation from traditional infrastructure to modern-day cloud computing, along with the various types and models of cloud computing. You’ll learn how to implement secure virtual networks, virtual machines, and data warehouse resources, including the data lake services used in big data analytics, as well as when to use SQL and NoSQL databases and how to build microservices using multi-cloud Kubernetes services across AWS, Microsoft Azure, and Google Cloud. You'll also get step-by-step demonstrations of infrastructure, platform, and software cloud services, along with optimization recommendations from certified industry experts, through hands-on tutorials, self-assessment questions, and real-world case studies. By the end of this book, you'll be ready to successfully implement standardized cloud computing concepts, services, and best practices in your workplace.
Table of Contents (23 chapters)

  • Part 1: The Journey to Cloud Computing
  • Part 2: Implementing Cloud Deployment Models
  • Part 3: Cloud Infrastructure Services in Action
  • Part 4: Administrating Database and Security on the Cloud
  • Part 5: Roadmap for a Successful Journey in Cloud Engineering

The advent of cloud computing

In this section, I will define virtualization types and vendors and describe how virtualization differs from physical servers. I will then explore distributed computing API architecture and describe how demand has driven technology. Finally, I will define cloud computing models.

This section’s objectives are the following:

  • From physical to virtual
  • Virtualization contributions by vendor
  • Distributed computing APIs
  • Exponential growth

From physical to virtual

Cloud computing technology emerged from a multitude of innovations and computing requirements. This emergence included computer science advancements that leveraged the underpinnings of mainframe computing, which changed the way we do business. Let us not forget the ever-changing customer service-level expectations related to IT business continuity (BC).

The mainframe system’s features and architecture topology are among the important legacy technologies that, through evolution and several joint ventures among various stakeholders, contributed to the advent of cloud computing.

As described in the Genesis section, CP-40 provided a VM environment. Mainframes such as IBM’s System/360 hosted CP-40, which supported multiple VM operating system instances, arguably making it the very first hardware virtualization prototype.

Let us define virtualization first before we explain how Amazon, Microsoft, and Google use this underpinning to drive their ubiquitous services.

In the Genesis section, we saw how achievements in virtualization technology played an important role in the emergence of cloud computing. Understanding the intricacies of VMs—arguably referred to as “server virtualization”—is critical in the grand scheme of things.

Virtualization abstracts physical infrastructure resources to support running one or more VM guest operating systems, each resembling the same or a different computer operating system, on one physical host. This approach was pioneered in the 1960s by IBM, which developed products such as CP-40 and CP-67, arguably the very first virtualization technologies. While virtualization is one of the key technologies in the advent of cloud computing, this book will not delve into virtualization implementations such as hardware-assisted virtualization, paravirtualization, and operating system-level virtualization.
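
For instance, on a Linux host running a hypervisor such as KVM, the guest VMs sharing that single physical machine can be enumerated programmatically. The following is a minimal sketch using the libvirt Python bindings; the connection URI and the presence of any guests are assumptions about the local setup, not requirements of this book.

    # A minimal sketch: list the guest VMs running on a single physical host.
    # Assumes a local KVM/QEMU hypervisor and the libvirt Python bindings
    # (pip install libvirt-python); adjust the connection URI for your setup.
    import libvirt

    # Open a read-only connection to the local hypervisor
    conn = libvirt.openReadOnly("qemu:///system")

    try:
        for dom in conn.listAllDomains():
            state = "running" if dom.isActive() else "stopped"
            print(f"Guest VM: {dom.name()} ({state})")
    finally:
        conn.close()

Each domain reported here is an isolated guest operating system sharing the same physical CPU, memory, and storage, which is precisely the abstraction that cloud providers build upon.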

Over the years, many technology-driven companies have developed different virtualization offerings of varying types.

Virtualization contributions by vendor

VMware is a technology company known for virtualization. VMware launched VMware Workstation in the late ’90s, heralding virtualization software that allowed users to run one or more instances of x86 or x86-64 operating systems on a single personal computer.

Xen is an open source hypervisor project, commercialized by XenSource and later acquired by Citrix, that supports multiple computer operating systems running concurrently on the same hardware.

Citrix is a virtualization technology company that offers several virtualization products, such as XenApp (application virtualization) and XenDesktop (desktop virtualization). There is even a product for Apple devices that hosts Microsoft Windows desktops virtually. Citrix also offers XenServer, which delivers server virtualization. Additionally, Citrix offers the NetScaler product suite, in particular software-defined wide area networking (SD-WAN) and the NetScaler SDX and VPX networking appliances that support virtual networking.

Microsoft, known for its personal and business computer software, has contributed to virtualization as well. Microsoft started by offering application virtualization products and services: its App-V product delivered application virtualization, and soon thereafter, Microsoft developed Hyper-V, which supports server virtualization.

There are many more organizations that, through acquisition or development, have contributed to modern advancements in various virtualization nuances that are the foundation of cloud computing wonders today. But I would be remiss if I didn’t elaborate on the ubiquitous cloud’s distributed nature—or, more accurately denoted, distributed computing architecture.

Distributed computing APIs

Distributed computing, also known as distributed systems, rose out of the ’60s, and its earliest successful implementation was the ARPANET email infrastructure. Distributed computing architectures are categorically labeled as loosely coupled or tightly coupled. Client-server architectures are the best known and were prevalent during the traditional mainframe era. N-tier or three-tier architectures provide many of today’s modern cloud computing service architecture characteristics, in particular sending message requests to middle-tier services that queue requests for other consuming services. For example, in a three-tier web, application, and database server architecture, the application server, or an application queue-like service, acts as the middle tier, queuing input messages for other distributed programs to consume (input) and, if required, send (output).

Another aspect of distributed computing architectures is peer-to-peer (P2P), where all clients are peers that can provide either client or server functionality. Each peer or service communicates asynchronously, contains local memory, and can act autonomously.

Distributed system architectures deliver cost efficiency and increased reliability. Cloud computing service providers (SPs) offer distributed services that are loosely coupled, delivering cost-efficient infrastructure resources as a service; this is also due to distributed systems utilizing low-end hardware. The top three cloud computing providers are decreasing, if not eliminating, single points of failure (SPOFs), consequently providing highly available resources in a service-oriented architecture (SOA). These characteristics are derived from distributed computing.
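
To make the middle-tier queuing idea concrete, here is a minimal, hypothetical sketch using only the Python standard library: a web tier enqueues request messages, and an application-tier worker consumes them asynchronously, keeping the two tiers loosely coupled. The tier names and messages are illustrative assumptions; a production system would use a managed queue service rather than an in-process queue.

    # A minimal sketch of a loosely coupled middle tier: the web tier produces
    # request messages, and an application-tier worker consumes them from a queue.
    # All names here are hypothetical illustrations.
    import queue
    import threading

    request_queue = queue.Queue()          # middle-tier message queue

    def application_tier_worker():
        while True:
            message = request_queue.get()  # block until a request arrives (input)
            if message is None:            # sentinel: shut the worker down
                break
            print(f"Application tier processed: {message}")  # respond (output)
            request_queue.task_done()

    worker = threading.Thread(target=application_tier_worker)
    worker.start()

    # Web tier: enqueue requests without waiting on the application tier
    for request in ("GET /orders", "POST /orders", "GET /customers"):
        request_queue.put(request)

    request_queue.join()                   # wait until all requests are consumed
    request_queue.put(None)                # stop the worker
    worker.join()

Because the producer never waits on the consumer, either tier can be scaled or replaced independently, which is the loose coupling that cloud services exploit.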

Exponential growth

The rise of cloud computing is also arguably due to the exponential growth of the IT industry. This growth correlates directly with demands for high availability (HA), scalability, and BC in the event of planned or unplanned failures. It has also resulted in massive increases in energy consumption.

Traditional IT computing infrastructures must procure their own hardware as capital expenses. Additionally, they incur operating expenses, which include maintaining the computer operating systems and the operational costs of human services. Here is something to ponder: variable operational costs and fixed capital investments are to be expected. Fixed or capital costs are paid upfront and can be amortized across a larger number of users. However, operational costs may increase quickly with a larger number of users, so the total cost rises rapidly as users grow. Modern IT computing infrastructures such as the cloud offer a pay-per-use model, which gives cloud computing engineers and architects greater control over operational expenditures than is feasible in a traditional data center.
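
The cost behavior described above can be illustrated with a small, purely hypothetical calculation; the dollar figures below are made-up assumptions, not real pricing from any provider.

    # A hypothetical comparison of traditional (capex + opex) versus pay-per-use costs.
    # All figures are illustrative assumptions, not real vendor pricing.

    def traditional_total_cost(users, capex=100_000, opex_per_user=50):
        # Fixed capital cost is paid upfront; operational cost grows with users.
        return capex + opex_per_user * users

    def cloud_total_cost(users, pay_per_use_per_user=60):
        # No upfront hardware purchase; you pay only for what each user consumes.
        return pay_per_use_per_user * users

    for users in (100, 1_000, 10_000):
        print(f"{users:>6} users: traditional = ${traditional_total_cost(users):>9,},"
              f" cloud = ${cloud_total_cost(users):>9,}")

Under these assumed numbers, the traditional model is cheaper only once the upfront investment is amortized across many users, which is exactly the trade-off the pay-per-use model removes for smaller or variable workloads.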

Meeting the demands of HA and the capability to scale becomes increasingly important as the IT industry grows. Enterprise data centers, which are operated and managed by the corporation’s IT department, are known to procure expensive brand-name hardware and networking devices out of tradition and familiarity. Cloud architectures, however, are built with commodity hardware and network devices. The Amazon, Microsoft, and Google platforms choose low-cost disks and Ethernet to build their modular data centers. Cloud designs emphasize the performance/price ratio rather than performance alone.
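
As a rough illustration of that performance/price emphasis, the sketch below compares two hypothetical server configurations; the performance scores and prices are made-up assumptions used only to show the ratio at work.

    # Hypothetical performance/price comparison of brand-name versus commodity hardware.
    # The performance scores and prices are illustrative assumptions only.
    servers = {
        "brand-name server": {"performance": 120, "price": 12_000},
        "commodity server":  {"performance": 100, "price": 4_000},
    }

    for name, spec in servers.items():
        ratio = spec["performance"] / spec["price"]
        print(f"{name}: {ratio:.4f} performance units per dollar")

In this made-up comparison, the commodity server delivers less raw performance but far more performance per dollar, which is why cloud providers favor it at scale.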

As the number of global internet users continues to rise, so too has the demand for data center services, giving rise to concerns about growing data center energy utilization. The quantity of data traversing the internet has increased exponentially, while global data center storage capacity has grown severalfold.

These growth trends are expected to continue as the world consumes more and more data. In fact, energy consumption is one of the main contributors to on-premises capital and operational expenses.

Inevitably, this leads to rising concern about electricity utilization and, in turn, environmental issues such as carbon dioxide (CO2) emissions. Knowing the electricity use of data centers provides a useful benchmark for testing theories about the CO2 implications of data center services.

The cost of energy consumed by IT devices affects environmental and economic standards. Industrialized countries such as the US consume more energy than non-industrialized ones. The IT industry is essential to the global economy and plays a role in every sector and industry. As IT usage grows, demand will no doubt continue to increase, which makes it important to consider designing eco-friendly infrastructure architectures.

On-premises data centers, also referred to as enterprise data centers, require IT to handle and manage everything, including purchasing and installing the hardware, virtualization, operating system, and applications, and setting up the network, network firewall devices, and secure data storage. Furthermore, IT is responsible for maintaining the infrastructure hardware throughout a line-of-business (LOB) application’s life cycle. This imposes both significant upfront hardware costs and ongoing data center operating costs for tasks such as patching. Don’t forget that you should also factor in paying for resources regardless of utilization.

Cloud computing provides an alternative to the on-premises data center. The Amazon, Microsoft, and Google cloud providers are responsible for hardware procurement and overall maintenance costs, and they provide a variety of services you can use. You lease whatever hardware capacity and services you need for your LOB application, only when required, thus converting what had been a fixed capital expense into an operational expense. This allows the cloud computing engineer to lease hardware capacity and deliver modern software services that would be too expensive to purchase traditionally.
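
As a concrete illustration of leasing capacity only when required, here is a minimal sketch using the AWS SDK for Python (boto3). The AMI ID, instance type, and region are placeholder assumptions, and valid AWS credentials are assumed to be configured; the same pattern applies, with different SDKs, on Azure and Google Cloud.

    # A minimal sketch of pay-per-use capacity with boto3: launch a VM when needed,
    # then terminate it so charges stop. The AMI ID, instance type, and region are
    # placeholder assumptions; AWS credentials must already be configured.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Lease capacity on demand: launch a single small instance
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print(f"Launched instance {instance_id}")

    # ... run the LOB workload ...

    # Release the capacity when it is no longer required
    ec2.terminate_instances(InstanceIds=[instance_id])
    print(f"Terminated instance {instance_id}")

Because the instance exists only between the launch and terminate calls, you are billed only for the time the capacity is actually leased, which is the operational-expense model described above.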