Hands-On Linux Administration on Azure - Second Edition

By: Kamesh Ganesan, Rithin Skaria, Frederik Vos

Overview of this book

Thanks to its flexibility in delivering scalable cloud solutions, Microsoft Azure is a suitable platform for managing all your workloads. You can use it to implement Linux virtual machines and containers, and to create applications in open source languages with open APIs. This Linux administration book first takes you through the fundamentals of Linux and Azure to prepare you for the more advanced Linux features in later chapters. With the help of real-world examples, you’ll learn how to deploy virtual machines (VMs) in Azure, expand their capabilities, and manage them efficiently. You will manage containers and use them to run applications reliably, and in the concluding chapter, you'll explore troubleshooting techniques using a variety of open source tools. By the end of this book, you'll be proficient in administering Linux on Azure and leveraging the tools required for deployment.

Fundamentals of Cloud Computing

When you first start learning a new subject in Information Technology (IT), you'll usually begin by studying the underlying concepts (that is, the theory). You'll then familiarize yourself with the architecture, and sooner or later you'll start playing around and getting hands-on to see how it works in practice.

However, in cloud computing, it really helps if you understand not only the concepts and the architecture but also where they come from. We don't want to give you a history lesson; rather, we want to show you that inventions and ideas from the past are still in use in modern cloud environments. This will give you a better understanding of what the cloud is and how to use it within your organization.

The following are the key fundamentals of cloud computing:

  • Virtualization
  • Software-Defined Datacenter (SDDC)
  • Service-Oriented Architecture (SOA)
  • Cloud services
  • Cloud types

Let's take a look at each of these terms and understand what they refer to.

Virtualization

In computing, virtualization refers to the creation of a virtual form of a device or resource, such as a server, storage device, network, or even an operating system. The concept of virtualization came into the picture when IBM developed its time-sharing solutions in the late 1960s and early 1970s. Time-sharing refers to sharing computer resources among a large group of users, increasing their productivity and eliminating the need to purchase a computer for each user. This was the beginning of a revolution in computer technology: the cost of purchasing new computers dropped significantly, and organizations could make use of the under-utilized computer resources they already had.

Nowadays, this type of virtualization has evolved into container-based virtualization. Virtual machines have their own operating system, which is virtualized on top of a physical server; containers, on the other hand, all share the underlying operating system of the machine (physical or virtual) they run on. We will talk more about containers in Chapter 9, Container Virtualization in Azure.
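
You can see this shared-kernel model for yourself with Docker. The following is a minimal sketch, assuming Docker is installed and your user is allowed to run it:

    # The kernel release reported on the host...
    uname -r

    # ...is the same kernel release reported inside a container,
    # because a container shares the host's kernel instead of
    # booting an operating system of its own.
    docker run --rm alpine uname -r

Both commands print the same kernel release; a virtual machine, by contrast, would boot and report its own kernel.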

Fast-forward to 1999, when another type of virtualization, called hardware virtualization, was introduced by companies such as VMware. In its product VMware Workstation, VMware added a layer on top of an existing operating system that presented a standardized set of virtual hardware, instead of the physical elements, to run a virtual machine. This layer became known as a hypervisor. Later, the company built its own operating system that specialized in running virtual machines: VMware ESXi (formerly known as ESX).

In 2008, Microsoft entered the hardware-virtualization market with the Hyper-V product, as an optional component of Windows Server 2008.

Hardware virtualization is all about separating software from hardware, breaking the traditional boundaries between the two. A hypervisor is responsible for mapping virtual resources onto physical resources.
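
On a Linux host, you can check whether the CPU exposes the extensions that hardware virtualization relies on. A minimal sketch:

    # Intel CPUs advertise the vmx flag and AMD CPUs the svm flag;
    # either one indicates hardware virtualization support. No output
    # means the CPU lacks the feature or it's disabled in firmware.
    grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u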

This type of virtualization was the enabler for a revolution in datacenters:

  • Because of the standard set of hardware, every virtual machine can run on any physical machine where the hypervisor is installed.
  • Since virtual machines are isolated from each other, if a particular virtual machine crashes, it will not affect any other virtual machine running on the same hypervisor.
  • Because a virtual machine is just a set of files, you have new possibilities for backup, moving virtual machines, and so on.
  • New options became available to improve the availability of workloads, such as high availability (HA) and the ability to migrate a virtual machine even while it's running.
  • New deployment options also became available, for example, working with templates.
  • There were also new options for central management, orchestration, and automation, because everything is software-defined.
  • Resources can be isolated, reserved, and limited where needed, and shared where possible.

SDDC

Of course, if you can transform hardware into software for compute, it's only a matter of time before someone realizes you can do the same for network and storage.

For networking, it all started with the concept of virtual switches. Like every other form of virtualization, a virtual switch is nothing more than a network switch built in software instead of hardware.

The Internet Engineering Task Force (IETF) started working on a project called Forwarding and Control Element Separation (ForCES), a proposed standard interface to decouple the control plane and the data plane. In 2008, the first real switch implementation that achieved this goal appeared at Stanford University, using the OpenFlow protocol. Software-Defined Networking (SDN) became commonly associated with the OpenFlow protocol.

Using SDN, you have similar advantages as in compute virtualization:

  • Central management, automation, and orchestration
  • More granular security through traffic isolation and the enforcement of firewall and security policies
  • Shaping and controlling data traffic
  • New options available for HA and scalability
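
Azure virtual networks are an everyday example of SDN: you define an entire network in software. A minimal sketch using the Azure CLI, where the resource group and network names are placeholders:

    # Create a resource group to hold the network.
    az group create --name myResourceGroup --location westeurope

    # Create a software-defined network with one subnet; no physical
    # switch or router is touched anywhere in this process.
    az network vnet create \
      --resource-group myResourceGroup \
      --name myVirtualNetwork \
      --subnet-name mySubnet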

In 2009, Software-Defined Storage (SDS) development started at several companies, such as Scality and Cleversafe. Again, it's about abstraction: decoupling services (logical volumes and so on) from physical storage elements.

If you look into the concepts of SDS, you'll see that some vendors added a new feature to the existing advantages of virtualization: you can attach a policy to a virtual machine that defines the options you want, for instance, replication of data or a limit on the number of Input/Output Operations Per Second (IOPS). This is transparent to the administrator; the hypervisor and the storage layer communicate with each other to provide the functionality. Later on, this concept was also adopted by some SDN vendors.
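
Azure managed disks follow the same idea: you declare the performance and replication characteristics you want as properties of the disk, and the platform maps them onto physical storage. A minimal sketch, with placeholder names:

    # Create a 128 GiB managed disk; the Premium_LRS SKU selects
    # premium SSD performance (and thus the IOPS limits) with locally
    # redundant replication. Physical placement is handled entirely
    # by the platform.
    az disk create \
      --resource-group myResourceGroup \
      --name myDataDisk \
      --size-gb 128 \
      --sku Premium_LRS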

You can see that virtualization gradually changed the management of the different datacenter layers into a more service-oriented approach.

If you can virtualize every component of a physical datacenter, you have an SDDC. The virtualization of networking, storage, and compute functions made it possible to go further than the limits of one piece of hardware. SDDC makes it possible, by abstracting the software from the hardware, to go beyond the borders of a physical datacenter.

In an SDDC environment, everything is virtualized and often fully automated by software. It totally changes the traditional concept of datacenters. It doesn't really matter where a service is hosted or how long it's available (24/7 or on demand). There are also possibilities to monitor the service, and perhaps even to add options such as automatic reporting and billing, all of which make the end user happy.

SDDC is not the same as the cloud, not even a private cloud running in your datacenter, but you could argue that, for instance, Microsoft Azure is a full-scale implementation of SDDC—Azure is, by definition, software-defined.

SOA

In the same period that hardware virtualization became mainstream in datacenters and the development of SDN and SDS started, something new appeared in the world of software development for web-based applications: SOA, which offers several benefits. Here are some of the key points:

  • Minimal services that can talk to each other, using a protocol such as Simple Object Access Protocol (SOAP). Together, they deliver a complete web-based application.
  • The location of the service doesn't matter; the service must be aware of the presence of the other service, and that's about it.
  • A service is a sort of black box; the end user doesn't need to know what's inside the box.
  • Every service can be replaced by another service.

For the end user, it doesn't matter where the application lives or that it consists of several smaller services. In a way, it's like virtualization: what seems to be one physical resource, for instance, a storage LUN (Logical Unit Number), could actually include several physical resources (storage devices) in multiple locations. As mentioned earlier, if one service is aware of the presence of another service (which could be in another location), the two will act together and deliver the application. Many websites that we interact with daily are based on SOA.

The power of virtualization combined with SOA gives you even more options in terms of scalability, reliability, and availability.

There are many similarities between the SOA model and SDDC, but there is a difference: SOA is about the interaction between different services; SDDC is more about the delivery of services to the end user.

The modern implementation of SOA is microservices, provided by cloud environments such as Azure, running standalone or in containers using technology such as Docker.

Cloud Services

Here's that magic word: cloud. A cloud service is any service made available to organizations, companies, or users by a cloud computing provider such as Microsoft Azure. Cloud services are appropriate if you want to provide a service that:

  • Is highly available and always on demand.
  • Can be managed via self-service.
  • Has scalability, which enables a user to scale up (giving an existing node more resources, such as CPU and memory) or scale out (adding nodes); see the sketch after this list.
  • Has elasticity – the ability to dynamically expand or shrink the number of resources based on business requirements.
  • Offers rapid deployment.
  • Can be fully automated and orchestrated.
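
In Azure, scaling up and scaling out are everyday operations. A minimal sketch using the Azure CLI; the resource names, size, and capacity are placeholders:

    # Scale up: move an existing virtual machine to a larger size
    # (more vCPUs and more memory).
    az vm resize \
      --resource-group myResourceGroup \
      --name myVM \
      --size Standard_D4s_v3

    # Scale out: increase the number of instances in a virtual
    # machine scale set to five.
    az vmss scale \
      --resource-group myResourceGroup \
      --name myScaleSet \
      --new-capacity 5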

On top of that, there are cloud services for monitoring your resources and new types of billing options: most of the time, you only pay for what you use.

Cloud technology is about the delivery of a service via the internet, in order to give an organization access to resources such as software, storage, network, and other types of IT infrastructure and components.

The cloud can offer you many service types. Here are the most important ones:

  • Infrastructure as a Service (IaaS): A platform to host your virtual machines. Virtual machines deployed in Azure are a good example of this (see the sketch after this list).
  • Platform as a Service (PaaS): A platform to develop, build, and run your applications, without the complexity of building and running your own infrastructure. For example, there is Azure App Service, where you can push your code and Azure will host the infrastructure for you.
  • Software as a Service (SaaS): Ready-to-go applications, running in the cloud, such as Office 365.
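
Deploying an IaaS virtual machine is a one-line operation with the Azure CLI. A minimal sketch, assuming the resource group already exists; the names, image alias, and username are placeholders:

    # Create a Linux virtual machine from the Ubuntu LTS image and
    # generate SSH keys if none exist; Azure provisions the underlying
    # compute, storage, and network for you.
    az vm create \
      --resource-group myResourceGroup \
      --name myVM \
      --image UbuntuLTS \
      --admin-username azureuser \
      --generate-ssh-keys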

Even though the aforementioned are the key pillars of cloud services, you might also hear about FaaS (Function as a Service), CaaS (Containers as a Service), SECaaS (Security as a Service), and so on, as the number of service offerings in the cloud increases day by day. Function App in Azure would be an example of FaaS, Azure Container Service of CaaS, and Azure Active Directory of SECaaS.

Cloud Types

Cloud services can be classified based on their location or on the platform the service is hosted on. As mentioned in the previous section, based on the platform, we can classify cloud offerings as IaaS, PaaS, SaaS, and so on; based on location, however, we can classify clouds as follows:

  • Public cloud: All services are hosted by a service provider. Microsoft's Azure is an implementation of this type.
  • Private cloud: Your own cloud in your datacenter. Microsoft recently developed a special version of Azure for this: Azure Stack.
  • Hybrid cloud: A combination of a public and private cloud. One example is combining the power of Azure and Azure Stack, but you can also think about new disaster recovery options or moving services from your datacenter to the cloud and back if more resources are temporarily needed.
  • Community cloud: Multiple organizations with similar objectives or goals work on the same shared platform.

Choosing one of these cloud implementations depends on several factors; to name just a few:

  • Costs: Hosting your services in the cloud can be more expensive than hosting them locally, depending on resource usage. On the other hand, it can be cheaper; for example, you don't need to implement complex and costly availability options.
  • Legal restrictions: Some organizations are not able to use the public cloud. For example, the US Government has its own Azure offering called Azure Government. Likewise, Germany and China have their own Azure offerings.
  • Internet connectivity: There are still countries where the necessary bandwidth or even the stability of the connection is a problem.
  • Complexity: Hybrid cloud environments, in particular, can be difficult to manage; support for applications and user management can be challenging.