Azure Containers Explained

By: Wesley Haakman, Richard Hooper

Overview of this book

Whether you’re working with a start-up or an enterprise, decisions about which container technologies to use on Azure have a notable impact on your app migration and modernization strategies. This is where companies face challenges: choosing the right solutions and deciding when to move on to the next technology. Azure Containers Explained helps you make the right architectural choices for your solutions and get well-versed with the migration path to other platforms using practical examples. You’ll begin with a recap of containers as a technology and where you can store them within Azure. Next, you’ll explore the different Microsoft Azure container technologies and understand how each platform, namely Azure Container Apps, Azure Kubernetes Service (AKS), Azure Container Instances (ACI), Azure Functions, and Azure App Service, works – you’ll learn to implement them by grasping their respective characteristics and use cases. Finally, you’ll build your own container solution on Azure using best practices from real-world examples and successfully transform your business from a start-up to a full-fledged enterprise. By the end of this book, you’ll be able to effectively cater to your business and application needs by selecting and modernizing your apps using various Microsoft Azure container services.
Table of Contents (22 chapters)

Part 1: Understanding Azure Container Technologies
Part 2: Choosing and Applying the Right Technology
Part 3: Migrating Between Technologies and Beyond

Understanding containers and their benefits

Virtualization has been around for a long time, and we can go as far as to say that it is the duct tape that holds infrastructures together. Different platforms provided different features (think of VMware, Hyper-V, and KVM), but all had the same goal: hardware virtualization. We can now run multiple operating systems on a single piece of hardware, isolating them from each other and minimizing overhead. We got used to that. However, it did not answer all the questions or resolve the challenges we had. The world wanted to minimize overhead even more, add more flexibility, and have an answer to the comment, “it worked on my machine!”

Containers, in different forms such as the Unix chroot system and FreeBSD jails, may have been around for much longer than traditional hardware virtualization, but they only became really popular in their current form with the introduction of Docker and the Open Container Initiative (OCI). The OCI was founded by Docker and other leaders in the container ecosystem in June 2015, and it maintains open source specifications that ensure container images work across multiple container runtimes.

Container technology these days is essentially what we would call operating system virtualization, where we package code, libraries, and the runtime into a container image and run it on top of an operating system, using a container engine such as Docker. To make a comparison with hardware virtualization, you could say that the container engine is the hypervisor for the containers. Of course, there is much more to it when you really get into the nitty-gritty of container technologies, but we don’t need that level of understanding when navigating the Azure container landscape. Let’s see this in a visual representation.

Figure 1.1 – An overview of containers

In the preceding diagram, you can see we still have a server, but the capacity is distributed more efficiently. Where traditionally we would run one application per (virtualized) server, with container technology, we can now run multiple isolated containers on a single operating system and minimize overhead even more.

Important note

When we talk about running a container, we are actually running an instance that is based on a container image. The container image contains all the code, libraries, and the runtime, but is often colloquially referred to as a Docker container. Throughout this book, we will use the term container when referring to a container instance that is created from a container image.
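
To make that distinction concrete, here is a minimal, hypothetical sketch (the image name myapp and the file names are illustrative, not taken from the book): a Dockerfile describes the image, docker build creates it, and docker run starts a container instance from it.

  # Dockerfile – packages the runtime, libraries, and code into an image
  FROM python:3.11-slim                               # base layer: minimal OS userland plus the Python runtime
  WORKDIR /app
  COPY requirements.txt .
  RUN pip install --no-cache-dir -r requirements.txt  # application libraries
  COPY app.py .
  CMD ["python", "app.py"]                            # command the container runs when it starts

  # Build the image once, then start as many container instances from it as you like
  docker build -t myapp:1.0 .
  docker run --rm myapp:1.0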

Container characteristics

These containers have specific characteristics and can be used in multiple ways, each use case coming with its own set of benefits. Let’s take a look at these specific characteristics:

  • Containers are lightweight.
  • Containers are ephemeral.
  • Containers contain everything required from an application perspective, including the OS-specific binaries that would otherwise come from the underlying node Operating System (OS).
  • Containers have strong default isolation.
  • Containers contain the same content wherever you run them (they work on everyone’s computer).
  • Containers can run on Linux or Windows.

That’s a pretty interesting list, but those characteristics do come with some important side notes.

As containers are lightweight, they won’t take up too many resources, and you can run hundreds of them on a single system. Instead of running hundreds of virtual machines, you are now running just a couple with hundreds of containers. At some point, we need to look at efficiently managing those.

As containers are ephemeral, this has consequences for your solution: we’re talking stateless here. Containers also have strong default isolation, which means that two containers will not communicate with each other unless you explicitly allow them to. That, too, has consequences for your solution and software architecture.

These consequences are not all that bad. In fact, if you play by the rules, you will end up with a more scalable, secure, and future-proof solution.
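
As a small, hedged illustration of that default isolation (the container and image names here are hypothetical), the Docker CLI lets you opt in to communication by placing containers on a shared, user-defined network:

  # Two containers started on their own know nothing about each other.
  # Placing them on an explicitly created network is the opt-in.
  docker network create app-net
  docker run -d --name backend  --network app-net myapp-api:1.0
  docker run -d --name frontend --network app-net myapp-web:1.0
  # On the shared network, "frontend" can now resolve and reach "backend" by name.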

Container benefits

Maybe you could already tell from the previous paragraphs that there are definitely benefits to using container technologies:

  • Containers contain everything you need to run your software.
  • Containers are extremely scalable.
  • Containers don’t have much overhead.
  • Containers are portable.
  • Containers start faster than a traditional virtual machine.

That sounds very interesting (even for the financially minded people out there!). But what does it mean? Well, a container contains everything you need to run your software. Within your container image, you store the parts of the OS you need, the libraries you are using, and, of course, your code. That container image is stored in what we call a registry and can be used whenever you want to start your container. Whether that container is running in the cloud, on your local machine, or in your refrigerator (if it supports it), it will always have the same contents. It works on everyone’s machine.

Having such a small footprint means that containers can be started really quickly and scaled just as fast. As containers also have significantly less overhead compared to traditional configurations, instead of having to deploy multiple virtual machines to host multiple instances of your software, you can now do that by running a number of small containers on the same machine.
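
A quick sketch of that idea, again with hypothetical names: instead of provisioning extra virtual machines, you start additional containers from the same image on the same host, each mapped to its own port:

  # Three instances of the same image on one machine, started in seconds
  docker run -d -p 8081:80 --name web-1 myapp:1.0
  docker run -d -p 8082:80 --name web-2 myapp:1.0
  docker run -d -p 8083:80 --name web-3 myapp:1.0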

Important note

A container registry is a repository that contains container images, which can be pulled by other services to start an instance of a container. Microsoft Azure offers a service called Azure Container Registry that can be integrated with other Azure services.
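
As a brief example of how that looks in practice (the registry name myregistry, resource group my-rg, and image tag are placeholders), you create a registry with the Azure CLI and push your local image to it so that other Azure services can pull it:

  az acr create --resource-group my-rg --name myregistry --sku Basic   # create the registry
  az acr login --name myregistry                                       # authenticate your local Docker client
  docker tag myapp:1.0 myregistry.azurecr.io/myapp:1.0                 # retag the image for the registry
  docker push myregistry.azurecr.io/myapp:1.0                          # push it so other services can pull it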

It is very likely that you are not looking to run all these containers on traditional on-premises hardware, but you want to leverage the global scalability, cost efficiency, redundancy, and security that public clouds such as Microsoft Azure have to offer. And we’re going to look into that right now!