Learn OpenShift

By: Denis Zuev, Artemii Kropachev, Aleksey Usov

Overview of this book

Docker containers transform application delivery technologies to make them faster and more reproducible, and to reduce the amount of time wasted on configuration. Managing Docker containers in a multi-node or multi-datacenter environment is a big challenge, which is why container management platforms are required. OpenShift is a new generation of container management platforms built on top of both Docker and Kubernetes. It brings additional functionality to the table that is lacking in Kubernetes, and this functionality significantly helps software development teams take their processes to a whole new level. In this book, we'll start with an overview of container architecture, Docker, and CRI-O. Then, we'll look at container orchestration and Kubernetes. We'll cover OpenShift installation and its basic and advanced components. Moving on, we'll dive deeper into concepts such as deploying applications on OpenShift. You'll learn how to set up an end-to-end delivery pipeline while working with applications in OpenShift as a developer or DevOps engineer. Finally, you'll discover how to properly design OpenShift for production environments. This book gives you hands-on experience of designing, building, and operating OpenShift Origin 3.9, as well as building new applications or migrating existing applications to OpenShift.

Containers overview

Traditionally, software applications were developed following a monolithic architecture approach, meaning all the services or components were tightly coupled to each other. You could not take out one part and replace it with something else. That approach changed over time and evolved into the N-tier approach. The N-tier application approach is one step toward container and microservices architectures.

The major drawbacks of the monolithic architecture were its lack of reliability, scalability, and high availability. It was really hard to scale monolithic applications due to their nature. Their reliability was also questionable, because you could rarely operate or upgrade these applications without downtime. There was no way you could efficiently scale out a monolithic application, meaning you could not just add another one, five, or ten instances of it and let them coexist with each other.

We had monolithic applications in the past, but then people and companies started thinking about application scalability, security, reliability, and high availability (HA). That is what led to the N-tier design. The N-tier design is a standard application design, such as a 3-tier web application with a web tier, an application tier, and a database backend. Now it is all evolving into microservices. Why do we need them? The short answer is better numbers: microservices are cheaper to run, and much easier to scale and secure. Containerized applications take this to a whole new level, and this is where you can benefit from automation and DevOps.

Containers can be seen as a new generation of virtual machines, and they bring software development to a whole new level. A container is an isolated set of processes, rules, and resources inside a single operating system. This means that containers can provide the same benefits as virtual machines while using far less CPU, memory, and storage. There are several popular container providers, including LXC, rkt, and Docker, which is the one we are going to focus on in this book.
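
As a minimal illustration (assuming Docker is installed on the host), the following commands show that a container is not a full virtual machine: it shares the host's kernel and only sees its own isolated processes:

  uname -r                          # kernel version reported on the host
  docker run --rm alpine uname -r   # the same kernel version, reported from inside a container
  docker run --rm alpine ps         # only the container's own processes are visible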

Container features and advantages

This architecture brings a lot of advantages to software development.

Some of the major advantages of containers are as follows:

  • Efficient hardware resource consumption
  • Application and service isolation
  • Faster deployment
  • Microservices architecture
  • The stateless nature of containers

Efficient hardware resource consumption

Whether you run containers natively on a bare-metal server or on top of virtualization, containers allow you to utilize resources (CPU, memory, and storage) in a much more efficient manner. On a bare-metal server, containers allow you to run tens or even hundreds of the same or different containers, providing far better resource utilization than the usual single application running on a dedicated server. We have seen servers in the past whose utilization at peak times was only 3%, which is a waste of resources. And if you try to run several of the same or different applications directly on the same server, they are likely to conflict with each other. Even if they work, you are going to face a lot of problems during day-to-day operations and troubleshooting.

If you isolate these applications by introducing popular virtualization techniques such as KVM, VMware, Xen, or Hyper-V, you will run into a different issue. There is a lot of overhead because, in order to virtualize your application with any hypervisor, you need to install a full guest operating system on top of the hypervisor's host OS. That guest operating system needs CPU and memory of its own to function; for example, each VM has its own kernel and kernel space associated with it. A perfectly tuned container platform can give you up to four times more containers than standard VMs on the same hardware. This may be insignificant when you have five or ten VMs, but when we talk about hundreds or thousands, it makes a huge difference.
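
As a hedged sketch of how this density is managed in practice (the container name, image, and limits below are only illustrative), the docker CLI lets you cap each container's share of the host so that many containers can be packed onto one machine without starving each other:

  # Cap one container at half a CPU core and 256 MB of memory
  docker run -d --name app1 --cpus="0.5" --memory="256m" nginx:latest

  # Show live CPU and memory usage for all running containers
  docker stats --no-stream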

Application and service isolation

Imagine a scenario where we have ten different applications hosted on the same server. Each application has a number of dependencies (such as packages, libraries, and so on). Updating an application usually involves updating the process and its dependencies, and if you update all of the related dependencies, you will most likely affect the other applications and services and may stop them from working properly. To a degree, these issues are addressed by environment managers such as virtualenv for Python and rbenv/rvm for Ruby, and dependencies on shared libraries can be isolated via LD_LIBRARY_PATH, but what if you need different versions of the same package? Containers and virtualization solve that issue: both VMs and containers provide environment isolation for your applications.

But, in comparison to bare-metal application deployment, container technology (for example, Docker) provides an efficient way to isolate applications, their libraries, and other resources from each other. It not only allows these applications to coexist on the same OS, but also provides a level of security isolation, which is a must for every customer-facing and security-sensitive application, and it allows you to update and patch your containerized applications independently of each other.
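
To make the earlier point about conflicting package versions concrete, here is a minimal sketch (the image tags are only an example) that runs two versions of the same runtime side by side on one host, each isolated in its own container:

  # Two Python versions coexisting on the same host without any conflict
  docker run --rm python:2.7 python --version   # reports Python 2.7.x
  docker run --rm python:3.9 python --version   # reports Python 3.9.x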

Faster deployment

Using container images, discussed later in this book, allows us to speed up container deployment. We are talking about seconds to completely restart a container versus minutes or tens of minutes with bare-metal servers and VMs. The main reason for this is that restarting a container does not restart a whole OS; it only restarts the application itself.
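
As a quick, hedged illustration (using the stock nginx image as a stand-in for any containerized application), you can time a container restart yourself and compare it with rebooting a VM or a bare-metal server:

  # Start a container from an existing image
  docker run -d --name web nginx:latest

  # Restarting it restarts only the application process, not an operating system
  time docker restart web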

Microservices architecture

Containers bring application deployment to a whole new level by introducing microservices architecture. What this essentially means is that a monolithic or N-tier application usually has many different services communicating with each other. Containerizing your services allows you to break your application down into multiple pieces and work with each of them independently. Let's say you have a standard application that consists of a web server, an application, and a database. You could put it on one or three different servers, on three different VMs, or in three simple containers, each running one part of the application. All these options require different amounts of effort, time, and resources. Later in this book, you will see how simple this is to do with containers.
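
A minimal sketch of the web/application/database example above using plain docker commands (the names, images, and port are placeholders; later chapters show more robust ways to wire services together):

  # A user-defined network lets the three containers reach each other by name
  docker network create demo-net

  docker run -d --name db  --network demo-net -e POSTGRES_PASSWORD=secret postgres:latest
  # my-app-image is a placeholder for your own application image
  docker run -d --name app --network demo-net my-app-image:latest
  docker run -d --name web --network demo-net -p 80:80 nginx:latest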

The stateless nature of containers

Containers are stateless, which means that you can bring containers up and down, create or destroy them at any time, and this will not affect your application performance. That is one of the greatest features of containers. We are going to delve into this later in this book.
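
As a small, hedged example (assuming the application keeps its state outside the container, for example in a database or on a mounted volume), destroying and recreating a stateless container is a routine, harmless operation:

  # Destroy a running stateless container at any time...
  docker rm -f web

  # ...and recreate it from the same image; the service comes back as if nothing happened
  docker run -d --name web -p 80:80 nginx:latest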