Becoming KCNA Certified

By: Dmitry Galkin

Overview of this book

The job market for cloud and cloud-native technologies is growing and becoming increasingly competitive, which makes certifications such as KCNA a great way to stand out from the crowd while learning about the latest advancements in cloud technologies. Becoming KCNA Certified doesn't just give you the practical skills needed to deploy and connect applications in Kubernetes; it also prepares you to pass the Kubernetes and Cloud Native Associate (KCNA) exam on your first attempt.

The book starts by introducing you to cloud-native computing, containers, and Kubernetes through practical examples that let you test the theory for yourself. You'll learn how to configure and provide storage for your Kubernetes-managed applications and explore the principles of modern cloud-native architecture and application delivery, giving you a well-rounded view of the subject.

Once you've worked through the theoretical and practical parts of the book, you can test what you've learnt with two mock exams, complete with explanations of the answers, so you'll be well prepared for the KCNA exam. By the end of this Kubernetes book, you'll have everything you need to pass the KCNA exam and forge a career in Kubernetes and cloud-native computing.
Table of Contents (22 chapters)

  • Part 1: The Cloud Era
  • Part 2: Performing Container Orchestration
  • Part 3: Learning Kubernetes Fundamentals
  • Part 4: Exploring Cloud Native
  • Part 5: KCNA Exam and Next Steps

Monolithic versus microservices applications

To understand the difference between monolithic and microservice-based applications, let us reflect on a real-world example. Imagine that a company runs an online hotel booking business. All reservations are made and paid for by the customers via a corporate web service.

The traditional monolithic architecture for this kind of web application would have bundled all the functionality into a single, complex piece of software that might have included the following:

  • Customer dashboard
  • Customer identity and access management
  • Search engine for hotels based on criteria
  • Billing and integration with payment providers
  • Reservation system for hotels
  • Ticketing and support chat

A monolithic application is tightly coupled (bundled): all the business and user logic lives in one codebase and must be developed and updated as a whole. That means if a change has to be made to the billing code, the entire application has to be updated, then carefully tested and released to the production environment. Even a small change could potentially break the whole application and impact the business by making it unavailable for an extended period.

With a microservices architecture, this very same application could be split into several smaller pieces that communicate with each other over the network, each fulfilling its own purpose. Billing, for example, could be performed by four smaller services:

  • Currency converter
  • Credit card provider integration
  • Bank wire transfer processing
  • Refund processing

Essentially, microservices are a group of small applications where each is responsible for its own small task. These small applications communicate with each other over the network and work together as a part of a larger application.
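
To make this concrete, here is a minimal sketch of what one such billing microservice could look like: a toy currency converter that exposes a single HTTP endpoint, written with Python's standard library. The endpoint path, port, and exchange rates are illustrative assumptions, not something defined by the book:

```python
# A toy "currency converter" microservice: one small, independently
# deployable HTTP service. Names, port, and rates are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Hard-coded rates stand in for a real data source.
RATES_TO_EUR = {"USD": 0.92, "GBP": 1.17, "EUR": 1.0}

class CurrencyConverter(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expected request: /convert?amount=100&currency=USD
        query = parse_qs(urlparse(self.path).query)
        try:
            amount = float(query["amount"][0])
            currency = query["currency"][0].upper()
            result = {"eur": round(amount * RATES_TO_EUR[currency], 2)}
            status = 200
        except (KeyError, ValueError):
            result = {"error": "expected ?amount=<number>&currency=<USD|GBP|EUR>"}
            status = 400
        body = json.dumps(result).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Other billing services (refunds, card payments) would call this
    # endpoint over the network instead of linking the logic into a monolith.
    HTTPServer(("0.0.0.0", 8080), CurrencyConverter).serve_forever()
```

Another billing service, such as refund processing, would call it with a plain HTTP request (for example, GET /convert?amount=100&currency=USD) and could be developed, released, and scaled without touching the converter's code.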

The following figure demonstrates the differences between monolithic and microservice architectures:

Figure 1.4 – Comparison of monolithic and microservice architectures

This way, all other parts of the web application can also be split into multiple smaller independent applications (microservices) communicating over the network. The advantages of this approach include the following:

  • Each microservice can be developed by its own team
  • Each microservice can be released and updated separately
  • Each microservice can be deployed and scaled independently of others
  • A single microservice outage will only impact a small part of the overall functionality of the app

Microservices are an important part of cloud-native architectures, and we will review in detail the benefits as well as the challenges associated with microservices in Chapter 9, Understanding Cloud Native Architectures. For the moment, let’s get back to containers and why they need to be orchestrated.

When each microservice is packaged into a container, the total number of containers can easily reach tens or even hundreds for especially large and complex applications. In such a complex, distributed environment, things can quickly get out of control.

A container orchestration system is what helps us keep a large number of containers under control. It simplifies the management of containers by grouping application containers into deployments and automating operations such as the following (a conceptual sketch follows the list):

  • Scaling microservices depending on the workload
  • Releasing new versions of microservices and their updates
  • Scheduling containers based on host utilizations and requirements
  • Automatically restarting failed containers and failing traffic over to healthy instances
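
The following toy sketch (plain Python, with hypothetical service names) illustrates the idea that underpins all of these operations: the orchestrator runs a control loop that continuously compares the desired state you declare, such as the number of replicas per microservice, with the actual state of the cluster and corrects any drift. Real orchestrators such as Kubernetes implement this far more elaborately, with scheduling, health checks, and rolling updates:

```python
# A toy reconciliation loop: the user declares a desired state (replicas per
# microservice) and the "orchestrator" keeps adjusting the actual state to
# match it. Service names and counts are made up for illustration.
import time

desired_replicas = {"currency-converter": 3, "refund-processor": 2}
running_replicas = {"currency-converter": 1, "refund-processor": 0}

def reconcile(desired, running):
    """Bring the running replica counts in line with the desired ones."""
    for service, want in desired.items():
        have = running.get(service, 0)
        if have < want:
            print(f"{service}: starting {want - have} container(s)")
        elif have > want:
            print(f"{service}: stopping {have - want} container(s)")
        running[service] = want  # pretend the containers were (re)started

if __name__ == "__main__":
    for _ in range(3):  # a real control loop runs forever
        reconcile(desired_replicas, running_replicas)
        time.sleep(1)
```

This declarative, self-correcting loop is what allows an orchestrator to restart failed containers and scale services up or down without manual intervention.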

As of today, there are many container and workload orchestration systems available, including these:

  • Kubernetes
  • OpenShift (with its community distribution, the Open Kubernetes Distribution (OKD))
  • HashiCorp Nomad
  • Docker Swarm
  • Apache Mesos

As you already know from the book title, we will focus only on Kubernetes, and there won't be any comparison made between these five. In fact, Kubernetes has an overwhelmingly higher market share and, over the years, has become the de facto platform for orchestrating containers. With a high degree of confidence, you can concentrate on learning Kubernetes and forget about the others, at least for the moment.