IoT Edge Computing with MicroK8s

By: Karthikeyan Shanmugam

Overview of this book

Are you facing challenges with developing, deploying, monitoring, clustering, storing, securing, and managing Kubernetes in production environments because you're not familiar with infrastructure technologies? MicroK8s, a zero-ops, lightweight, CNCF-compliant Kubernetes with a small footprint, is the apt solution for you. This book gets you up and running with production-grade, highly available (HA) Kubernetes clusters on MicroK8s, using best practices and examples based on IoT and edge computing.

Beginning with an introduction to Kubernetes, MicroK8s, and IoT and edge computing architectures, this book shows you how to install MicroK8s, deploy sample apps, and enable add-ons (such as DNS and the dashboard) on the platform. You'll work with multi-node Kubernetes clusters on Raspberry Pi and networking plugins (such as Calico and Cilium), and implement service meshes, load balancing with MetalLB and Ingress, and AI/ML workloads on MicroK8s. You'll also understand how to secure containers; monitor infrastructure and apps with Prometheus, Grafana, and the ELK stack; manage storage replication with OpenEBS; resist component failure using an HA cluster; and more, as well as take a sneak peek into future trends. By the end of this book, you'll be able to use MicroK8s to build and implement scenarios for IoT and edge computing workloads in a production environment.
Table of Contents (24 chapters)
Part 1: Foundations of Kubernetes and MicroK8s
Part 2: Kubernetes as the Preferred Platform for IoT and Edge Computing
Part 3: Running Applications on MicroK8s
Part 4: Deploying and Managing Applications on MicroK8s
Frequently Asked Questions About MicroK8s

Getting Started with Kubernetes

Kubernetes is an open source container orchestration engine that automates how containerized applications are deployed, scaled, and managed. Since it was first released 7 years ago, it has made great strides in a short period. Along the way, it had to compete with, and outperform, container orchestration engines such as Cloud Foundry Diego, CoreOS's Fleet, Docker Swarm, Kontena, HashiCorp's Nomad, Apache Mesos, Rancher's Cattle, Amazon ECS, and more. Kubernetes now operates in an entirely different landscape: developers only need to master one container orchestration engine to be qualified for 90% of container-related jobs.

The Kubernetes container orchestration framework is a production-ready open source platform built on Google's 15+ years of experience running production workloads, combined with best-of-breed principles and practices contributed by the community. Kubernetes groups an application's containers into logical units for easier administration and discovery. Container primitives such as control groups (cgroups) have been part of the mainline Linux kernel since around 2007. Because containers are small and portable, a single host can run an exponentially higher number of containers than VMs, lowering infrastructure costs and allowing more programs to be deployed faster. However, containers didn't generate significant interest until Docker arrived in 2013 and addressed the usability concerns.

Docker is different from standard virtualization; it is based on operating-system-level virtualization. Unlike hypervisor virtualization, which uses an intermediation layer (the hypervisor) to run virtual machines on physical hardware, containers run in user space on top of the operating system's kernel. As a result, they're incredibly light and fast. This can be seen in the following diagram:

Figure 1.1 – Virtual machines versus containers
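
A quick way to convince yourself that a container runs directly on the host's kernel (assuming Docker is installed on the machine) is to ask a container for its kernel version. It reports the host's kernel release rather than a guest kernel:

# Prints the host's kernel release, because the container shares the host kernel
docker run --rm alpine uname -r

Running the same command inside a virtual machine would instead report the guest kernel installed in the VM image, which is the extra layer that hypervisor virtualization introduces.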

The Kubernetes container orchestration framework automates much of the operational effort that's necessary to run containerized workloads and services. This covers provisioning, deployment, scaling (up and down), networking, load balancing, and other tasks that software teams must perform to manage a container's life cycle. Some of the key benefits that Kubernetes brings to developers are as follows:

  • Declarative Application Topology: This describes how each service should be deployed, along with its dependencies on other services and its resource requirements. Because all of this data is in an executable format, we can test the deployment aspects of the application early in development and treat it as programmable application infrastructure (a minimal manifest sketch follows this list):
Figure 1.2 – Declarative application topology

  • Declarative Service Deployments: The update and rollback process for a set of containers is encapsulated, making it a repeatable and automated procedure.
  • Dynamically Placed Applications: This allows applications to be deployed in a predictable sequence on the cluster, based on application requirements, resources available, and governing policies.
  • Flexible Scheduler: There is a lot of flexibility in defining the conditions for assigning pods to a specific worker node, or to any of a set of worker nodes that meet those conditions.
  • Application Resilience: Containers and management platforms help applications be more robust in a variety of ways, as follows:
    • Resource consumption policies such as CPU and memory quotas
    • Handling failures using circuit breakers, timeouts, retries, and so on
    • Failover and service discovery
    • Autoscaling and self-healing
  • Self-Service Environments: These allow teams and individuals to create isolated environments for CI/CD, experimentation, and testing purposes from the cluster in real time.
  • Service Discovery, Load Balancing, and Circuit Breaker: Services can discover and consume other services without the use of application agents. There's more to Kubernetes than what is listed here.
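
As a rough sketch of the declarative style described in this list (the hello-web name, the nginx:1.25 image, and the resource figures are illustrative placeholders rather than values from this book), a Deployment and a matching Service can be applied to a MicroK8s cluster in a single step:

microk8s kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web             # hypothetical sample application
spec:
  replicas: 3                 # desired state; Kubernetes keeps three pods running
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.25     # placeholder image
        resources:
          requests:
            cpu: 100m
            memory: 64Mi
          limits:             # CPU and memory quotas, as mentioned under application resilience
            cpu: 250m
            memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  selector:
    app: hello-web            # pods matching this label are discovered and load balanced
  ports:
  - port: 80
    targetPort: 80
EOF

Because the manifest is the single source of truth, editing it (for example, changing the image tag) and rerunning the same apply command triggers a rolling update, and microk8s kubectl rollout undo deployment/hello-web rolls it back.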

In this chapter, we're going to cover the following main topics:

  • The evolution of containers
  • Kubernetes overview – understanding Kubernetes components
  • Understanding pods
  • Understanding deployments
  • Understanding StatefulSets and DaemonSets
  • Understanding Jobs and CronJobs
  • Understanding services