IoT Edge Computing with MicroK8s

By: Karthikeyan Shanmugam

Overview of this book

Are you facing challenges with developing, deploying, monitoring, clustering, storing, securing, and managing Kubernetes in production environments because you're not familiar with infrastructure technologies? MicroK8s, a zero-ops, lightweight, CNCF-compliant Kubernetes distribution with a small footprint, is the apt solution for you. This book gets you up and running with production-grade, highly available (HA) Kubernetes clusters on MicroK8s using best practices and examples based on IoT and edge computing. Beginning with an introduction to Kubernetes, MicroK8s, and IoT and edge computing architectures, this book shows you how to install MicroK8s, deploy sample apps, and enable add-ons (such as DNS and the dashboard). You'll work with multi-node Kubernetes clusters on Raspberry Pi and networking plugins (such as Calico and Cilium) and implement a service mesh, load balancing with MetalLB and Ingress, and AI/ML workloads on MicroK8s. You'll also understand how to secure containers, monitor infrastructure and apps with Prometheus, Grafana, and the ELK stack, manage storage replication with OpenEBS, resist component failure using an HA cluster, and more, as well as take a sneak peek into future trends. By the end of this book, you'll be able to use MicroK8s to build and implement scenarios for IoT and edge computing workloads in a production environment.
Table of Contents (24 chapters)

Part 1: Foundations of Kubernetes and MicroK8s
Part 2: Kubernetes as the Preferred Platform for IoT and Edge Computing
Part 3: Running Applications on MicroK8s
Part 4: Deploying and Managing Applications on MicroK8s

Frequently Asked Questions About MicroK8s

Communication flow from Pod 3 to Pod 6

Let's look at the communication flow from Pod 3 to Pod 6, both of which are housed on a single node (a short inspection sketch follows the steps):

  1. A packet leaves Pod 3 through its eth3 interface and enters the node via the veth1234 virtual interface.
  2. The packet arrives at the cbr0 bridge, which looks up the address of Pod 6.
  3. The cbr0 bridge redirects the packet to the veth5678 virtual interface.
  4. The packet leaves cbr0 through veth5678 and reaches the Pod 6 network through the eth6 interface.
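
To see which Pods share a node (and therefore the same cbr0 bridge), you can list each Pod's IP address together with the node it runs on. The following is a minimal sketch using the official Kubernetes Python client; the default namespace and the kubeconfig path are assumptions, and on MicroK8s you may first need to export a kubeconfig (for example, with microk8s config).

    # Minimal sketch: print each Pod's IP and host node to see which Pods
    # are co-located and therefore communicate over that node's bridge.
    from kubernetes import client, config

    # Assumes a kubeconfig is available locally, for example one exported
    # with `microk8s config > ~/.kube/config` (path is an assumption).
    config.load_kube_config()

    v1 = client.CoreV1Api()
    for pod in v1.list_namespaced_pod(namespace="default").items:
        print(f"{pod.metadata.name}: Pod IP {pod.status.pod_ip}, "
              f"node {pod.spec.node_name} (node IP {pod.status.host_ip})")

Pods that report the same node name are the ones whose traffic stays on that node's bridge, as in the steps above.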

Kubernetes destroys and rebuilds Pods on a regular basis, so Pod IP addresses cannot be relied upon. As a result, you must use Services, which provide a stable IP address and load balance traffic across a set of Pods. The kube-proxy component running on each node takes care of communication between Pods and Services.
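
As an illustration of the Service abstraction, the following sketch creates a Service with the Kubernetes Python client. The name pod6-service, the app: pod6 label selector, and the ports are hypothetical placeholders rather than values from the book's example.

    # Minimal sketch: create a Service fronting the Pods labelled app=pod6.
    # All names, labels, and ports here are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    service = client.V1Service(
        metadata=client.V1ObjectMeta(name="pod6-service"),
        spec=client.V1ServiceSpec(
            selector={"app": "pod6"},  # selects the Pods backing the Service
            ports=[client.V1ServicePort(port=80, target_port=8080)],
        ),
    )

    created = v1.create_namespaced_service(namespace="default", body=service)
    # The Service keeps a stable ClusterIP even as the backing Pods are
    # destroyed and rebuilt; kube-proxy load balances connections across them.
    print(f"Service {created.metadata.name} has ClusterIP {created.spec.cluster_ip}")

Clients then address the Service (by its ClusterIP or DNS name) rather than individual Pod IPs, which is what allows Kubernetes to replace Pods without breaking callers.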

The following diagram depicts the flow of traffic from a client, Pod 3, to a server, Pod 6, on a separate node. The Kubernetes application programming interface (API) server keeps track of the application...