IoT Edge Computing with MicroK8s

By: Karthikeyan Shanmugam

Overview of this book

Are you facing challenges with developing, deploying, monitoring, clustering, storing, securing, and managing Kubernetes in production environments because you're not familiar with infrastructure technologies? MicroK8s, a zero-ops, lightweight, CNCF-compliant Kubernetes distribution with a small footprint, is the apt solution for you. This book gets you up and running with production-grade, highly available (HA) Kubernetes clusters on MicroK8s, using best practices and examples based on IoT and edge computing. Beginning with an introduction to Kubernetes, MicroK8s, and IoT and edge computing architectures, this book shows you how to install MicroK8s, deploy sample apps, and enable add-ons (such as DNS and the dashboard). You'll work with multi-node Kubernetes clusters on Raspberry Pi and networking plugins (such as Calico and Cilium), and implement service mesh, load balancing with MetalLB and Ingress, and AI/ML workloads on MicroK8s. You'll also learn how to secure containers, monitor infrastructure and apps with Prometheus, Grafana, and the ELK stack, manage storage replication with OpenEBS, resist component failure using an HA cluster, and more, as well as take a sneak peek into future trends. By the end of this book, you'll be able to use MicroK8s to build and implement scenarios for IoT and edge computing workloads in a production environment.
Table of Contents (24 chapters)
Part 1: Foundations of Kubernetes and MicroK8s
Part 2: Kubernetes as the Preferred Platform for IoT and Edge Computing
Part 3: Running Applications on MicroK8s
Part 4: Deploying and Managing Applications on MicroK8s
Frequently Asked Questions About MicroK8s

Scaling the application deployment

You scale a Deployment by changing the number of replicas it specifies. When a Deployment is scaled out, new pods are created and scheduled to Nodes with available resources, bringing the pod count up to the new desired state. Kubernetes also supports autoscaling pods, and it is possible to scale to zero, which terminates all pods in a given Deployment.
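As a quick illustration, the same result can also be achieved imperatively with kubectl. The following is a minimal sketch that assumes a hypothetical Deployment named web-app (not a name used in the book):

kubectl scale deployment/web-app --replicas=4
kubectl get pods    # confirm that four pods are now running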

Running multiple instances of an application requires a way to distribute traffic among them. A Service object provides a built-in load balancer that spreads network traffic across all pods of an exposed Deployment. The Service continuously monitors the running pods through its Endpoints, ensuring that traffic is only directed to pods that are available.
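For illustration, a Deployment can be exposed through a Service from the command line. This sketch again assumes the hypothetical web-app Deployment, listening on port 80:

kubectl expose deployment/web-app --type=NodePort --port=80
kubectl get endpoints web-app    # the pod addresses that receive traffic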

In the following example, we'll use a new YAML file to increase the number of pods in the Deployment. The replicas field is set to 4 in this YAML file, indicating that the Deployment should run four pods:

apiVersion...
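The manifest is truncated above; a minimal sketch of what such a Deployment file could look like, again using the hypothetical web-app name and a placeholder nginx image rather than the book's sample app, is:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                # hypothetical name, not from the book
spec:
  replicas: 4                  # scale the Deployment out to four pods
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: nginx:1.21      # placeholder image for illustration
        ports:
        - containerPort: 80

Applying the file with kubectl apply -f deployment.yaml updates the Deployment to the new replica count.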