Edge Computing Systems with Kubernetes

By: Sergio Méndez

Overview of this book

Edge computing is a way of processing information near the source of the data instead of processing it in data centers in the cloud. In this way, edge computing can reduce latency when data is processed, improving the user experience for real-time data visualization in your applications. Using K3s, a lightweight Kubernetes distribution, and k3OS, a K3s-based Linux distribution, along with other open source cloud native technologies, you can build reliable edge computing systems without spending a lot of money. In this book, you will learn how to design edge computing systems with containers and edge devices, using sensors, GPS modules, Wi-Fi, LoRa communication, and so on. Through the use cases and examples covered in this book, you will also learn how to solve common edge computing problems, such as updating your applications using GitOps and reading data from sensors and storing it in SQL and NoSQL databases. Later chapters will show you how to connect hardware to your edge clusters, make predictions using machine learning, and analyze images with computer vision. All the examples and use cases in this book are designed to run on devices with 64-bit ARM processors, using Raspberry Pi devices as an example. By the end of this book, you will be able to use the content of these chapters as building blocks to create your own edge computing system.
Table of Contents (21 chapters)

Part 1: Edge Computing Basics
Part 2: Cloud Native Applications at the Edge
Part 3: Edge Computing Use Cases in Practice

Using NGINX to expose your applications

It’s time to start using NGINX as your ingress controller by exposing your first application through it. To begin, let’s deploy a simple application by following these steps:

  1. Create a simple Deployment using the nginx image with the following command:
    $ kubectl create deploy myapp --image=nginx
  2. Create a ClusterIP Service for the myapp Deployment:
    $ kubectl expose deploy myapp --type=ClusterIP --port=80
  3. Create an Ingress using the domain 192.168.0.240.nip.io. In this example, we assume that the endpoint for the Ingress is 192.168.0.240, which is the same IP as the load balancer created for the ingress controller. When you open https://192.168.0.240.nip.io in your browser, the page will show the NGINX myapp Deployment that you have already created. nip.io is a wildcard DNS service that resolves any hostname containing an IP address to that IP, so you get a free domain to experiment with your Ingress definitions. Let’...
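The steps above describe the Ingress but stop short of showing its manifest. A minimal sketch of what step 3 could look like is below; the host 192.168.0.240.nip.io comes from the text, while the ingressClassName value, the metadata name, and the path are assumptions based on the Deployment and Service created in steps 1 and 2:

```yaml
# Hypothetical Ingress manifest for the myapp Service created in step 2.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp            # assumed name; any valid name works
spec:
  ingressClassName: nginx  # assumes the NGINX ingress controller's class is "nginx"
  rules:
  - host: 192.168.0.240.nip.io  # nip.io resolves this hostname to 192.168.0.240
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp     # the ClusterIP Service from step 2
            port:
              number: 80    # the port exposed in step 2
```

Save this as a file (for example, myapp-ingress.yaml), apply it with kubectl apply -f myapp-ingress.yaml, and verify it with kubectl get ingress.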