Edge Computing Systems with Kubernetes

By: Sergio Méndez

Overview of this book

Edge computing is a way of processing information near the source of the data instead of processing it in data centers in the cloud. In this way, edge computing can reduce the latency of data processing, improving the user experience of real-time data visualization in your applications. Using K3s, a lightweight Kubernetes distribution, and k3OS, a K3s-based Linux distribution, along with other open source cloud native technologies, you can build reliable edge computing systems without spending a lot of money. In this book, you will learn how to design edge computing systems with containers and edge devices using sensors, GPS modules, Wi-Fi, LoRa communication, and so on. Through the use cases and examples covered in this book, you will also learn how to solve common edge computing problems, such as updating your applications using GitOps and reading data from sensors and storing it in SQL and NoSQL databases. Later chapters will show you how to connect hardware to your edge clusters, make predictions using machine learning, and analyze images with computer vision. All the examples and use cases in this book are designed to run on devices with 64-bit ARM processors, using Raspberry Pi devices as an example. By the end of this book, you will be able to use the content of these chapters as building blocks to create your own edge computing system.
Table of Contents (21 chapters)

Part 1: Edge Computing Basics
Part 2: Cloud Native Applications at the Edge
Part 3: Edge Computing Use Cases in Practice

Creating a volume to persist your data

Before we start deploying our databases, let's create a volume to store data. For this, we have two options. The first is to use a directory inside the server; with this approach, to avoid losing data, your Pods must always be provisioned on the same node where the volume was originally created. If you don't want to depend on which node your Pods run on, you have to choose the second option, which is to use a storage driver. In that case, we will use Longhorn as our storage driver. For now, let's create our storage using a local directory. To do so, follow these steps:

  1. Create a PersistentVolume using the /mnt/data directory in the node to store data:
    $ cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: db-pv-volume
      labels:
        type: local
    spec:
      storageClassName: manual
      ...
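The manifest above is truncated. For reference, a complete version following the standard Kubernetes hostPath PersistentVolume pattern might look like the following sketch; the 10Gi capacity and ReadWriteOnce access mode here are illustrative assumptions, not values taken from the excerpt:

```shell
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: db-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi          # assumed size; adjust to the storage available on your device
  accessModes:
    - ReadWriteOnce        # the volume can be mounted read-write by a single node
  hostPath:
    path: "/mnt/data"      # local directory on the node that will hold the data
EOF
```

Because this PersistentVolume uses hostPath, the data lives on a single node's filesystem, which is why Pods consuming it must be scheduled on that same node, as noted above.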