
TinyML Cookbook

By: Gian Marco Iodice

Overview of this book

This book explores TinyML, a fast-growing field at the intersection of machine learning and embedded systems, which aims to make AI ubiquitous on extremely low-power devices such as microcontrollers. The TinyML Cookbook starts with a practical introduction to this multidisciplinary field to get you up to speed with the fundamentals of deploying intelligent applications on the Arduino Nano 33 BLE Sense and Raspberry Pi Pico. As you progress, you'll tackle various problems that you may encounter while prototyping with microcontrollers, such as controlling the LED state with GPIO and a push-button, supplying power to microcontrollers with batteries, and more. Next, you'll cover recipes relating to temperature, humidity, and the three "V" sensors (Voice, Vision, and Vibration) to gain the skills needed to implement end-to-end smart applications in different scenarios. Later, you'll learn best practices for building tiny models for memory-constrained microcontrollers. Finally, you'll explore two of the most recent technologies, microTVM and the microNPU, which will help you step up your TinyML game. By the end of this book, you'll be well-versed in best practices and machine learning frameworks for developing ML apps easily on microcontrollers, and you'll have a clear understanding of the key aspects to consider during the development phase.

Preparing and testing the quantized TFLite model

As we know from Chapter 3, Building a Weather Station with TensorFlow Lite for Microcontrollers, the model requires quantization to 8 bits to run more efficiently on a microcontroller. However, how do we know whether the model can fit into the Arduino Nano's memory? Furthermore, how do we know whether the quantized model preserves the accuracy of the floating-point variant?
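To make the 8-bit conversion concrete, the following is a minimal NumPy sketch of the affine quantization scheme that TFLite uses for integer models: a float value is mapped to an int8 value through a scale and a zero point, and accuracy loss comes from the rounding step. The scale/zero-point derivation shown here (from the observed min/max range) is one common choice, not the exact calibration the converter performs.

```python
import numpy as np

def quantize_int8(x, scale, zero_point):
    """TFLite-style affine quantization: q = round(x / scale) + zero_point,
    clamped to the int8 range [-128, 127]."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize_int8(q, scale, zero_point):
    """Recover an approximate float value: x ~ (q - zero_point) * scale."""
    return (q.astype(np.float32) - zero_point) * scale

# Example: quantize a small weight tensor.
weights = np.array([-0.51, 0.0, 0.27, 0.98], dtype=np.float32)

# Derive scale/zero-point from the observed range so that
# w_min maps near -128 and w_max near 127.
w_min, w_max = float(weights.min()), float(weights.max())
scale = (w_max - w_min) / 255.0
zero_point = int(np.round(-128 - w_min / scale))

q = quantize_int8(weights, scale, zero_point)
recovered = dequantize_int8(q, scale, zero_point)

# The round-trip error is bounded by half the quantization step.
assert np.all(np.abs(recovered - weights) <= scale / 2 + 1e-6)
```

Comparing `recovered` against `weights` on a validation set is, in essence, what the accuracy check later in this recipe does at the whole-model level.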

These questions will be answered in this recipe, where we will show how to evaluate the program memory utilization and the accuracy of the quantized model generated by the TFLite converter. After analyzing the memory usage and validating the accuracy, we will convert the TFLite model to a C-byte array.
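The C-byte array conversion can be sketched in pure Python, mimicking the output of the common `xxd -i model.tflite` approach. The variable name `g_model` and the 12-bytes-per-line layout are illustrative choices, not the exact output the recipe produces.

```python
def tflite_to_c_array(model_bytes, var_name="g_model"):
    """Render a .tflite flatbuffer as a C source snippet, similar to
    what `xxd -i model.tflite` produces."""
    lines = [f"const unsigned char {var_name}[] = {{"]
    for i in range(0, len(model_bytes), 12):
        chunk = model_bytes[i:i + 12]
        lines.append("  " + ", ".join(f"0x{b:02x}" for b in chunk) + ",")
    lines.append("};")
    lines.append(f"const unsigned int {var_name}_len = {len(model_bytes)};")
    return "\n".join(lines)

# Example with a few placeholder bytes; a real model would be read
# from disk: model_bytes = open("model.tflite", "rb").read()
snippet = tflite_to_c_array(bytes([0x1c, 0x00, 0x00, 0x00,
                                   0x54, 0x46, 0x4c, 0x33]))
print(snippet)
```

Because the array is declared `const`, the compiler can place the model in flash (program memory) rather than SRAM, which is why the model's flash footprint is the figure to check against the target board.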

The following Colab notebook contains the code for this recipe (see the Preparing and testing the quantized TFLite model section):

  • prepare_model.ipynb:

https://github.com/PacktPublishing/TinyML-Cookbook/blob/main/Chapter05/ColabNotebooks/prepare_model.ipynb
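As a back-of-the-envelope check before opening the notebook, you can compare the model's size against the board's program memory. The sketch below assumes the Arduino Nano 33 BLE Sense's nRF52840 (1 MB of flash) and uses an illustrative, hypothetical overhead figure for the sketch and TFLite Micro runtime; the actual overhead depends on your application.

```python
# Assumption: nRF52840 on the Arduino Nano 33 BLE Sense has 1 MB of flash.
FLASH_BYTES = 1024 * 1024

def fits_in_flash(model_size_bytes, sketch_overhead_bytes=256 * 1024):
    """Rough check: does the model plus an assumed sketch/runtime
    overhead fit in program memory?"""
    return model_size_bytes + sketch_overhead_bytes <= FLASH_BYTES

print(fits_in_flash(300 * 1024))  # a 300 KB model leaves ample headroom
print(fits_in_flash(900 * 1024))  # a 900 KB model would not fit
```

Note that this only covers the flash footprint; at runtime the model also needs SRAM for the tensor arena, which is a separate constraint (256 KB of RAM on this board).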

...