The Insider's Guide to Arm Cortex-M Development

By: Zachary Lasiuk, Pareena Verma, Jason Andrews

Overview of this book

Cortex-M has been around since 2004, so why a new book now? With new microcontrollers based on the Cortex-M55 and Cortex-M85 being introduced this year, Cortex-M continues to expand. New software concepts, such as standardized software reuse, have emerged alongside new topics including security and machine learning. Development methodologies have also advanced significantly, with more embedded development taking place in the cloud and increasing levels of automation. Because of these advances, a single engineer can no longer understand an entire project and needs new skills to be successful.

This book provides a unique view of how to navigate and apply the latest concepts in microcontroller development. The book is split into two parts. First, you’ll be guided through how to select the ideal set of hardware, software, and tools for your specific project. Next, you’ll explore how to implement essential topics for modern embedded developers. Throughout the book, there are examples for you to learn from by working with real Cortex-M devices, with all software available on GitHub. You will gain experience with the small Cortex-M0+, the powerful Cortex-M55, and other Cortex-M processors. By the end of this book, you’ll be able to practically apply modern Cortex-M software development concepts.
Table of Contents (15 chapters)
Part 1: Get Set Up
Part 2: Sharpen Your Skills

Leveraging Machine Learning

ML applications have grown to dominate highly visible, enterprise-scale uses today: Google search results; the sorting algorithms of Facebook, Instagram, TikTok, and Twitter; YouTube’s suggested content; the Alexa and Siri voice assistants; internet advertising; and more. These use cases all host their ML models and perform their inferences in the cloud, then show the results to end users on edge devices such as phones, tablets, or smart speakers. This paradigm is beginning to change, with more models being stored (and inferences being run) on the edge devices themselves. The shift to processing at the edge removes the need for transmission to and storage in the cloud, which provides the following benefits:

  • Enhanced security: Reducing attack vectors
  • Enhanced privacy: Reducing the sharing of data
  • Enhanced performance: Reducing application latency

This chapter is intended to give an overview and practical guide for using...