Linux Kernel Programming - Second Edition

By: Kaiwan N. Billimoria

Overview of this book

The 2nd Edition of Linux Kernel Programming is an updated, comprehensive guide for new programmers to the Linux kernel. This book uses the recent 6.1 Long-Term Support (LTS) Linux kernel series, which will be maintained until Dec 2026, and also delves into its many new features. Further, the Civil Infrastructure Platform (CIP) project has pledged to maintain and support this 6.1 Super LTS (SLTS) kernel right until August 2033, keeping this book valid for years to come!

You’ll begin this exciting journey by learning how to build the kernel from source. Then, in a step-by-step manner, you will learn how to write your first kernel module by leveraging the kernel’s powerful Loadable Kernel Module (LKM) framework. With this foundation, you will delve into key kernel internals topics, including Linux kernel architecture, memory management, and CPU (task) scheduling. You’ll finish by understanding the deep issues of concurrency, and gain insight into how they can be addressed with various synchronization/locking technologies (for example, mutexes, spinlocks, atomic/refcount operators, rw-spinlocks, and even lock-free technologies such as per-CPU and RCU).

By the end of this book, you’ll have a much better understanding of the fundamentals of writing Linux kernel and kernel module code that can straight away be used in real-world projects and products.

What this book covers

Chapter 1, Linux Kernel Programming – A Quick Introduction, briefly introduces you to the exciting journey ahead, outlining how the book’s sections cover everything from building the Linux kernel from source to understanding and working with complex topics such as synchronization within the kernel.

Chapter 2, Building the 6.x Linux Kernel from Source – Part 1, is the first of two parts explaining how to build the modern Linux kernel from scratch, from its source code. In this part, you will first be given the necessary background information – the kernel version nomenclature, the different source trees, and the layout of the kernel source. Next, you will be shown in detail how exactly to download a stable vanilla Linux kernel source tree onto your Linux virtual machine (VM). You’ll then take a brief tour of the kernel source code’s layout, getting, in effect, a “10,000-foot view” of the kernel code base. The actual work of extracting and configuring the Linux kernel then follows. Creating and using a custom menu entry for kernel configuration is also explained in detail.

Chapter 3, Building the 6.x Linux Kernel from Source – Part 2, is the second part on performing kernel builds from source code. In this part, you will continue from the previous chapter, now actually building the kernel, installing kernel modules, understanding what exactly the initramfs (initrd) image is and how to generate it, and setting up the bootloader (for x86_64). Also, as a valuable add-on, this chapter explains how to cross-compile the kernel for a typical embedded ARM target (using the popular 64-bit Raspberry Pi 4 as the target device). Several tips and tricks on kernel builds, and even kernel security (hardening) tips, are detailed.

Chapter 4, Writing Your First Kernel Module – Part 1, is the first of two parts that cover a fundamental aspect of Linux kernel development – the LKM framework and how you, the “module user” (the kernel module or device driver programmer), understand and use it. It covers the basics of the Linux kernel architecture and then, in great detail, every step involved in writing a simple “Hello, world” kernel module: compiling it, inserting it, checking it, and removing it from kernel space.

We also cover kernel logging via the ubiquitous printk API in detail. This edition also covers printk indexing and introduces the powerful dynamic debug feature.
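
To give you an early taste of what this looks like, here is a minimal “Hello, world” kernel module sketch – illustrative only, not the book’s actual sample code (the function names and module metadata below are made up):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

MODULE_AUTHOR("<your name here>");
MODULE_DESCRIPTION("A simple hello-world loadable kernel module (illustrative)");
MODULE_LICENSE("Dual MIT/GPL");
MODULE_VERSION("0.1");

static int __init helloworld_lkm_init(void)
{
    /* pr_info() is a convenience wrapper over printk(KERN_INFO ...) */
    pr_info("Hello, world\n");
    /* pr_debug() output is off by default; with CONFIG_DYNAMIC_DEBUG enabled,
     * it can be selectively turned on at runtime via dynamic debug */
    pr_debug("a debug-level message\n");
    return 0;    /* success */
}

static void __exit helloworld_lkm_exit(void)
{
    pr_info("Goodbye, world\n");
}

module_init(helloworld_lkm_init);
module_exit(helloworld_lkm_exit);

You would build it against your kernel’s build tree with a small kbuild Makefile (an obj-m += <module>.o one-liner), insert it with insmod, view the kernel log with dmesg, and remove it with rmmod.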

Chapter 5, Writing Your First Kernel Module – Part 2, is the second part that covers the LKM framework. Here, we begin with something critical – learning how to use a “better” Makefile, which will help you generate more robust code; it provides several targets for code checking, code-style correction, static analysis, and so on (this edition ships an improved version of it). We then show in detail the steps to successfully cross-compile a kernel module for an alternate architecture, how to emulate “library-like” code in the kernel (via both the linking and module-stacking approaches), and how to pass parameters to your kernel module. Additional topics include how to perform auto-loading of modules at boot, important security guidelines, and some information on the kernel coding style and upstream contribution. Several example kernel modules make the learning more interesting.
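
As a flavor of the parameter-passing material, here is a hedged sketch using the kernel’s module_param() facility; the parameter names (debug_level, label) are invented purely for illustration:

#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/moduleparam.h>

static int debug_level;              /* hypothetical integer parameter */
module_param(debug_level, int, 0660);
MODULE_PARM_DESC(debug_level, "Debug verbosity (0=off, 1=verbose)");

static char *label = "default";      /* hypothetical string parameter */
module_param(label, charp, 0440);
MODULE_PARM_DESC(label, "A label string for this instance");

static int __init params_demo_init(void)
{
    pr_info("debug_level=%d label=%s\n", debug_level, label);
    return 0;
}
static void __exit params_demo_exit(void) { }

module_init(params_demo_init);
module_exit(params_demo_exit);
MODULE_LICENSE("Dual MIT/GPL");

You could then load it with, say, insmod ./params_demo.ko debug_level=1 label=test; parameters with non-zero permission bits also show up under /sys/module/<module_name>/parameters/.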

Chapter 6, Kernel Internals Essentials – Processes and Threads, delves into some essential kernel internals topics. We begin with what is meant by the execution of kernel code in process and interrupt contexts, and minimal but required coverage of the process user virtual address space (VAS) layout. This sets the stage for you; you’ll then learn about Linux kernel architecture in more depth, focusing on the organization of process/thread task structures and their corresponding stacks – user- and kernel-mode. We then show you more on the kernel task structure (a “root” data structure), how to practically glean information from it (via the powerful ‘current’ macro), and even how to iterate over various (task) lists (there’s sample code too!). Several kernel modules make the topic come alive.
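
To hint at what those sample modules do, here is a small, illustrative sketch (not the book’s code) that queries current and walks the kernel task list:

#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/sched/signal.h>    /* for_each_process() */
#include <linux/rcupdate.h>

static int __init taskwalk_init(void)
{
    struct task_struct *p;
    int nr = 0;

    /* 'current' points to the task_struct of the task running this code */
    pr_info("insmod'ed by process %s, PID %d\n",
            current->comm, task_pid_nr(current));

    /* Iterate over all processes; hold the RCU read lock while doing so */
    rcu_read_lock();
    for_each_process(p)
        nr++;
    rcu_read_unlock();
    pr_info("number of processes seen: %d\n", nr);

    return 0;
}
static void __exit taskwalk_exit(void) { }

module_init(taskwalk_init);
module_exit(taskwalk_exit);
MODULE_LICENSE("Dual MIT/GPL");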

Chapter 7, Memory Management Internals – Essentials, a key chapter, delves into essential internals of the Linux memory management subsystem, to the level of detail required for the typical module author or driver developer. This coverage is thus necessarily more theoretical in nature; nevertheless, the knowledge gained here is crucial to you, the kernel developer, both for deep understanding and usage of appropriate kernel memory APIs, as well as for performing meaningful debugging at the level of the kernel. We cover the VM split (and how it’s defined on various actual architectures), gaining deep insight into the user VAS as well as the kernel VAS (our procmap utility will prove to be an eye-opener here!). We also cover more on how address translation works. We then briefly delve into the security technique of memory layout randomization ([K]ASLR), and end this chapter with a discussion on physical memory organization within Linux.

Chapter 8, Kernel Memory Allocation for Module Authors – Part 1, gets our hands dirty with the kernel memory allocation (and, obviously, deallocation) APIs. You will first learn about the two allocation “layers” within Linux – the slab allocator, which is layered above the kernel memory allocation “engine,” the page allocator (or buddy system allocator, BSA). We shall briefly learn about the underpinnings of the page allocator algorithm and its “freelist” data structure; this information is valuable when deciding which layer to use. Next, we dive straight into the hands-on work of learning about the usage of these key APIs. The ideas behind the slab allocator (or slab cache) and the primary kernel allocator APIs – the kzalloc()/kfree() pair (and friends) – are covered. Importantly, the size limitations, downsides, and caveats of using these common APIs are covered in a lot of detail as well.
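
As a quick preview of these slab-layer APIs, here is a minimal, illustrative sketch (the drv_ctx structure is made up):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/string.h>

struct drv_ctx {        /* a made-up driver context structure */
    int irq;
    void __iomem *regs;
    char name[32];
};

static struct drv_ctx *ctx;

static int __init slabdemo_init(void)
{
    /* kzalloc() = kmalloc() plus zeroing; always check the return value! */
    ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
    if (!ctx)
        return -ENOMEM;
    strscpy(ctx->name, "demo", sizeof(ctx->name));
    return 0;
}

static void __exit slabdemo_exit(void)
{
    kfree(ctx);    /* kfree(NULL) is safe, so no extra check is needed */
}

module_init(slabdemo_init);
module_exit(slabdemo_exit);
MODULE_LICENSE("Dual MIT/GPL");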

Also, especially useful for driver authors, we cover the kernel’s modern resource-managed memory allocation APIs (the devm_*() routines). Finding where internal fragmentation (wastage) occurs is another interesting area we delve into.
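
Here is a hedged sketch of the devm_*() idea within a hypothetical platform driver (the driver name, private structure, and callbacks are all invented for illustration):

#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/device.h>
#include <linux/slab.h>

struct mydrv_priv {        /* made-up private data structure */
    struct device *dev;
    int some_state;
};

static int mydrv_probe(struct platform_device *pdev)
{
    struct mydrv_priv *priv;

    /* Memory obtained via devm_*() is auto-released when the device
     * detaches (or if probe fails) - no explicit kfree() required */
    priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
    if (!priv)
        return -ENOMEM;

    priv->dev = &pdev->dev;
    platform_set_drvdata(pdev, priv);
    return 0;
}

static int mydrv_remove(struct platform_device *pdev)
{
    return 0;    /* nothing to free here; devres handles it */
}

static struct platform_driver mydrv = {
    .probe  = mydrv_probe,
    .remove = mydrv_remove,
    .driver = { .name = "mydrv-demo" },
};
module_platform_driver(mydrv);
MODULE_LICENSE("Dual MIT/GPL");

The design win: the managed allocation is automatically released when the device is unbound or when probe fails, eliminating a whole class of error-path and leak bugs.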

Chapter 9, Kernel Memory Allocation for Module Authors – Part 2, goes further, in a logical fashion, from the previous chapter. Here, you will learn how to create custom slab caches (useful for high-frequency (de)allocations for, say, your custom driver). Next, you’ll learn how to extract useful information regarding slab caches, as well as understand slab shrinkers (new in this edition). We then move on to understanding and using the vmalloc() API (and friends). Very importantly, having covered many APIs for kernel memory (de)allocation, you will now learn how to pick and choose an appropriate API given the real-world situation you find yourself in. This chapter is rounded off with important coverage of the kernel’s memory reclamation technologies and the dreaded Out Of Memory (OOM) “killer” framework. Understanding OOM and related areas will also lead to a much deeper understanding of how user space memory allocation really works, via the demand paging technique. This edition has more and better coverage of kernel page reclaim, as well as the new MGLRU and DAMON technologies.
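
A small, illustrative sketch of a custom slab cache plus a vmalloc() allocation (the object type, cache name, and sizes are made up):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/types.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

struct my_obj {        /* made-up object type for the cache */
    u64 id;
    char payload[200];
};

static struct kmem_cache *my_cache;
static struct my_obj *obj;
static void *vptr;

static int __init cachedemo_init(void)
{
    /* A custom slab cache: useful for frequent (de)allocations of
     * fixed-size objects */
    my_cache = kmem_cache_create("my_obj_cache", sizeof(struct my_obj),
                                 0, SLAB_HWCACHE_ALIGN, NULL);
    if (!my_cache)
        return -ENOMEM;

    obj = kmem_cache_zalloc(my_cache, GFP_KERNEL);
    if (!obj)
        goto out_destroy;

    /* vmalloc(): virtually contiguous memory - fine for larger buffers
     * that don't need to be physically contiguous */
    vptr = vmalloc(64 * 1024);
    if (!vptr)
        goto out_free_obj;
    return 0;

out_free_obj:
    kmem_cache_free(my_cache, obj);
out_destroy:
    kmem_cache_destroy(my_cache);
    return -ENOMEM;
}

static void __exit cachedemo_exit(void)
{
    vfree(vptr);
    kmem_cache_free(my_cache, obj);
    kmem_cache_destroy(my_cache);
}

module_init(cachedemo_init);
module_exit(cachedemo_exit);
MODULE_LICENSE("Dual MIT/GPL");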

Chapter 10, The CPU Scheduler – Part 1, the first of two chapters on this topic, covers a useful mix of theory and practice regarding CPU (task) scheduling on the Linux OS. The minimal necessary theoretical background on what the KSE (kernel schedulable entity) is – it’s the thread! – and the available kernel scheduling policies are among the topics covered initially. Next, we cover how to visualize the flow of a thread via tools like perf. Sufficient kernel-internal detail on CPU scheduling is then covered for you to understand how task scheduling on the modern Linux OS works. Along the way, you will learn about thread scheduling attributes (policy and real-time priority) as well. This edition includes new coverage of CFS scheduling periods/timeslices, enhanced coverage of exactly how and when the core scheduler code is invoked, and coverage of the new preempt dynamic feature.
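
As a tiny user-space taster (standard POSIX APIs, not the book’s code), this sketch queries the calling process’s scheduling policy and real-time priority, along with the priority range for the SCHED_FIFO policy:

#include <sched.h>
#include <stdio.h>

int main(void)
{
    int policy = sched_getscheduler(0);    /* 0 => the calling process */
    struct sched_param sp;

    sched_getparam(0, &sp);
    printf("policy: %s, rt priority: %d\n",
           policy == SCHED_FIFO  ? "SCHED_FIFO"  :
           policy == SCHED_RR    ? "SCHED_RR"    :
           policy == SCHED_OTHER ? "SCHED_OTHER" : "other",
           sp.sched_priority);
    printf("SCHED_FIFO priority range: %d - %d\n",
           sched_get_priority_min(SCHED_FIFO),
           sched_get_priority_max(SCHED_FIFO));
    return 0;
}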

Chapter 11, The CPU Scheduler – Part 2, the second part on CPU (task) scheduling, continues to cover the topic in more depth. Here, you learn about the CPU affinity mask and how to query/set it, and about controlling scheduling policy and priority on a per-thread basis – such powerful features! We then come to a key and very powerful Linux OS feature – control groups (cgroups). You’ll understand this feature and learn how to practically explore it (a custom script is also built), as well as the role the modern systemd framework plays with cgroups. An interesting example of controlling CPU bandwidth allocation via cgroups v2 is then examined from different angles. Can you run Linux as an RTOS? Indeed you can! We close with an introduction to this other interesting area…
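
Here is a minimal user-space sketch of querying and setting the CPU affinity mask, using the Linux-specific sched_{get,set}affinity() APIs (illustrative only; pinning to CPU 0 is an arbitrary choice):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t mask;

    /* Query the current affinity mask of this process (pid 0 == self) */
    if (sched_getaffinity(0, sizeof(mask), &mask) == 0)
        printf("currently allowed CPUs: %d\n", CPU_COUNT(&mask));

    /* Restrict ourselves to CPU 0 only (illustrative) */
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0)
        perror("sched_setaffinity");
    return 0;
}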

Chapter 12, Kernel Synchronization – Part 1, first covers the key concepts regarding critical sections, atomicity, data races (from the LKMM point of view), what a lock conceptually achieves, and, very importantly, the ‘why’ of all this. We then cover concurrency concerns when working within the Linux kernel; this moves us naturally on to important locking guidelines, what deadlock means, and key approaches to preventing deadlock. Two of the most popular kernel locking technologies – the mutex lock and the spinlock – are then discussed in depth, along with several (simple device-driver-based) code examples. We also point out how to work with spinlocks in interrupt contexts and close the chapter with common locking mistakes (to avoid) and deadlock-avoidance guidelines.
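
For a feel of the two locks, here is an illustrative kernel-module sketch (the data being protected is made up): a mutex for process context that may sleep, and a spinlock (the irqsave variant) for data possibly shared with interrupt context:

#include <linux/init.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/spinlock.h>

static DEFINE_MUTEX(ctx_lock);          /* protects 'shared_count' */
static int shared_count;

static DEFINE_SPINLOCK(irqdata_lock);   /* protects 'irq_events' */
static unsigned long irq_events;

/* Process context that may sleep: a mutex is appropriate */
static void update_count(void)
{
    mutex_lock(&ctx_lock);
    shared_count++;                     /* the critical section */
    mutex_unlock(&ctx_lock);
}

/* Data possibly shared with interrupt context: use the irqsave variant */
static void record_irq_event(void)
{
    unsigned long flags;

    spin_lock_irqsave(&irqdata_lock, flags);
    irq_events++;                       /* the critical section */
    spin_unlock_irqrestore(&irqdata_lock, flags);
}

static int __init lockdemo_init(void)
{
    update_count();
    record_irq_event();
    return 0;
}
static void __exit lockdemo_exit(void) { }

module_init(lockdemo_init);
module_exit(lockdemo_exit);
MODULE_LICENSE("Dual MIT/GPL");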

Chapter 13, Kernel Synchronization – Part 2, continues the journey on kernel synchronization. Here, you’ll learn about key locking optimizations – using lightweight atomic and (the more recent) refcount operators to safely operate on integers, RMW bit operators to safely perform bit ops, and the usage of the reader-writer spinlock over the regular one. What exactly the CPU caches are and the inherent risks involved when using them, such as cache “false sharing,” are then discussed. We then get into another key topic – lock-free programming techniques with an emphasis on per-CPU data and Read Copy Update (RCU) lock-free technologies. Several module examples illustrate the concepts! A critical topic – lock debugging techniques, including the usage of the kernel’s powerful “lockdep” lock validator – is then covered. The chapter is rounded off with a brief look at locking statistics and memory barriers.
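
A brief, illustrative sketch of the lighter-weight primitives – atomic integers, refcount_t, and atomic RMW bit operators (the counter and flag names are invented):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/atomic.h>
#include <linux/refcount.h>
#include <linux/bitops.h>

static atomic_t nr_events = ATOMIC_INIT(0);    /* simple atomic counter */
static refcount_t users = REFCOUNT_INIT(1);    /* object reference count */
static unsigned long state_flags;              /* bitmap of state bits */
#define ST_READY    0
#define ST_BUSY     1

static int __init atomicdemo_init(void)
{
    atomic_inc(&nr_events);        /* atomic RMW; no lock required */
    refcount_inc(&users);          /* saturates rather than silently overflowing */

    set_bit(ST_READY, &state_flags);               /* atomic bit op */
    if (!test_and_set_bit(ST_BUSY, &state_flags))  /* atomic test-and-set */
        pr_info("got BUSY; events=%d users=%u\n",
                atomic_read(&nr_events), refcount_read(&users));
    return 0;
}

static void __exit atomicdemo_exit(void)
{
    clear_bit(ST_BUSY, &state_flags);
    refcount_dec(&users);          /* drop the reference taken in init */
}

module_init(atomicdemo_init);
module_exit(atomicdemo_exit);
MODULE_LICENSE("Dual MIT/GPL");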

Online Chapter – Kernel Workspace Setup

The online chapter, Kernel Workspace Setup, guides you through setting up a full-fledged Linux kernel development workspace (typically, as a fully virtualized guest system). You will learn how to install all required software packages on it. (In this edition, we even provide an Ubuntu-based helper script that auto-installs all required packages.) You will also learn about several other open-source projects that will be useful on your journey to becoming a professional kernel/driver developer. Once this chapter is done, you will be ready to build a Linux kernel as well as to start writing and testing kernel code (via the loadable kernel module framework). In our view, it’s very important for you to actually use this book in a hands-on fashion, trying out and experimenting with code. The best way to learn something is to do so empirically – not taking anyone’s word on anything at all, but by trying it out and experiencing it for yourself.

You can read the chapter online using the following link: http://www.packtpub.com/sites/default/files/downloads/9781803232225_Online_Chapter.pdf.

To get the most out of this book

To get the most out of this book, we expect the following:

  • You need to know your way around a Linux system, on the command line (the shell).
  • You need to know the C programming language.
  • It’s not mandatory, but experience with Linux system programming concepts and technologies will greatly help.

The details on hardware and software requirements, as well as their installation, are covered completely and in depth in the online chapter, Kernel Workspace Setup. It’s critical that you read it in detail and follow the instructions therein.

Also, we have tested all the code in this book (it has its own GitHub repository) on these platforms:

  • x86_64 Ubuntu 22.04 LTS guest OS (running on Oracle VirtualBox 7.0)
  • x86_64 Ubuntu 23.04 guest OS (running on Oracle VirtualBox 7.0)
  • x86_64 Fedora 38 (and 39) on a native (laptop) system
  • ARM Raspberry Pi 4 Model B (64-bit, running both its “distro” kernel as well as our custom 6.1 kernel); lightly tested

We assume that, when running Linux as a guest (VM), the host system is either Windows 10 or later (of course, even Windows 7 will work), a recent Linux distribution (for example, Ubuntu or Fedora), or even macOS.

If you are using the digital version of this book, we advise you to type the code yourself or, much better, access the code via the GitHub repository (link available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.

I strongly recommend that you follow the empirical approach: not taking anyone’s word on anything at all, but trying it out and experiencing it for yourself. Hence, this book gives you many hands-on experiments and kernel code examples that you can and must try out yourself; this will greatly aid you in making real progress, deepening your understanding of the various aspects of Linux kernel development.

Download the example code files

You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/Linux-Kernel-Programming_2E. If there’s an update to the code, it will be updated on the existing GitHub repository. (So be sure to regularly do a “git pull” as well to stay up to date.)

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://packt.link/gbp/9781803232225.

Conventions used

There are a number of text conventions used throughout this book.

CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: “The ioremap() API returns a KVA of the void * type (since it’s an address location).”

A block of code is set as follows:

static int __init miscdrv_init(void)
{
    int ret;
    struct device *dev;

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

#if LINUX_VERSION_CODE < KERNEL_VERSION(5, 8, 0)
    vrx = __vmalloc(42 * PAGE_SIZE, GFP_KERNEL, PAGE_KERNEL_RO);
    if (!vrx) {
        pr_warn("__vmalloc failed\n");
        goto err_out5;
    }
[ … ]

Any command-line input or output is written as follows:

pi@raspberrypi:~ $ sudo cat /proc/iomem

Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in the text like this. Here is an example: “Select System info from the Administration panel.”

Warnings or important notes appear like this.

Tips and tricks appear like this.