Linux Kernel Programming

By: Kaiwan N. Billimoria
Overview of this book

Linux Kernel Programming is a comprehensive introduction for those new to Linux kernel and module development. This easy-to-follow guide will have you up and running with writing kernel code in next-to-no time. This book uses the latest 5.4 Long-Term Support (LTS) Linux kernel, which will be maintained from November 2019 through to December 2025. By working with the 5.4 LTS kernel throughout the book, you can be confident that your knowledge will continue to be valid for years to come. You’ll start the journey by learning how to build the kernel from the source. Next, you’ll write your first kernel module using the powerful Loadable Kernel Module (LKM) framework. The following chapters will cover key kernel internals topics including Linux kernel architecture, memory management, and CPU scheduling. During the course of this book, you’ll delve into the fairly complex topic of concurrency within the kernel, understand the issues it can cause, and learn how they can be addressed with various locking technologies (mutexes, spinlocks, atomic, and refcount operators). You’ll also benefit from more advanced material on cache effects, a primer on lock-free techniques within the kernel, deadlock avoidance (with lockdep), and kernel lock debugging techniques. By the end of this kernel book, you’ll have a detailed understanding of the fundamentals of writing Linux kernel module code for real-world projects and products.
Table of Contents (19 chapters; selected entries)
1. Section 1: The Basics
6. Writing Your First Kernel Module - LKMs Part 2
7. Section 2: Understanding and Working with the Kernel
10. Kernel Memory Allocation for Module Authors - Part 1
11. Kernel Memory Allocation for Module Authors - Part 2
14. Section 3: Delving Deeper
17. About Packt

Preemptible kernels, blocking I/O, and data races

Imagine you're running your kernel module or driver on a Linux kernel that's been configured to be preemptible (that is, CONFIG_PREEMPT is on; we covered this topic in Chapter 10, The CPU Scheduler - Part 1). Consider a process, P1, that is running the driver's read method code in process context, working on the global array. Now, while it's within the critical section (between time t2 and t3), what if the kernel preempts process P1 and context switches to another process, P2, which is just waiting to execute this very code path? That's dangerous; it's again a data race, and it could well happen even on a uniprocessor (UP) system!

Another, somewhat similar scenario (which, again, could occur on either a single-core (UP) or a multicore system): process P1 is running through the critical section of the driver method (between time t2 and t3; again, see Figure 12.5). This...