Linux Kernel Programming

By: Kaiwan N. Billimoria

Overview of this book

Linux Kernel Programming is a comprehensive introduction for those new to Linux kernel and module development. This easy-to-follow guide will have you up and running with writing kernel code in next-to-no time. This book uses the latest 5.4 Long-Term Support (LTS) Linux kernel, which will be maintained from November 2019 through to December 2025. By working with the 5.4 LTS kernel throughout the book, you can be confident that your knowledge will continue to be valid for years to come. You’ll start the journey by learning how to build the kernel from the source. Next, you’ll write your first kernel module using the powerful Loadable Kernel Module (LKM) framework. The following chapters will cover key kernel internals topics including Linux kernel architecture, memory management, and CPU scheduling. During the course of this book, you’ll delve into the fairly complex topic of concurrency within the kernel, understand the issues it can cause, and learn how they can be addressed with various locking technologies (mutexes, spinlocks, atomic, and refcount operators). You’ll also benefit from more advanced material on cache effects, a primer on lock-free techniques within the kernel, deadlock avoidance (with lockdep), and kernel lock debugging techniques. By the end of this kernel book, you’ll have a detailed understanding of the fundamentals of writing Linux kernel module code for real-world projects and products.

Reclaiming memory – a kernel housekeeping task and OOM

As you will be aware, the kernel tries, for optimal performance, to keep the working set of memory pages as high up as possible in the memory pyramid (or hierarchy).

The so-called memory pyramid (or memory hierarchy) on a system consists of (in order, from the smallest but fastest to the largest but slowest): CPU registers, CPU caches (L1, L2, L3, ...), RAM, and swap (a raw disk/flash/SSD partition). In the following discussion, we ignore CPU registers as their size is minuscule.
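To get a concrete feel for the sizes of the two lowest, software-managed levels – RAM and swap – you can query them from user space via the sysinfo(2) system call. Here's a minimal sketch (the file name meminfo_demo.c is just an illustrative choice, not part of the book's code):

/* meminfo_demo.c: print total/free RAM and swap sizes via sysinfo(2) */
#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
    struct sysinfo si;

    if (sysinfo(&si) == -1) {    /* fills si with memory/swap statistics */
        perror("sysinfo");
        return 1;
    }
    /* all the *ram / *swap fields are in units of si.mem_unit bytes */
    printf("RAM : total %lu MB, free %lu MB\n",
           (si.totalram * si.mem_unit) >> 20,
           (si.freeram  * si.mem_unit) >> 20);
    printf("swap: total %lu MB, free %lu MB\n",
           (si.totalswap * si.mem_unit) >> 20,
           (si.freeswap  * si.mem_unit) >> 20);
    return 0;
}

Build and run it with something like gcc meminfo_demo.c -o meminfo_demo && ./meminfo_demo; cross-checking the numbers against the output of free(1) is a useful sanity check.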

So, the processor uses its hardware caches (L1, L2, and so on) to hold the working set of pages. But of course, CPU cache memory is very limited, so it soon runs out, causing memory to spill over into the next hierarchical level – RAM. On modern systems, even many embedded ones, there's quite a bit of RAM; still, if and when the OS does run low on RAM, it spills over (writes out) the memory pages that can no longer fit in RAM into the next level down – the swap area.
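As a kernel module author, you can observe this memory pressure from within the kernel as well: the exported si_meminfo() routine fills a struct sysinfo with the current RAM statistics. The following is a minimal, illustrative sketch (the module name query_mem_lkm is hypothetical, not from the book's code base); loading it under a memory-hungry workload shows the free RAM figure shrinking as reclaim and swapping kick in:

/* query_mem_lkm.c: report total/free RAM from kernel space via si_meminfo() */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/mm.h>        /* si_meminfo() */
#include <linux/sysinfo.h>   /* struct sysinfo */

MODULE_LICENSE("Dual MIT/GPL");
MODULE_DESCRIPTION("demo: query RAM statistics with si_meminfo()");

static int __init query_mem_init(void)
{
    struct sysinfo si;

    si_meminfo(&si);    /* RAM stats; units are si.mem_unit (= PAGE_SIZE) bytes */
    pr_info("RAM: total %lu MB, free %lu MB\n",
            (si.totalram * si.mem_unit) >> 20,
            (si.freeram  * si.mem_unit) >> 20);
    return 0;           /* success */
}

static void __exit query_mem_exit(void)
{
    pr_info("query_mem_lkm: unloaded\n");
}

module_init(query_mem_init);
module_exit(query_mem_exit);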