Linux Kernel Programming Part 2 - Char Device Drivers and Kernel Synchronization

By: Kaiwan N Billimoria

Overview of this book

Linux Kernel Programming Part 2 - Char Device Drivers and Kernel Synchronization is an ideal companion guide to the Linux Kernel Programming book. This book provides a comprehensive introduction for those new to Linux device driver development and will have you up and running with writing misc class character device driver code (on the 5.4 LTS Linux kernel) in next to no time. You'll begin by learning how to write a simple and complete misc class character driver before interfacing your driver with user-mode processes via procfs, sysfs, debugfs, netlink sockets, and ioctl. You'll then find out how to work with hardware I/O memory. The book covers working with hardware interrupts in depth and helps you understand interrupt request (IRQ) allocation, threaded IRQ handlers, tasklets, and softirqs. You'll also explore the practical usage of useful kernel mechanisms: setting up delays, timers, kernel threads, and workqueues. Finally, you'll discover how to deal with the complexity of kernel synchronization with locking technologies (mutexes, spinlocks, and atomic/refcount operators), including more advanced topics such as cache effects, a primer on lock-free techniques, deadlock avoidance (with lockdep), and kernel lock debugging techniques. By the end of this Linux kernel book, you'll have learned the fundamentals of writing Linux character device driver code for real-world projects and products.
Table of Contents (11 chapters)
Section 1: Character Device Driver Basics
User-Kernel Communication Pathways
Handling Hardware Interrupts
Working with Kernel Timers, Threads, and Workqueues
Section 2: Delving Deeper

Understanding the issue with direct access

Now, of course, this hardware memory on the chip, the so-called I/O memory, is not RAM. The Linux kernel does not allow a module or driver author direct access to such hardware I/O memory locations. We already know why: on a modern virtual memory (VM)-based OS, every memory access must go through the Memory Management Unit (MMU) and the paging tables.
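To make this concrete, here is a minimal kernel-space sketch of the idea (not a complete driver; `PHYS_BASE` is a hypothetical peripheral register base address, not a real device, and in practice you would obtain the address from the device's resources):

```
/* A sketch, assuming kernel module context; PHYS_BASE is hypothetical. */
#include <linux/io.h>
#include <linux/module.h>

#define PHYS_BASE  0xfe200000UL   /* hypothetical I/O memory base */
#define REG_LEN    0x100

static void __iomem *regs;

static int __init demo_init(void)
{
	/* WRONG: u32 val = *(u32 *)PHYS_BASE;
	 * This dereferences an unmapped physical address as if it were a
	 * virtual one; the MMU has no mapping for it, so it will fault. */

	/* Correct approach: first map the I/O memory region into the
	 * kernel's virtual address space, then use the accessor helpers. */
	regs = ioremap(PHYS_BASE, REG_LEN);
	if (!regs)
		return -ENOMEM;

	(void)ioread32(regs);   /* read the register at offset 0 */
	return 0;
}

static void __exit demo_exit(void)
{
	iounmap(regs);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```

The key point: `ioremap()` creates a valid kernel virtual mapping onto the peripheral's physical I/O memory, which is the only legitimate way to reach it (the mechanics of `ioremap()` and friends are covered in this chapter).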

Let's quickly summarize the key aspect of what we saw in Chapter 7, Memory Management Internals – Essentials, of the companion guide Linux Kernel Programming: by default, memory is virtualized, which means that all addresses are virtual, not physical (this includes the addresses within the kernel segment, or kernel VAS (Virtual Address Space)). Think of it this way: when a process (or the kernel) accesses a virtual address for reading, writing, or execution, the system has to fetch the memory content at the corresponding physical address. This involves translating the virtual address to the physical address at runtime...