Linux for System Administrators

By: Viorel Rudareanu, Daniil Baturin

Overview of this book

Linux system administration is an essential aspect of maintaining and managing Linux servers within an organization. The role of a Linux system administrator is pivotal in ensuring the smooth functioning and security of these servers, making it a critical job function for any company that relies on Linux infrastructure. This book is a comprehensive guide designed to help you build a solid foundation in Linux system administration. It takes you from the fundamentals of Linux to more advanced topics, encompassing key areas such as Linux system installation, managing user accounts and filesystems, networking fundamentals, and Linux security techniques. Additionally, the book delves into the automation of applications and infrastructure using Chef, enabling you to streamline and optimize your operations. Whether you are a newcomer getting started with Linux or a professional looking to enhance your skills, this hands-on guide’s structured approach and concise explanations make it an effective resource for quickly acquiring and reinforcing Linux system administration skills. With its help, you’ll be able to navigate the world of Linux administration confidently and meet the demands of your role.
Table of Contents (21 chapters)

Part 1: Linux Basics
Part 2: Configuring and Modifying Linux Systems
Part 3: Linux as a Part of a Larger System

The structure of a Linux system

Linux and its multiple distributions often seem complicated to beginners. To clarify this, let’s examine the structure and evolution of operating systems in general.

The Linux kernel and Linux-based operating systems

When people say Linux, they may mean different things. In the narrow sense, Linux is an operating system kernel that was created in the early 90s by Linus Torvalds and is now developed and maintained by a large international community. However, when people say they are using Linux, they usually mean a family of operating systems that use that kernel and usually (but not always) include a set of system libraries and utilities created by the GNU project, which is why some insist that such systems should be referred to as GNU/Linux instead.

Note

The GNU project is a free software project that was launched in 1983 by Richard Stallman. His goal was to create a complete Unix-like operating system composed entirely of free software. GNU stands for GNU’s Not Unix, which reflects the project’s goal of creating a free software alternative to the proprietary Unix operating system.

To fully understand how that unusual situation became possible, let’s briefly discuss the history of operating systems.

Kernel versus user space

The earliest computers had very low computational power, so they would only have one program in their memory at a time, and that program had complete control over the hardware. As computing power increased, it became feasible to have multiple users work on the same computer at the same time and run multiple programs – an idea known as time-sharing or multitasking. Shared computers would run a program known as a supervisor that would allocate resources to end user programs. A set of supervisor programs and system utilities became known as an operating system. The earliest time-sharing systems used cooperative multitasking, where programs were expected to transfer control back to the supervisor on their own. However, if a programming mistake caused a program to run into an endless loop or write data to the wrong memory address, it could make the entire computer hang or corrupt the memory of another program, including the supervisor.

To make multitasking more reliable, newer generations of hardware introduced protection mechanisms that allowed a supervisor program to take control of the CPU back from end user programs and forcibly terminate programs that tried to write something to memory that belonged to other programs or the supervisor itself.

That brought about a separation between the operating system kernel and user space programs. End user programs physically couldn’t control the hardware directly anymore, and neither could they access memory that wasn’t explicitly allocated to them. Those privileges were reserved for the kernel – the code that includes a process scheduler (serving the same purpose as the old supervisor programs) and device drivers.
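
To see this protection in action, consider the following minimal C sketch (the address used is arbitrary and chosen purely for illustration; dereferencing it is undefined behavior in C). The program cannot corrupt memory it doesn’t own – the hardware traps the access and the kernel terminates the process:

    #include <stdio.h>

    int main(void) {
        /* An arbitrary address this process does not own
         * (illustrative only). */
        int *p = (int *)0x1;

        printf("about to write to memory we don't own...\n");
        *p = 42;   /* the MMU traps this; the kernel delivers SIGSEGV */
        printf("never reached\n");
        return 0;
    }

On a typical Linux system, the shell reports something like Segmentation fault – it is the kernel, not the misbehaving program, that decides what happens next.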

Inside a single program, programmers are free to organize their code as they see fit. However, when multiple independently developed components need to work together, there needs to be a well-defined interface between them. Since no one writes directly in machine code anymore, for modern systems, this means two interfaces: the Application Programming Interface (API) and the Application Binary Interface (ABI). The API is what programmers use when writing source code: it defines the function names they can call and the parameter lists for those functions. After compilation, such function calls are translated into executable code that loads parameters into the correct places in memory and transfers control to the code being called – where to load those parameters and how to transfer control are defined by the ABI.
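
As an illustration, consider a trivial (hypothetical) function. Its prototype is the API; the register conventions described in the comment are the ABI – the comment assumes the System V AMD64 ABI used by Linux on x86-64:

    /* API: the prototype tells the programmer what to call
     * and with what parameters. */
    int add(int a, int b);

    /* ABI: how the compiled call actually works. Under the
     * System V AMD64 ABI, the first two integer arguments are
     * passed in the RDI and RSI registers, and the return value
     * comes back in RAX:
     *
     *     mov edi, 2      ; first argument
     *     mov esi, 3      ; second argument
     *     call add        ; transfer control
     *     ; the return value is now in EAX
     */
    int add(int a, int b) { return a + b; }

    int main(void) { return add(2, 3); }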

Interfaces between user space programs and libraries are heavily influenced by the programming language they are written in.

In contrast, interfaces between kernels and user space programs look more similar to hardware interfaces. They are completely independent of the programming language and use software interrupts or dedicated system call CPU instructions rather than the function calls familiar to application programmers.

Note

A system call in Linux is a mechanism that allows user-level processes to request services from the kernel, which is the core of the operating system. These services include accessing hardware devices, managing processes and threads, allocating memory, and performing other low-level tasks that require privileged access.
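
As a minimal sketch of this mechanism (Linux-specific, using glibc’s generic syscall() wrapper), the following program asks the kernel for its own process ID by system call number rather than through the usual library function:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        /* Request a service from the kernel directly by its number;
         * SYS_getpid expands to that number on this platform. */
        long pid = syscall(SYS_getpid);
        printf("the kernel reports PID %ld\n", pid);
        return 0;
    }

Under the hood, the wrapper places the call number and arguments where the kernel expects them and, on x86-64, executes the syscall instruction.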

Those interfaces are also very low-level: for example, if you want to use the write() system call to print a string to standard output, you must always specify how many bytes to write – it has no concept of a string variable or a convention for determining its length.
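
For example, here is a minimal sketch of printing a string with write(): the byte count must be computed and passed explicitly by the caller:

    #include <string.h>
    #include <unistd.h>

    int main(void) {
        const char *msg = "hello, world\n";
        /* write() has no idea what a string is -- we must tell it
         * exactly how many bytes to send to standard output. */
        write(STDOUT_FILENO, msg, strlen(msg));
        return 0;
    }

A higher-level call such as puts("hello, world") hides both the length calculation and the file descriptor.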

For this reason, operating systems include standard libraries for one or more programming languages, which provide an abstraction layer and a stable API for end user programs.

In most operating systems, the kernel, the standard libraries for programming languages, and often the basic system utilities are developed by a single group of people in close collaboration, and all those components are versioned and distributed together. In that case, the kernel interface is usually treated as purely internal and isn’t guaranteed to remain stable.

The Linux kernel and the GNU project

Linux is unique in that it was developed to provide a replacement kernel for an existing user space part of an operating system. Linus Torvalds, the founder of the project, originally developed it to improve on the functionality of MINIX – an intentionally simplified Unix-like operating system meant for instruction rather than production use. He was using the GNU C compiler and user space programs from the GNU project – the project that Richard Stallman started with the goal of creating a complete Unix-like operating system that would be free (as in freedom) and open source, and thus available for everyone to use, improve, and redistribute.

At the time, the GNU project had all the user space parts of an operating system, but not a usable kernel. There were other open source Unix projects, but they were derived from the BSD Unix code base, and in the early 90s, they were targets of lawsuits for alleged copyright infringement. The Linux kernel came at a perfect time since Linus Torvalds and various contributors developed it completely independently and published it under the same license as the GNU project software – the GNU General Public License (GPL). Due to this, a set of GNU software packages, plus the Linux kernel, became a possible basis for a completely open source operating system.

However, Linus Torvalds wasn’t a GNU project member, and the Linux kernel remained independent from the Free Software Foundation (FSF) – it just used a license that the FSF developed for the GNU project, but that any other person could also use, and many did.

Thus, to keep new Linux kernel versions compatible with the GNU C library and the software that relied on that library, developers had to keep the kernel interface stable.

The GNU C library wasn’t developed to work with a specific kernel either – when that project started, there wasn’t a working GNU kernel, and GNU software was usually run on other Unix-like operating systems.

As a result, both Linux and the GNU software can be and still are used together and in different combinations. The GNU user space software set can also be used with the still-experimental GNU Hurd kernel, and other operating systems use it as system or add-on software. For example, Apple macOS used GNU Bash as its system shell for a long time, until it was replaced by zsh.

The stability guarantees of the Linux kernel interface make it attractive to use as a basis for custom operating systems that may be nothing like Unix – some of them just have a single program running on top of the kernel. People have also created alternative standard libraries for different programming languages to use with Linux, such as musl and Bionic for the C programming language, which use more permissive licenses and facilitate static linking. But to understand those licensing differences, we need to discuss the concept of software licenses.