TLS Cryptography In-Depth

By: Dr. Paul Duplys, Dr. Roland Schmitz

Overview of this book

TLS is the most widely used cryptographic protocol today, enabling e-commerce, online banking, and secure online communication. Written by Dr. Paul Duplys, Security, Privacy & Safety Research Lead at Bosch, and Dr. Roland Schmitz, Internet Security Professor at Stuttgart Media University, this book will help you gain a deep understanding of how and why TLS works, how past attacks on TLS were possible, and how the vulnerabilities that enabled them were addressed in the latest TLS version 1.3. By exploring the inner workings of TLS, you'll be able to configure it and use it more securely. Starting with the basic concepts, you'll be led step by step through the world of modern cryptography, guided by the TLS protocol. As you advance, you'll learn the necessary mathematical concepts from scratch. Topics such as public-key cryptography based on elliptic curves will be explained with a view to real-world applications in TLS. With easy-to-understand concepts, you'll find out how secret keys are generated and exchanged in TLS, and how they are used to create a secure channel between a client and a server. By the end of this book, you'll have the knowledge to configure TLS servers securely. Moreover, you'll have gained a deep knowledge of the cryptographic primitives that make up TLS.
Table of Contents (30 chapters)
Part I Getting Started
Part II Shaking Hands
Part III Off the Record
Part IV Bleeding Hearts and Biting Poodles
Bibliography
Index

1.4 Increasing complexity

While it can be argued that modern cryptography does not directly mitigate the problem of increasing complexity (indeed, many crypto-related products and standards suffer from this problem themselves), there is no doubt that increasing complexity is a major cause of security problems. We included the complexity problem in our list of crucial factors for the development of cryptography because cryptography can help limit the damage caused by attacks that excessive complexity makes possible.

Following Moore's law [191], a prediction made in 1965 by Gordon Moore, co-founder of Fairchild Semiconductor and Intel, the number of transistors in an integrated circuit, particularly in a microprocessor, has kept doubling roughly every two years (see Figure 1.5).

Figure 1.5: Increasing complexity of hardware: Transistors. Data is taken from https://github.com/barentsen/tech-progress-data

Semiconductor manufacturers were able to build ever bigger and ever more complex hardware with ever more features. This went so far that in the late 1990s, the Semiconductor Industry Association raised an alarm in the industry, warning that productivity gains in Integrated Circuit (IC) manufacturing were growing faster than the capabilities of the Electronic Design Automation (EDA) tools used for IC design. Entire companies in the EDA field were successfully built on this premise.

Continuously growing hardware resources paved the way for ever more complex software with ever more functionality. Operating systems became more powerful and feature-rich, the number of layers in software stacks kept increasing, and the software libraries and frameworks used by programmers became ever more comprehensive. As predicted by the software evolution laws formulated by the computer scientists Manny Lehman and Les Belady, software exhibited continuing growth and increasing complexity [181] (see also Figure 1.6).

Figure 1.6: Increasing complexity of software: Source Lines of Code (SLOC) in operating systems. Data is taken from https://github.com/barentsen/tech-progress-data

Why should increasing complexity be a problem? According to leading cybersecurity experts Bruce Schneier and Niels Ferguson [65], “Complexity is the worst enemy of security, and it almost always comes in the form of features or options.”

While one might debate whether complexity really is the worst enemy of security, it is certainly true that complex systems, whether realized in hardware or software, tend to be error-prone. Schneier and Ferguson even claim that there are no complex systems that are secure.

Complexity negatively affects security in several ways, including the following:

  • Insufficient testability due to a combinatorial explosion given a large number of features

  • Unanticipated—and unwanted—behavior that emerges from a complex interplay of individual features

  • A high number of implementation bugs and, potentially, architectural flaws due to the sheer size of a system

1.4.1 Complexity versus security – features

The following thought experiment illustrates why complexity arising from the number of features or options is a major security risk. Imagine an IT system, say a small web server, whose configuration consists of 30 binary parameters (that is, each parameter has only two possible values, such as on or off). Such a system has 2^30, that is, more than a billion, possible configurations. To guarantee that the system is secure under all configurations, its developers would need to write and run several billion tests: one for each combination of a relevant attack type (e.g., Denial-of-Service, cross-site scripting, or directory traversal) and a configuration. This is impossible in practice, especially because software changes over time, with new features being added and existing features being refactored. Moreover, real-world IT systems have significantly more than 30 binary parameters. As an example, the NGINX web server has nearly 800 directives for configuring how the NGINX worker processes handle connections.
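To make the scale of the problem concrete, the following Python sketch redoes the arithmetic of the thought experiment. The parameter count and the three attack types come from the text; the throughput of 1,000 automated tests per second is an illustrative assumption, not a measured value.

```python
# Back-of-the-envelope arithmetic for the configuration thought experiment.
NUM_BINARY_PARAMETERS = 30
ATTACK_TYPES = ["denial-of-service", "cross-site scripting", "directory traversal"]

configurations = 2 ** NUM_BINARY_PARAMETERS            # 1,073,741,824
tests_needed = configurations * len(ATTACK_TYPES)      # one test per (configuration, attack type)

print(f"Possible configurations: {configurations:,}")
print(f"Tests for exhaustive coverage: {tests_needed:,}")

# Even at an assumed (and optimistic) rate of 1,000 automated tests per
# second, exhaustive testing would run for more than a month:
print(f"Runtime at 1,000 tests/s: {tests_needed / 1_000 / 86_400:.0f} days")
```

Running the sketch prints roughly 1.07 billion configurations, about 3.2 billion tests, and a runtime on the order of 37 days, and that is before the system changes and the whole exercise has to be repeated.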

1.4.2 Complexity versus security – emergent behavior

A related phenomenon that creates security risks in complex systems is unanticipated emergent behavior. Complex systems tend to have properties that their parts do not have on their own, that is, properties or behaviors that emerge only when the parts interact [186]. Prime examples of security vulnerabilities arising from emergent behavior are time-of-check-to-time-of-use (TOCTOU) attacks exploiting concurrency failures, replay attacks on cryptographic protocols where an attacker reuses an out-of-date message, and side-channel attacks exploiting the unintended interplay of micro-architectural features for speculative execution.
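To illustrate the first of these examples, the following minimal Python sketch shows the TOCTOU pattern in isolation. It is a hypothetical toy program written for this discussion, not code taken from any real system.

```python
# Deliberately simplified sketch of a TOCTOU (time-of-check-to-time-of-use)
# flaw: the check and the use are two separate, non-atomic steps.
import os

def write_report(path: str, data: str) -> None:
    # Time of check: verify that the *current* target of 'path' is writable
    # by the invoking user ...
    if not os.access(path, os.W_OK):
        raise PermissionError(f"{path} is not writable by the caller")

    # ... time of use: by now an attacker may have replaced 'path' with a
    # symlink to a sensitive file. The check no longer describes the object
    # that is actually being opened and overwritten here.
    with open(path, "w") as f:
        f.write(data)

# A more robust variant opens the file first (e.g., with os.open(...,
# os.O_NOFOLLOW)) and inspects the resulting file descriptor with
# os.fstat(), so that check and use refer to the same object.
```

Neither the check nor the write is wrong in isolation; the vulnerability only emerges from their interaction with a concurrent attacker, which is exactly the kind of emergent behavior described above.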

1.4.3 Complexity versus security – bugs

Currently available software engineering processes, methods, and tools do not guarantee error-free software. Various studies on software quality indicate that, on average, 1,000 lines of source code contain 30-80 bugs [174]. In rare cases, extensively tested software has been reported to contain as few as 0.5-3 bugs per 1,000 lines of code [125].

However, even a rate of 0.5-3 bugs per 1,000 lines of code is far from sufficient for most practical software systems. As an example, the Linux kernel 5.11, released in 2021, has around 30 million lines of code, roughly 14% of which (about 4.2 million lines) are considered the “core” part (the arch, kernel, and mm directories). Consequently, even with extensive testing and validation, the Linux 5.11 core code alone would contain approximately 2,100-12,600 bugs.

And this is only the operating system core without any applications. As of July 2022, the popular Apache HTTP server consists of about 1.5 million lines of code. So, even assuming the low rate of 0.5-3 bugs per 1,000 lines of code, adding a web server to the core system would account for another 750-4,500 bugs.
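The estimates in the last two paragraphs follow directly from the quoted line counts and the optimistic bug rate; the short Python sketch below simply redoes that arithmetic and introduces no figures beyond those in the text.

```python
# Bug estimates from the figures quoted above: ~30 million lines in Linux
# 5.11 with ~14% in the "core" part, ~1.5 million lines in the Apache HTTP
# server, and an optimistic rate of 0.5-3 bugs per 1,000 lines of code.
LINUX_5_11_SLOC = 30_000_000
CORE_FRACTION = 0.14
APACHE_SLOC = 1_500_000
BUGS_PER_KLOC = (0.5, 3.0)      # best-case rate for extensively tested software

def bug_range(sloc: float) -> tuple[float, float]:
    low, high = BUGS_PER_KLOC
    return (low * sloc / 1_000, high * sloc / 1_000)

core_sloc = LINUX_5_11_SLOC * CORE_FRACTION
print("Linux 5.11 core:", bug_range(core_sloc))     # ~ (2100, 12600)
print("Apache HTTP server:", bug_range(APACHE_SLOC))  # ~ (750, 4500)
```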

Figure 1.7: Increase of Linux kernel size over the years

What is even more concerning is that the bug rate doesn't seem to improve quickly enough to cope with the increasing size of software. The extensively tested software with 0.5-3 bugs per 1,000 lines of code mentioned above was reported by Myers in 1986 [125]. A study performed by Carnegie Mellon University's CyLab institute in 2004, on the other hand, identified 0.17 bugs per 1,000 lines of code in the Linux 2.6 kernel, a total of 985 bugs, of which 627 were in critical parts of the kernel. Compared with the best rate reported in 1986, that amounts to roughly a threefold reduction of the bug rate over almost 20 years.

Clearly, over that same period from 1986 to 2004, the size of typical software grew by far more than the bug rate improved. As an example, Linux version 1.0, released in 1994, had about 170,000 lines of code. In comparison, Linux kernel 2.6, released in 2003, already had 8.1 million lines of code, approximately a 47-fold increase in size within less than a decade.
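Putting the two trends side by side makes the imbalance explicit. The following sketch only combines the figures quoted in the preceding paragraphs and introduces no data of its own.

```python
# Bug rate improvement versus code size growth, using the figures above.
MYERS_1986_RATES = (0.5, 3.0)   # bugs per 1,000 lines, extensively tested software (1986)
CYLAB_2004_RATE = 0.17          # bugs per 1,000 lines, Linux 2.6 kernel (2004 study)
CYLAB_2004_TOTAL = 985          # bugs found in Linux 2.6 by the CyLab study

LINUX_1_0_SLOC = 170_000        # Linux 1.0, released 1994
LINUX_2_6_SLOC = 8_100_000      # Linux 2.6, released 2003

rate_improvement = MYERS_1986_RATES[0] / CYLAB_2004_RATE    # ~2.9x at best
size_growth = LINUX_2_6_SLOC / LINUX_1_0_SLOC               # ~47.6x

# Implied total for Linux 1.0 at the optimistic 1986 rates: 85-510 bugs.
implied_1_0 = tuple(r * LINUX_1_0_SLOC / 1_000 for r in MYERS_1986_RATES)

print(f"Bug rate improved by at most ~{rate_improvement:.1f}x")
print(f"Code size grew by ~{size_growth:.1f}x")
print(f"Implied bugs in Linux 1.0: {implied_1_0[0]:.0f}-{implied_1_0[1]:.0f}")
print(f"Bugs found in Linux 2.6:   {CYLAB_2004_TOTAL}")
```

Even though the rate per 1,000 lines fell, the absolute number of bugs still grew, because the code base expanded by more than an order of magnitude faster than the rate improved.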

Figure 1.8: Reported security vulnerabilities per year