Clojure High Performance Programming

By Shantanu Kumar

Overview of this book

Clojure is a young, dynamic, functional programming language that runs on the Java Virtual Machine. It is built with performance, pragmatism, and simplicity in mind. Like most general purpose languages, Clojure's features have different performance characteristics that one should know in order to write high performance code.

Clojure High Performance Programming is a practical, to-the-point guide that shows you how to evaluate the performance implications of different Clojure abstractions, learn about their underpinnings, and apply the right approach for optimum performance in real-world programs.

This book discusses the Clojure language in the light of performance factors that you can exploit in your own code. You will also learn about hardware and JVM internals that also impact Clojure's performance. Key features include performance vocabulary, performance analysis, optimization techniques, and how to apply these to your programs. You will also find detailed information on Clojure's concurrency, state-management, and parallelization primitives.

This book is your key to writing high performance Clojure code using the right abstraction, in the right place, using the right technique.

Performance vocabulary


Several technical terms are used heavily in performance engineering. It is important to understand them, as they form the cornerstone of performance-related discussions; collectively, they make up a performance vocabulary. Performance is usually measured in terms of several parameters, each of which has a role to play; such parameters are part of the vocabulary.

Latency

Latency is the time taken by an individual unit of work to complete a task. It does not imply successful completion of the task. Latency is not collective; it is tied to a particular task. If two similar jobs, j1 and j2, take 3 ms and 5 ms respectively, those are their individual latencies; whether j1 and j2 are similar or dissimilar tasks makes no difference to the definition. In many cases, the average latency of similar jobs is used in performance objectives, measurements, and monitoring results.

Latency is an important indicator of the health of a system. A high performance system often thrives on low latency. Higher than normal latency can be caused by load or by a bottleneck. It helps to measure the latency distribution during a load test. For example, if more than 25 percent of similar jobs under a similar load have significantly higher latency than the rest, it may indicate a bottleneck scenario worth investigating.

When a task, j1, consists of smaller tasks, say j2, j3, and j4, the latency of j1 is not necessarily the sum of latencies of each of the j2, j3, and j4 tasks. If any of the subtasks of j1 are concurrent with another, the latency of j1 will turn out to be less than the sum of the latencies of j2, j3, and j4. I/O bound tasks are generally more prone to higher latency. In network systems, latency is commonly based on the roundtrip to another host, including latency from source to destination and then back to source.
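As a minimal sketch of how latency can be observed on the JVM, the following measures the wall-clock time of a single task using System/nanoTime; process-order is a hypothetical stand-in for real work, not an API from this book:

    ;; Measure the latency of one invocation of f, in milliseconds.
    (defn measure-latency-ms [f]
      (let [start (System/nanoTime)]
        (f)
        (/ (- (System/nanoTime) start) 1e6)))

    ;; Hypothetical task that simulates roughly 3 ms of work.
    (defn process-order []
      (Thread/sleep 3))

    ;; (measure-latency-ms process-order) ;=> about 3.0, plus scheduling overhead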

Throughput

Throughput is the number of successful tasks or operations performed in a unit of time. The top-level operations performed in a unit of time are usually of a similar kind, but with potentially different latencies. So, what does throughput tell us about the system? It is the rate at which the system is performing. When you carry out load testing, you can determine the maximum rate at which a particular system performs. However, this is not a conclusive guarantee of the system's overall maximum rate of performance.

Throughput is one of the factors that determine the scalability of a system. The throughput of a higher-level task depends on the capacity to spawn multiple such tasks in parallel, and also on the average latency of those tasks. Throughput should be measured during load testing and performance monitoring to determine the peak measured throughput and the maximum sustained throughput. These factors contribute to the scale and performance of a system.
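A minimal sketch of measuring throughput, assuming a hypothetical do-task function: run it repeatedly for a fixed window and count completions.

    ;; Run f repeatedly for duration-ms and return completed tasks per second.
    (defn throughput-per-sec [f duration-ms]
      (let [deadline (+ (System/currentTimeMillis) duration-ms)]
        (loop [n 0]
          (if (< (System/currentTimeMillis) deadline)
            (do (f) (recur (inc n)))
            (/ (* n 1000.0) duration-ms)))))

    ;; Hypothetical task simulating roughly 5 ms of work.
    (defn do-task []
      (Thread/sleep 5))

    ;; (throughput-per-sec do-task 1000) ;=> single-threaded completions per second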

Bandwidth

Bandwidth is the raw data rate over a communication channel, measured in bits per second: for example, Kbits/sec, Mbits/sec, and so on. This includes not only the payload, but also all the overhead necessary to carry out the communication. An uppercase B, as in KB/sec, denotes bytes, that is, kilobytes per second. Bandwidth is often compared to throughput: while bandwidth is the raw capacity, throughput for the same system is the rate of successful task completion, which usually involves a roundtrip. Note that throughput is for an operation that involves latency. To achieve maximum throughput for a given bandwidth, the communication/protocol overhead and operational latency should be minimal.

For storage systems (such as hard disks and solid-state drives), the predominant performance measure is IOPS (input/output operations per second), which, multiplied by the transfer size, gives a data rate in bytes per second (or MB/sec, GB/sec, and so on). IOPS figures are usually derived separately for sequential and random workloads, and for read and write operations.
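As a worked example of that arithmetic (the figures below are illustrative, not vendor specifications):

    ;; IOPS-to-bandwidth arithmetic with made-up numbers.
    (let [iops          10000        ; measured I/O operations per second
          transfer-size (* 4 1024)]  ; 4 KB per operation, in bytes
      (/ (* iops transfer-size)
         (* 1024.0 1024.0)))         ;=> 39.0625 MB/sec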

Mapping the throughput of one system to the bandwidth of another may mean dealing with an impedance mismatch between the two. For example, an order processing system may transact with a database on disk and post results over the network to an external system.

Depending on the bandwidth of the disk subsystem, the bandwidth of the network, and the execution model of order processing, the throughput may depend not only on those bandwidths, but also on how loaded they currently are. Parallelism and pipelining are common ways to increase throughput over a given bandwidth.

Baseline and benchmark

A performance baseline, or simply baseline, is a reference point comprising measurements of well-characterized and well-understood performance parameters for a known configuration. The baseline is used to compare measurements of the same parameters, which we may benchmark later under another configuration. For example, "throughput distribution over 10 minutes at a load of 50 concurrent threads" is one such performance parameter we can use for baselining and benchmarking. A baseline is recorded together with the hardware, network, OS, and system configuration.

A performance benchmark, or simply benchmark, is the recording of performance parameter measurements under various test conditions. A benchmark can be composed as a performance test suite. A benchmark may collect small to large amounts of data, and may run for varying durations depending on the use cases, scenarios, and environment characteristics.

A baseline is the result of a benchmark that was conducted at one point in time; a benchmark, however, is independent of any baseline.
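As an illustration, the Criterium library, a widely used Clojure benchmarking tool, collects such measurements with JIT warm-up runs and statistical analysis (the dependency coordinates below are an assumption; check for a current version):

    ;; Add [criterium "0.4.6"] to your project dependencies first.
    (require '[criterium.core :as crit])

    ;; quick-bench runs the expression many times after warming up the JIT,
    ;; then prints the mean execution time with statistical bounds.
    (crit/quick-bench (reduce + (range 10000)))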

Profiling

Performance profiling, or simply profiling, is the analysis of the execution of a program at runtime. A program can perform poorly for a variety of reasons; a profiler can analyze the program and report the execution time of its various parts. It is possible to manually interleave statements in a program to print the execution time of blocks of code, but this gets very cumbersome as you iteratively refine the code. A profiler is of great assistance to the developer.
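Clojure's built-in time macro is the simplest form of this manual approach; a hand-rolled variant for labelled blocks (a hypothetical helper, sketched below) shows why interleaving such timers everywhere quickly becomes unwieldy:

    ;; Built-in: prints elapsed time and returns the expression's value.
    (time (reduce + (range 1000000)))

    ;; A hypothetical labelled-timer macro for manual instrumentation.
    (defmacro timed [label & body]
      `(let [start# (System/nanoTime)
             ret#   (do ~@body)]
         (println ~label "took" (/ (- (System/nanoTime) start#) 1e6) "ms")
         ret#))

    ;; (timed "sum" (reduce + (range 1000000)))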

Going by how they work, profilers are of three major kinds: instrumenting, sampling, and event-based. Event-based profilers work only for selected language platforms and provide a good balance between overhead and results; for example, Java supports event-based profiling via the JVMTI interface. Instrumenting profilers modify code at either compile time or runtime to inject performance counters. They are intrusive by nature and add significant performance overhead; however, they let you profile selected regions of code very precisely. Sampling profilers pause the runtime at 'sampling intervals' and collect its state. By collecting enough samples, they learn where the program spends most of its time. For example, at a sampling interval of 1 ms, a profiler would collect 1,000 samples per second. A sampling profiler also works for code that executes faster than the sampling interval, as the frequency of pausing and sampling is proportional to the overall execution time of any code.

Profiling is not meant only for measuring execution time. Capable profilers can provide a view of memory analysis, garbage collection, threads, and so on. A combination of such tools is helpful to find memory leaks, garbage collection issues, and so on.

Performance optimization

Simply put, optimization is minimizing a program's resource consumption after performance analysis. The symptoms of a poorly performing program are observed in terms of high latency, low throughput, unresponsiveness, instability, high memory consumption, and high CPU consumption. During performance analysis, you may profile the program in order to identify bottlenecks and tune the performance incrementally by observing performance parameters.

Better, more suitable algorithms are an all-round good way to optimize code. CPU-bound code can be optimized with computationally cheaper operations. Cache-bound code can try to use fewer memory lookups to keep a good hit ratio. Memory-bound code can adapt its memory usage and use conservative in-memory data representations. I/O-bound code can attempt to serialize as little data as possible and can batch operations to make the communication less chatty. Parallelism and distribution are other good overall ways to increase performance.
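As a small illustration of the batching idea, assuming hypothetical write! and write-batch! functions where each call pays a fixed roundtrip cost:

    ;; Chatty: one roundtrip per item.
    (defn save-all-chatty [write! items]
      (doseq [item items]
        (write! item)))

    ;; Batched: one roundtrip per 100 items.
    (defn save-all-batched [write-batch! items]
      (doseq [batch (partition-all 100 items)]
        (write-batch! batch)))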

Concurrency and parallelism

Most of the computer hardware and operating systems that we use today provide concurrency. On the x86 architecture, hardware support for concurrency can be traced as far back as the 80286 chip. Concurrency is the execution of more than one process during overlapping time periods on the same computer. On older processors, concurrency was implemented by the operating system kernel using context switching. When concurrent parts are executed in parallel by the hardware, instead of merely being switched in and out of context, it is called parallelism. Parallelism is a property of the hardware, though the software stack must support it in order for you to leverage it in your programs. You must write your program in a concurrent way to exploit the parallelism features of the hardware.

While concurrency is a natural way to exploit hardware parallelism and speed up operations, it is worth bearing in mind that significantly higher concurrency than the parallelism your hardware can support is likely to cause tasks to be scheduled across different processor cores, thereby reducing branch prediction accuracy and increasing cache misses.

Low-level constructs such as processes, threads, mutexes, semaphores, locks, shared memory, and inter-process/inter-thread communication are used for concurrency. The JVM has excellent support for these concurrency primitives and for inter-thread communication. Clojure builds upon the JVM features to provide both low-level and higher-level concurrency primitives that we will discuss in the concurrency chapter.
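As a small taste of those higher-level primitives, pmap is a parallel variant of map; under the assumption that each step is expensive enough to outweigh the coordination overhead, it spreads the work across available cores:

    ;; A deliberately slow step, simulating roughly 10 ms of work per element.
    (defn slow-inc [x]
      (Thread/sleep 10)
      (inc x))

    ;; doall forces the lazy sequences so the timings are meaningful.
    (time (doall (map  slow-inc (range 32))))  ; sequential: roughly 320 ms
    (time (doall (pmap slow-inc (range 32))))  ; parallel: a fraction of that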

Resource utilization

Resource utilization is the measure of the server, network, and storage resources consumed by an application. Resources include CPU, memory, disk I/O, network I/O, and so on. The application can be analyzed in terms of CPU-bound, memory-bound, cache-bound, and I/O-bound tasks. Resource utilization can be derived by benchmarking, that is, by measuring the utilization at a given throughput.
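The JVM exposes some of these figures directly; a minimal sketch of querying them from Clojure:

    ;; Query basic resource figures from the running JVM.
    (let [rt (Runtime/getRuntime)]
      {:available-processors (.availableProcessors rt)
       :total-memory-mb      (quot (.totalMemory rt) (* 1024 1024))
       :free-memory-mb       (quot (.freeMemory rt)  (* 1024 1024))
       :max-memory-mb        (quot (.maxMemory rt)   (* 1024 1024))})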

Workload

Workload is the quantification of how much work is at hand for the application to carry out. It is measured in terms of the total number of users, concurrent active users, transaction volume, and data volume. Processing a workload should take the load conditions into account, such as how much data the database currently holds, how full the message queues are, and the backlog of I/O tasks after which the new load will be processed.