Mastering C# Concurrency
Overview of this book

Starting with the traditional approach to concurrency, you will learn how to write multithreaded concurrent programs and compose ways that won't require locking. You will explore the concepts of parallelism granularity and fine-grained and coarse-grained parallel tasks, choosing a concurrent program structure and parallelizing the workload optimally. You will also learn how to use the Task Parallel Library, cancellations, and timeouts, and how to handle errors. You will learn how to choose the appropriate data structure for a specific parallel algorithm to achieve scalability and performance. Further, you'll learn about server scalability, asynchronous I/O, and thread pools, and write responsive traditional Windows and Windows Store applications. By the end of the book, you will be able to diagnose and resolve the typical problems that occur in multithreaded applications.

Chapter 3. Understanding Parallelism Granularity

One of the most essential tasks when writing parallel code is to divide your program into parts that will run in parallel and communicate with each other. Sometimes a problem naturally splits into separate pieces, but usually it is up to you to choose which parts to make parallel. Should we use a small number of large tasks, many small tasks, or perhaps large and small tasks at the same time?
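As a hypothetical illustration of this choice, the sketch below runs the same workload two ways: a coarse-grained version with one task per logical core and a fine-grained version with one task per item. The ProcessItem method and the item count are assumptions made for this sketch, not code from the book.

using System;
using System.Linq;
using System.Threading.Tasks;

public static class GranularityDemo
{
    // Stand-in for some CPU-bound work on a single item (an assumption for this sketch).
    static double ProcessItem(int item)
    {
        return Math.Sqrt(item) * Math.Sin(item);
    }

    public static void Main()
    {
        int[] items = Enumerable.Range(0, 100000).ToArray();

        // Coarse-grained: one task per logical core, each processing a contiguous chunk.
        int chunks = Environment.ProcessorCount;
        int chunkSize = (items.Length + chunks - 1) / chunks;
        Task<double>[] coarse = Enumerable.Range(0, chunks)
            .Select(c => Task.Run(() =>
                items.Skip(c * chunkSize).Take(chunkSize).Sum(ProcessItem)))
            .ToArray();
        Task.WaitAll(coarse);

        // Fine-grained: one task per item; the same work, but far more scheduling overhead.
        Task<double>[] fine = items
            .Select(i => Task.Run(() => ProcessItem(i)))
            .ToArray();
        Task.WaitAll(fine);

        Console.WriteLine("Coarse-grained sum: " + coarse.Sum(t => t.Result));
        Console.WriteLine("Fine-grained sum:   " + fine.Sum(t => t.Result));
    }
}

Both versions compute the same result; the difference lies entirely in how many tasks the scheduler has to create and dispatch.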

Theoretically speaking, it does not matter. An ideal computational device would have no overhead for creating worker threads and distributing work among any number of them. However, on a real CPU this overhead is significant, and it is very important to take it into account. Splitting your program into parallel parts the right way is the key to writing efficient, fast programs. In this chapter, we are going to review this problem in detail.
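To make this overhead concrete, here is a rough, hypothetical benchmark (not taken from the book) that times the same summation done with one Task per element and with range partitioning via Partitioner.Create and Parallel.ForEach. On a typical machine, the per-element version is far slower because scheduling costs dwarf each tiny unit of work.

using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

public static class OverheadDemo
{
    public static void Main()
    {
        const int n = 100000;
        double[] data = Enumerable.Range(0, n).Select(i => (double)i).ToArray();

        // Fine-grained: one task per element. Scheduling overhead dominates the tiny work items.
        var sw = Stopwatch.StartNew();
        var perElement = data.Select(x => Task.Run(() => Math.Sqrt(x))).ToArray();
        Task.WaitAll(perElement);
        Console.WriteLine("One task per element: " + sw.ElapsedMilliseconds + " ms");

        // Coarse-grained: the partitioner hands each worker a large index range,
        // so the per-item cost is just the loop body.
        sw.Restart();
        object sync = new object();
        double total = 0;
        Parallel.ForEach(
            Partitioner.Create(0, n),
            () => 0.0,                              // per-worker running total
            (range, state, local) =>
            {
                for (int i = range.Item1; i < range.Item2; i++)
                    local += Math.Sqrt(data[i]);
                return local;
            },
            local => { lock (sync) { total += local; } });  // combine once per worker
        Console.WriteLine("Range-partitioned: " + sw.ElapsedMilliseconds + " ms, sum = " + total);
    }
}

The partitioned version also uses the localInit/localFinally overload of Parallel.ForEach, so each worker accumulates its own running total and synchronizes only once at the end rather than once per item.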