Julia 1.0 High Performance - Second Edition

By: Avik Sengupta

Overview of this book

Julia is a high-level, high-performance dynamic programming language for numerical computing. If you want to understand how to avoid bottlenecks and design your programs for the highest possible performance, then this book is for you. The book starts with how Julia uses type information to achieve its performance goals, and how to use multiple dispatch to help the compiler emit high-performance machine code. After that, you will learn how to analyze Julia programs and identify issues with time and memory consumption. We teach you how to use Julia's typing facilities accurately to write high-performance code and describe how the Julia compiler uses type information to create fast machine code. Moving ahead, you'll master design constraints, learn how to use the power of the GPU in your Julia code, and compile Julia code directly to the GPU. Then, you'll learn how tasks and asynchronous IO help you create responsive programs, and how to use shared-memory multithreading in Julia. Toward the end, you will get a flavor of Julia's distributed computing capabilities and learn how to run Julia programs on a large distributed cluster. By the end of this book, you will have the ability to build large-scale, high-performance Julia applications, design systems with a focus on speed, and improve the performance of existing programs.

Programming parallel tasks

The low-level facilities examined previously are quite flexible and very powerful. However, they leave a lot to be desired in terms of ease of use. Julia therefore provides a set of higher-level programming tools that make it much easier to write parallel code.

The @everywhere macro

The @everywhere macro is used to run the same code on all the processes in the cluster. This is useful for setting up the environment before running the actual parallel computation. The following code installs the Distributions package on the master process and then loads it on all the nodes simultaneously:

julia> using Pkg

julia> Pkg.add("Distributions")
...

julia> using Distributions

julia> @everywhere using Distributions
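Once the package is loaded on every process, each worker can use it independently. The following is a minimal sketch, assuming the Distributions package is already installed and that we start two local worker processes with addprocs; the lambda passed to remotecall_fetch is an illustrative choice, not from the text:

```julia
using Distributed

# Start two worker processes on the local machine (adjust as needed)
addprocs(2)

# Load the package on the master process and on all workers
@everywhere using Distributions

# Each worker draws its own sample from a standard normal distribution;
# remotecall_fetch runs the closure on worker w and returns the result
results = [remotecall_fetch(() -> rand(Normal(0, 1)), w) for w in workers()]
```

Because `@everywhere` evaluates its expression on every process, the `Normal` type is defined on each worker, so the remote calls succeed without any further setup.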