Julia 1.0 High Performance - Second Edition

By: Avik Sengupta

Overview of this book

Julia is a high-level, high-performance dynamic programming language for numerical computing. If you want to understand how to avoid bottlenecks and design your programs for the highest possible performance, then this book is for you. The book starts with how Julia uses type information to achieve its performance goals, and how to use multiple dispatch to help the compiler emit high-performance machine code. After that, you will learn how to analyze Julia programs and identify issues with time and memory consumption. We teach you how to use Julia's typing facilities accurately to write high-performance code and describe how the Julia compiler uses type information to create fast machine code. Moving ahead, you'll master design constraints, learn how to harness the power of the GPU in your Julia code, and compile Julia code directly for the GPU. Then, you'll learn how tasks and asynchronous IO help you create responsive programs, and how to use shared-memory multithreading in Julia. Toward the end, you will get a flavor of Julia's distributed computing capabilities and how to run Julia programs on a large distributed cluster. By the end of this book, you will have the ability to build large-scale, high-performance Julia applications, design systems with a focus on speed, and improve the performance of existing programs.

ArrayFire

The ArrayFire library provides a high-level abstraction that makes massively parallel programs much simpler to write. The underlying library is written in C++, and the Julia wrapper provides an array abstraction that allows idiomatic Julia programs to be executed on the GPU.

To begin, download the ArrayFire library for your operating system from https://arrayfire.com/download/ and install it on your GPU machine. Once the library is installed, the Julia wrapper provides an array type that copies data from the CPU to the GPU and performs operations on that data on the GPU cores. It really is that simple.
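
The Julia wrapper itself is a package. If it is not already present, it can usually be added with Julia's built-in package manager; the snippet below assumes the registered package name ArrayFire:

julia> using Pkg

julia> Pkg.add("ArrayFire")   # installs the Julia wrapper; the native ArrayFire library must already be installed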

In the following example, we create a random 2D matrix, copy it to the GPU, multiply it by itself, and then copy the result back to the main memory:

julia> using ArrayFire

julia> a = rand(1000, 1000)
1000×1000 Array{Float64,2}:
...

julia> ad = AFArray(a);    # copy the data from main memory to the GPU

julia> bd = ad * ad;       # the matrix multiplication is executed on the GPU

julia> b = Array(bd);      # copy the result back from the GPU to main memory
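
Because an AFArray behaves much like a regular Julia array, other idiomatic operations, such as broadcasting and reductions, can also run on the GPU. The following is a minimal sketch, reusing the ad array from the previous example; the exact set of supported functions depends on your ArrayFire version:

julia> c = sin.(ad);    # elementwise operation, performed on the GPU

julia> total = sum(c)   # reduction; the scalar result is returned to the CPU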