How Julia works

(You can safely skip this section on a first reading.) Julia uses the LLVM compiler framework for just-in-time (JIT) generation of machine code. The first time you run a Julia function, it is parsed and the types of its arguments are inferred. The JIT compiler then generates LLVM code, which is optimized and compiled down to native code. The second time you run the function, this native code is already available and is simply called again. This is why the second time you call a function with arguments of a specific type, it takes much less time to run than the first time (keep this in mind when benchmarking Julia code).
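
A minimal way to observe this effect yourself is with the @time macro. The function g below is purely hypothetical and used only for illustration; the exact timings depend on your machine, so only the commands are shown:

julia> g(n) = sum(sqrt(i) for i in 1:n)    # a hypothetical function, used only for illustration
g (generic function with 1 method)

julia> @time g(10^7)    # first call: the reported time includes JIT compilation of g
julia> @time g(10^7)    # second call: reuses the cached native code, so it runs much faster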

This generated code can be inspected. Suppose, for example, that we have defined the function f(x) = 2x + 5 in a REPL session. Julia responds with the message f (generic function with 1 method); the code is dynamic because we didn't have to specify the type of x or of f's return value. Functions are generic by default in Julia: they are ready to work with different data types for their arguments.
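
For reference, this is what the definition and Julia's response look like in the REPL; the examples that follow assume this function has been defined:

julia> f(x) = 2x + 5
f (generic function with 1 method)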

The code_llvm function can be used to see the intermediate representation (IR) generated by LLVM before it is compiled down to native code; the exact output can differ between Julia versions and target platforms. For example, on the Intel x64 platform, if the x argument is of type Int64, it looks as follows:

julia> code_llvm(f, (Int64,)) 
 
; Function f 
; Location: REPL[7]:1 
; Function Attrs: uwtable 
define i64 @julia_f_33833(i64) #0 { 
top: 
; Function *; { 
; Location: int.jl:54 
  %1 = shl i64 %0, 1 
;} 
; Function +; { 
; Location: int.jl:53 
  %2 = add i64 %1, 5 
;} 
  ret i64 %2 
} 
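
If you prefer, the @code_llvm macro (from the InteractiveUtils standard library, which is loaded automatically in the REPL) produces the same listing; instead of a tuple of types, it derives the argument types from an example call:

julia> @code_llvm f(10)    # prints the same LLVM code as the call above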

The code_native function can be used to see the assembly code that was generated for the same type of x:

julia> code_native(f, (Int64,)) 
 
        .text 
; Function f { 
; Location: REPL[7]:1 
        pushq   %rbp 
        movq    %rsp, %rbp 
; Function +; { 
; Location: int.jl:53 
        leaq    5(%rcx,%rcx), %rax 
;} 
        popq    %rbp 
        retq 
        nopl    (%rax,%rax) 
;} 

Compare this with the code generated when x is of type Float64:

julia> code_native(f, (Float64,)) 
 
        .text 
; Function f { 
; Location: REPL[7]:1 
        pushq   %rbp 
        movq    %rsp, %rbp 
; Function *; { 
; Location: promotion.jl:314 
; Function *; { 
; Location: float.jl:399 
        vaddsd  %xmm0, %xmm0, %xmm0 
        movabsq $424735072, %rax        # imm = 0x1950F160 
;}} 
; Function +; { 
; Location: promotion.jl:313 
; Function +; { 
; Location: float.jl:395 
        vaddsd  (%rax), %xmm0, %xmm0 
;}} 
        popq    %rbp 
        retq 
        nopl    (%rax,%rax) 
;} 

Julia code is fast because the compiler generates a specialized version of each function for each combination of concrete argument types. Julia also implements automatic memory management: the user doesn't have to worry about allocating memory and keeping track of it for specific objects. Objects that are no longer needed are deleted automatically (and the memory associated with them is reclaimed) by a garbage collector (GC).
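
To see specialization in action, call f with arguments of different types; each concrete argument type gets its own compiled method instance (the results below are simply the values of 2x + 5):

julia> f(3)      # uses the Int64 specialization shown earlier
11

julia> f(3.0)    # triggers compilation of a separate Float64 specialization
11.0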

The GC runs automatically while your program executes, so exactly when a specific object is garbage collected is unpredictable. The collector implements an incremental mark-and-sweep algorithm. You can trigger a collection yourself by calling GC.gc(), or temporarily disable the collector by calling GC.enable(false) and re-enable it later with GC.enable(true).
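
A minimal sketch of manual GC control; GC.enable returns the previous state, so the return values shown assume the collector was enabled to begin with:

julia> GC.gc()            # force a collection right now

julia> GC.enable(false)   # disable automatic collection; returns the previous state
true

julia> GC.enable(true)    # re-enable it
false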

The standard library is largely implemented in Julia itself. The I/O functions rely on the libuv library for efficient, platform-independent I/O. The core of the standard library lives in a module called Base, which is loaded automatically when Julia starts.
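
You can check for yourself that even basic integer arithmetic is written in Julia by asking which method a call dispatches to, using the @which macro (also from InteractiveUtils); the exact signature and line number shown depend on your Julia version:

julia> @which 2 * 3
*(x::T, y::T) where T<:Union{Int128, Int16, Int32, Int64, Int8, UInt128, UInt16, UInt32, UInt64, UInt8} in Base at int.jl:54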