
Julia 1.0 Programming Complete Reference Guide

By : Ivo Balbaert, Adrian Salceanu

Overview of this book

Julia offers the high productivity and ease of use of Python and R with the lightning-fast speed of C++. There has never been a better time to learn this language, thanks to its large-scale adoption across a wide range of domains, including fintech, biotech, and artificial intelligence (AI). You will begin by learning how to set up a running Julia platform, before exploring its various built-in types. This Learning Path walks you through two important collection types: arrays and matrices. You'll learn how type conversions and promotions work, and in later chapters you'll study how Julia interacts with operating systems and other languages. You'll also learn about the use of macros, what makes Julia suitable for numerical and scientific computing, and how to run external programs. Once you have grasped the basics, this Learning Path goes on to show how to analyze the Iris dataset using DataFrames. While building a web scraper and a web app, you'll explore the use of functions, methods, and multiple dispatch. In the final chapters, you'll delve into machine learning, where you'll build a book recommender system. By the end of this Learning Path, you'll be well versed in Julia and have the skills you need to leverage its high speed and efficiency for your applications.

This Learning Path includes content from the following Packt products:
• Julia 1.0 Programming - Second Edition by Ivo Balbaert
• Julia Programming Projects by Adrian Salceanu
Table of Contents (18 chapters)

Summary

Web scraping is a key component of data mining, and Julia provides a powerful toolbox for handling these tasks. In this chapter, we covered the fundamentals of building a web crawler. We learned how to request web pages with a Julia web client and read the responses, how to work with Julia's powerful Dict data structure to read HTTP header information, how to make our software more resilient by handling errors, how to organize our code better by writing and documenting functions, and how to use conditional logic to make decisions.
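As a recap, those techniques can be sketched in a few lines of plain Julia. The `parseheaders` and `pagesize` helpers below are illustrative names rather than the book's exact code, and the header pairs are hard-coded here; in a real crawler they would come from the response returned by a web client such as HTTP.jl.

```julia
"""
    parseheaders(pairs)

Collect response header pairs into a `Dict` for convenient lookup.
In a real crawler the pairs would come from the web client's response.
"""
parseheaders(pairs) = Dict(pairs)

"""
    pagesize(headers)

Return the size reported by the `Content-Length` header,
or 0 if the header is missing or malformed.
"""
function pagesize(headers)
    try
        return parse(Int, get(headers, "Content-Length", "0"))
    catch
        return 0  # malformed value: recover gracefully instead of crashing
    end
end

# Conditional logic driven by the parsed headers
headers = parseheaders(["Content-Type" => "text/html", "Content-Length" => "1024"])
if pagesize(headers) > 0
    println("Received a non-empty page")
end
```

Collecting the headers into a `Dict` pays off as the crawler grows: any later decision (content type checks, redirect handling, size limits) becomes a simple keyed lookup with a default.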

Armed with this knowledge, we built the first version of our web crawler. In the next chapter, we will improve it and use it to extract the data for our upcoming Wiki game. In the process, we'll dive deeper into the language, learning about types, methods, and modules, and how to interact with relational databases.

...