
Learning about hybrid recommender systems


Model-based recommenders offer some clear advantages. As mentioned already, scalability is one of the most important: the model is usually much smaller than the initial dataset, so even for very large data samples it remains compact enough to be used efficiently. Another benefit is speed, since querying the model typically takes considerably less time than querying the whole dataset.
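To make the size and speed argument concrete, here is a minimal, self-contained sketch (not the book's own recommender). It assumes a low-rank factorization model with made-up dimensions and random factors, and compares the memory footprint of the raw ratings matrix with that of the factors, which is all a query needs to touch:

```julia
# Minimal sketch: compare the footprint of a full ratings matrix with that of
# a rank-k factor model, and show that a prediction is just a dot product.
# Sizes and factors are made up purely for illustration.
using LinearAlgebra, Random

Random.seed!(42)
n_users, n_items, k = 2_000, 1_000, 20

ratings = rand(1.0:5.0, n_users, n_items)   # stand-in for the full dataset
U = rand(n_users, k)                        # user factors (the "model")
V = rand(n_items, k)                        # item factors (the "model")

println("dataset: ", Base.summarysize(ratings) ÷ 1024, " KiB")
println("model:   ", (Base.summarysize(U) + Base.summarysize(V)) ÷ 1024, " KiB")

# Querying the model: one predicted rating is a k-element dot product.
predict(user, item) = dot(U[user, :], V[item, :])
@show predict(1, 1)
```

In this toy setup the factors add up to a few hundred kilobytes against roughly 15 MiB for the full matrix, and the gap only widens as the dataset grows.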

These advantages stem from the fact that the models are generally prepared offline, allowing for almost instantaneous recommendations. But since there's no such thing as free performance, this approach also comes with a few significant drawbacks. On one hand, it is less flexible: building the models takes considerable time and resources, which makes updates difficult and costly. On the other hand, because it does not use the whole dataset, the predictions can be less accurate.
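The offline/online split can be sketched in the same spirit. The snippet below is only an illustration under the assumption that the model is a truncated SVD of the ratings matrix; `FactorModel`, `train`, `predict`, and `recommend` are hypothetical names, not the API of any particular Julia package:

```julia
# Sketch of the offline/online split: training is expensive and runs rarely,
# while serving recommendations only touches the small precomputed factors.
using LinearAlgebra

struct FactorModel
    U::Matrix{Float64}   # user factors, learned offline
    V::Matrix{Float64}   # item factors, learned offline
end

# Offline step: costly, run periodically (for example, nightly) on the full
# dataset. Here a truncated SVD stands in for the actual training procedure.
function train(ratings::Matrix{Float64}, k::Int)
    F = svd(ratings)
    S = Diagonal(sqrt.(F.S[1:k]))
    FactorModel(F.U[:, 1:k] * S, F.V[:, 1:k] * S)
end

# Online steps: near-instant, and they never look at the original dataset.
predict(m::FactorModel, user::Int, item::Int) = dot(m.U[user, :], m.V[item, :])

function recommend(m::FactorModel, user::Int; top_n::Int = 5)
    scores = m.V * m.U[user, :]            # predicted ratings for every item
    partialsortperm(scores, 1:top_n; rev = true)
end

# Incorporating new users or new ratings means re-running `train`, which is
# exactly the flexibility cost described above.
```

With this split, `m = train(ratings, 20)` would run once in the background, and `recommend(m, 42)` would then return the indices of the top five items for user 42 without touching `ratings` again.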

As with everything, there's no silver bullet...