Julia 1.0 Programming Cookbook

By: Bogumił Kamiński, Przemysław Szufel

Overview of this book

Julia, with its dynamic nature and high performance, lets you develop computational models quickly while keeping the computational code easy to maintain. This book will be your solution-based guide as it takes you through different programming aspects of Julia. Starting with the new features of Julia 1.0, each recipe addresses a specific problem, providing a solution and explaining how it works. You will work with powerful Julia tools and data structures, along with the most popular Julia packages. You will learn to create vectors, handle variables, and work with functions. You will be introduced to various recipes for numerical computing, distributed computing, and achieving high performance. You will see how to optimize data science programs with parallel computing and memory allocation. We will look into more advanced concepts such as metaprogramming and functional programming. Finally, you will learn how to tackle issues while working with databases and data processing, and will cover data science problems, data modeling, data analysis, data manipulation, parallel processing, and cloud computing with Julia. By the end of the book, you will have acquired the skills to work more effectively with your data.

Writing a simple optimization routine


Performing optimization is a common data science task. In this recipe, we implement a simple optimization routine using the Levenberg-Marquardt algorithm. For more information, see http://mathworld.wolfram.com/Levenberg-MarquardtMethod.html or https://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm. Along the way, we will also discover how Julia handles linear algebra and numerical differentiation.

The basic idea of the procedure is the following. Given a twice differentiable function, $f$, and some point, $x_0$, we want to find another point, $x_1$, such that $f(x_1) < f(x_0)$. By repeating this process, we want to reach a local minimum of the function $f$.

The rule for finding $x_1$ is as follows:

$x_1 = x_0 - \left( H(f)(x_0) + \mu I \right)^{-1} \nabla f(x_0)$

where $H(f)(x_0)$ is the Hessian of $f$ at $x_0$, $\nabla f(x_0)$ is its gradient, $I$ is the identity matrix, and $\mu > 0$ is a damping parameter.

The idea is that we mix two standard algorithms: the Newton algorithm, where the step is $x_1 = x_0 - H(f)(x_0)^{-1} \nabla f(x_0)$, and the gradient descent algorithm, where the step is $x_1 = x_0 - \mu^{-1} \nabla f(x_0)$, where $\mu$ is a parameter. We accept $x_1$ if it leads to a decrease in the value of $f$.
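To make this rule concrete, here is a minimal sketch of a single step in Julia, assuming the ForwardDiff package for computing the gradient and the Hessian; the name lm_step, the damping value, and the test function below are illustrative choices, not the recipe's final code:

using LinearAlgebra, ForwardDiff

# One damped (Levenberg-Marquardt-style) step: mixes Newton and gradient descent.
# f  - twice differentiable function of a vector argument
# x0 - current point
# mu - damping parameter (small mu ~ Newton step, large mu ~ short gradient step)
function lm_step(f, x0, mu)
    g = ForwardDiff.gradient(f, x0)   # ∇f(x0)
    H = ForwardDiff.hessian(f, x0)    # H(f)(x0)
    x1 = x0 - (H + mu * I) \ g        # candidate point
    return f(x1) < f(x0) ? x1 : x0    # accept only if the value decreases
end

# Usage on a simple quadratic with minimum at [1.0, 2.0]:
f(x) = sum(abs2, x .- [1.0, 2.0])
x = [0.0, 0.0]
for _ in 1:20
    global x = lm_step(f, x, 0.1)
end

Note that (H + mu * I) \ g solves the linear system directly instead of forming a matrix inverse, which is the idiomatic and numerically safer way to apply the update rule in Julia.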

The additional rule is that if we have found...