Julia for Data Science

By: Anshul Joshi
Overview of this book

Julia is a fast, high-performance language that is well suited to data science, with a mature and continuously evolving package ecosystem. A well-known Harvard Business Review article called data scientist "the sexiest job of the 21st century" (https://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century). This book will familiarise you with Julia's rich ecosystem, helping you stay on top of your game. It covers the essentials of data science and gives a high-level overview of advanced statistics and techniques. You will dive in and generate insights by performing inferential statistics, and reveal hidden patterns and trends using data mining. With practical coverage of statistics and machine learning, you will develop the knowledge to build statistical models and machine learning systems in Julia with attractive visualizations. You will then delve into the world of deep learning in Julia and learn about Mocha.jl, a framework with which you can create artificial neural networks and implement deep learning. The book addresses the challenges of real-world data science problems, including data cleaning, data preparation, inferential statistics, statistical modeling, building high-performance machine learning systems, and creating effective visualizations using Julia.
Table of Contents (17 chapters)
Julia for Data Science
Credits
About the Author
About the Reviewer
www.PacktPub.com
Preface

Summary


Ensemble learning is a method for building highly accurate classifiers by combining weak or less accurate ones. In this chapter, we discussed several methods for constructing ensembles and went through three fundamental reasons why an ensemble can outperform any single classifier within it.
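The core idea of combining weak classifiers can be illustrated with a minimal sketch in plain Julia. The learners and data here are hypothetical stand-ins, not from the chapter: each "weak learner" is just a function that looks at part of the input, and the ensemble predicts by majority vote.

```julia
# Three hypothetical weak learners, each returning a label (0 or 1)
# based on only part of the input vector.
weak_learners = [
    x -> x[1] > 0.5 ? 1 : 0,          # looks only at feature 1
    x -> x[2] > 0.3 ? 1 : 0,          # looks only at feature 2
    x -> x[1] + x[2] > 0.8 ? 1 : 0,   # looks at their sum
]

# Majority vote: the ensemble predicts the label chosen by most learners.
function ensemble_predict(learners, x)
    votes = [f(x) for f in learners]
    return sum(votes) >= length(learners) / 2 ? 1 : 0
end

println(ensemble_predict(weak_learners, [0.6, 0.4]))  # all three vote 1
```

Each individual learner misclassifies many inputs, but as long as their errors are not perfectly correlated, the majority vote is right more often than any single one of them.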

We discussed bagging and boosting in detail. Bagging, also known as bootstrap aggregation, generates additional training data by sub-sampling the original dataset with replacement. We also learned why AdaBoost performs so well and examined random forests in detail. Random forests are accurate and efficient algorithms that are highly resistant to overfitting, and we studied how and why they are considered among the best ensemble models. Finally, we implemented a random forest model in Julia using the DecisionTree package.
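The bootstrap sampling that underlies bagging can be sketched in a few lines of Julia. The toy data below is illustrative only; the DecisionTree calls in the trailing comment follow that package's build_forest/apply_forest API as used in the chapter, and are an assumption about your installed version.

```julia
using Random

Random.seed!(1)
n = 100
features = rand(n, 4)                           # toy data: 100 samples, 4 features
labels   = [x > 0.5 ? 1 : 2 for x in features[:, 1]]

# Bagging: each base learner trains on a bootstrap sample, i.e. n rows
# drawn from the original n rows *with replacement*, so some rows repeat
# and others are left out.
idx = rand(1:n, n)
boot_features = features[idx, :]
boot_labels   = labels[idx]

# With the DecisionTree package (assumed API):
#   using DecisionTree
#   # build_forest(labels, features, n_subfeatures, n_trees,
#   #              partial_sampling, max_depth)
#   model       = build_forest(labels, features, 2, 10, 0.7, 5)
#   predictions = apply_forest(model, features)
```

A random forest adds one more source of randomness on top of bagging: at each split, each tree considers only a random subset of the features (n_subfeatures above), which decorrelates the trees and improves the ensemble.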