Apache Spark 2.x Machine Learning Cookbook

By: Mohammed Guller, Siamak Amirghodsi, Shuen Mei, Meenakshi Rajendran, Broderick Hall

Overview of this book

Machine learning aims to extract knowledge from data, relying on fundamental concepts in computer science, statistics, probability, and optimization. Learning about algorithms enables a wide range of applications, from everyday tasks such as product recommendations and spam filtering to cutting-edge applications such as self-driving cars and personalized medicine. You will gain hands-on experience of applying these principles using Apache Spark, a resilient cluster computing system well suited for large-scale machine learning tasks. This book begins with a quick overview of setting up the necessary IDEs to facilitate the execution of code examples that will be covered in various chapters. It also highlights some key issues developers face while working with machine learning algorithms on the Spark platform. We then progress through the various Spark APIs and the implementation of ML algorithms for classification systems, recommendation engines, text analytics, clustering, and learning systems. Toward the final chapters, we focus on building high-end applications and explain various unsupervised methodologies and the challenges to tackle when implementing big data ML systems.

Downloading a complete dump of Wikipedia for a real-life Spark ML project


In this recipe, we will download and explore a dump of Wikipedia so that we have a real-life example to work with. The dataset we will download in this recipe is a dump of Wikipedia articles. You will need the curl command-line tool or a browser to retrieve a compressed file, which is about 13.6 GB at the time of writing. Due to the size, we recommend the curl command-line tool.

How to do it...

  1. You can start by downloading the dataset using the following command:
curl -L -O http://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles-multistream.xml.bz2
  2. Now decompress the bzip2 archive:
bunzip2 enwiki-latest-pages-articles-multistream.xml.bz2

This should create an uncompressed file named enwiki-latest-pages-articles-multistream.xml, which is about 56 GB in size (a short Spark loading sketch follows these steps).

  3. Let us take a look at the Wikipedia XML file:
head -n50 enwiki-latest-pages-articles-multistream.xml
<mediawiki xmlns=http://www.mediawiki.org/xml/export-0...
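The uncompressed dump is plain XML, so once it sits on storage your cluster can reach, it can be scanned like any other text source. The following is a minimal Scala sketch (not part of the recipe's original steps), assuming Spark 2.x and that the file path below is visible to the executors; the local[*] master, the application name, and the title-extraction regex are illustrative placeholders rather than the parsing approach the project ultimately uses.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("WikipediaDumpPreview")
  .master("local[*]")   // illustrative local run; omit on a real cluster
  .getOrCreate()
import spark.implicits._

// Read the uncompressed dump as plain text, one XML line per record.
val lines = spark.read.textFile("enwiki-latest-pages-articles-multistream.xml")

// Pull out <title> elements as a quick sanity check that the download is intact.
val titlePattern = "<title>(.*)</title>".r
val titles = lines.flatMap(line =>
  titlePattern.findFirstMatchIn(line).map(_.group(1)).toSeq)

titles.show(10, truncate = false)
println(s"Pages found: ${titles.count()}")

Spark can also read the compressed .bz2 file directly, since bzip2 is a splittable codec in Hadoop, but scanning the uncompressed file is faster for repeated exploration.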