Apache Spark for Data Science Cookbook

By: Padma Priya Chitturi

Overview of this book

Spark has emerged as the most promising big data analytics engine for data science professionals. The true power and value of Apache Spark lie in its ability to execute data science tasks with speed and accuracy. Spark's selling point is that it combines ETL, batch analytics, real-time stream analysis, machine learning, graph processing, and visualization. It lets you tackle the complexities that come with raw unstructured datasets with ease. This guide will get you comfortable and confident performing data science tasks with Spark. You will learn about implementations including distributed deep learning, numerical computing, and scalable machine learning. You will be shown effective solutions to problematic concepts in data science using Spark's MLlib together with complementary Python libraries such as pandas, NumPy, and SciPy. These simple and efficient recipes will show you how to implement algorithms and optimize your work.
Table of Contents (17 chapters)
Apache Spark for Data Science Cookbook
Credits
About the Author
About the Reviewer
www.PacktPub.com
Customer Feedback
Preface

Outlier detection


Outliers are infrequent observations: data points that do not appear to follow the characteristic distribution of the rest of the data. They lie far from, and diverge from, the overall pattern of the data. They might occur due to measurement errors or other anomalies, which result in incorrect estimations. Outliers can be univariate or multivariate. Univariate outliers can be detected by looking at the distribution of a single variable, whereas multivariate outliers exist in an n-dimensional space and can be found by examining the distributions across multiple dimensions.
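To make the univariate case concrete, here is a minimal Scala sketch that flags points whose z-score exceeds a threshold. This is not the recipe's own code: the object name OutlierSketch, the synthetic data, and the threshold of 3.0 are illustrative assumptions.

import org.apache.spark.{SparkConf, SparkContext}

object OutlierSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("OutlierSketch").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Synthetic sample: twenty points near 10.0 plus one obvious outlier.
    val values = Seq.fill(20)(10.0 + scala.util.Random.nextGaussian()) :+ 1000.0
    val data = sc.parallelize(values)

    // stats() computes count, mean, and standard deviation in a single pass.
    val stats = data.stats()
    val (mean, stdev) = (stats.mean, stats.stdev)

    // Flag points whose absolute z-score exceeds the (assumed) threshold of 3.0.
    val outliers = data.filter(x => math.abs((x - mean) / stdev) > 3.0)
    outliers.collect().foreach(println)

    sc.stop()
  }
}

Note that on small samples an extreme point inflates the standard deviation itself, so robust variants based on the median and median absolute deviation are often preferred in practice.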

Getting ready

To step through this recipe, you will need a running Spark cluster in any one of the modes: local, standalone, YARN, or Mesos. For installing Spark as a standalone cluster, please refer to http://spark.apache.org/docs/latest/spark-standalone.html. Also, include the Spark MLlib package in the build.sbt file so that it downloads the related libraries and the API can be used...
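As an illustration, a build.sbt along these lines pulls in Spark core and MLlib. The project name and the version numbers shown here (Scala 2.11.8, Spark 2.0.0) are assumptions; match them to your installed cluster.

name := "outlier-detection-recipe"

version := "1.0"

scalaVersion := "2.11.8"   // assumption: match the Scala version of your Spark build

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % "2.0.0",  // assumption: use your cluster's Spark version
  "org.apache.spark" %% "spark-mllib" % "2.0.0"
)

With this in place, sbt resolves the MLlib artifacts so its API is available to the recipe's code.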