Apache Spark for Data Science Cookbook

By: Padma Priya Chitturi
Overview of this book

Spark has emerged as the most promising big data analytics engine for data science professionals. The true power and value of Apache Spark lies in its ability to execute data science tasks with speed and accuracy. Spark's selling point is that it combines ETL, batch analytics, real-time stream analysis, machine learning, graph processing, and visualization, letting you tackle the complexities of raw, unstructured data sets with ease. This guide will make you comfortable and confident performing data science tasks with Spark. You will learn about implementations including distributed deep learning, numerical computing, and scalable machine learning. You will be shown effective solutions to problematic concepts in data science using Spark MLlib alongside Python libraries such as pandas, NumPy, and SciPy. These simple and efficient recipes show you how to implement algorithms and optimize your work.

Working with Spark's Python and Scala shells


This recipe introduces spark-shell and PySpark, the command-line interface tools that ship with the Apache Spark project. spark-shell is the Scala-based command-line interface and PySpark is the Python-based one; both are used to develop Spark applications interactively. Each shell starts with a SparkContext, SQLContext, and HiveContext already initialized.
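Because these contexts are ready to use, you can go beyond plain RDDs as soon as the shell starts. The following Scala sketch (typed into spark-shell; the file path and the nine-column layout of stocks.txt are assumptions based on the data used later in this recipe) uses the pre-built sqlContext to run SQL over the data:

    // Assumed layout: exchange, symbol, date, open, high, low, close, volume, adjClose
    val rows = sc.textFile("hdfs://namenode:9000/stocks.txt")
      .map(_.split("\\s+"))
      .map(f => (f(0), f(1), f(2), f(4).toDouble))
    // sqlContext is already created by the shell, so no extra setup is needed
    val df = sqlContext.createDataFrame(rows).toDF("exchange", "symbol", "date", "high")
    df.registerTempTable("stocks")
    sqlContext.sql("SELECT symbol, MAX(high) AS maxHigh FROM stocks GROUP BY symbol").show()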

How to do it…

Both spark-shell and PySpark are available in the bin directory of SPARK_HOME, that is, SPARK_HOME/bin:

  1. Invoke spark-shell as follows:

        $SPARK_HOME/bin/spark-shell [Options] 
     
        $SPARK_HOME/bin/spark-shell --master <master type>, where the master 
        type is local, spark, yarn, or mesos, for example: 
        $SPARK_HOME/bin/spark-shell --master spark://<sparkmasterHostName>:7077 
     
        Welcome to 
             ____              __ 
            / __/__  ___ _____/ /__ 
           _\ \/ _ \/ _ `/ __/  '_/ 
          /___/ .__/\_,_/_/ /_/\_\   version 1.6.0 
             /_/ 
     
        Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java  
        1.7.0_79) 
        Type in expressions to have them evaluated. 
        Type :help for more information. 
        16/01/17 20:05:38 WARN Utils: Your hostname, localhost resolves to
        a loopback address: 127.0.0.1; using 192.168.1.6 instead (on 
        interface en0)
        SQL context available as sqlContext. 
     
        scala> val data = sc.textFile("hdfs://namenode:9000/stocks.txt") 
        data: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[1] at
        textFile at <console>:27 
     
        scala> data.count() 
        res0: Long = 57391  
     
        scala> data.first() 
        res1: String = NYSE  CLI   2009-12-31  35.39 35.70 34.50 34.57
                       890100  34.12 
     
        scala> data.top(2) 
        res5: Array[String] = Array(NYSE  CZZ  2009-12-31  8.77  8.77  8.67
             8.70  694200  8.70, NYSE  CZZ  2009-12-30  8.71  8.80
             8.46  8.68  1588200  8.68) 
     
        scala> val mydata = data.map(line => line.toLowerCase()) 
        mydata: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[3] at
        map at <console>:29 
     
        scala> mydata.collect() 
        res6: Array[String] = Array(nyse cli 2009-12-31 35.39 35.70
        34.50 34.57 890100 34.12, nyse cli 2009-12-30 35.22 35.46
        34.96 35.40 516900 34.94, nyse cli 2009-12-29 35.69 35.95 
        35.21 35.34 556500 34.88, nyse cli 2009-12-28 35.67 36.23 
        35.49 35.69 565000 35.23, nyse cli 2009-12-24 35.38 35.60 
        35.19 35.47 230200 35.01, nyse cli 2009-12-23 35.13 35.51 
        35.07 35.21 520200 34.75, nyse cli 2009-12-22 34.76 35.04 
        34.71 35.04 564600 34.58, nyse cli 2009-12-21 34.65 34.74
        34.41 34.73 428400 34.28, nyse cli 2009-12-18 34.11 34.38 
        33.73 34.22 1152600 33.77, nyse cli 2009-12-17 34.18 34.53 
        33.84 34.21 1082600 33.76, nyse cli 2009-12-16 34.79 35.10 
        34.48 34.66 1007900 34.21, nyse cli 2009-12-15 34.60 34.91 
        34.39 34.84 813200 34.39, nyse cli 2009-12-14 34.21 34.90 
        33.86 34.82 987700 34.37, nyse cli 200...)
    
  2. Invoke PySpark as follows:

        $SPARK_HOME/bin/pyspark [options] 
        $SPARK_HOME/bin/pyspark --master <master type>, where the master 
        type is local, spark, yarn, or mesos, for example: 
        $SPARK_HOME/bin/pyspark --master spark://<sparkmasterHostName>:7077 
     
        Python 2.7.6 (default, Sep  9 2014, 15:04:36)  
        [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin 
        Type "help", "copyright", "credits" or "license" for more
        information. 
        Using Spark's default log4j profile: org/apache/spark/log4j-
        defaults.properties 
        16/01/17 20:25:48 INFO SparkContext: Running Spark version 1.6.0 
        ... 
     
        Welcome to 
             ____              __ 
            / __/__  ___ _____/ /__ 
           _\ \/ _ \/ _ `/ __/  '_/ 
          /___/ .__/\_,_/_/ /_/\_\   version 1.6.0 
             /_/
        Using Python version 2.7.6 (default, Sep  9 2014 15:04:36) 
        SparkContext available as sc, HiveContext available as sqlContext. 
     
        >>> data = sc.textFile("hdfs://namenode:9000/stocks.txt") 
     
        >>> data.count() 
        57391  
        >>> data.first() 
        NYSE   CLI   2009-12-31  35.39 35.70 34.50 34.57 890100 34.12 
        >>> data.top(2) 
        ['NYSE   CZZ   2009-12-31  8.77  8.77  8.67  8.70  694200   8.70',
         'NYSE   CZZ   2009-12-30  8.71  8.80  8.46  8.68  1588200  8.68' ] 
     
        >>> data.collect()
        ['NYSE CLI 2009-12-31 35.39 35.70 34.50 34.57 890100 34.12', 
         'NYSE CLI 2009-12-30 35.22 35.46 34.96 35.40 516900 34.94', 
         'NYSE CLI 2009-12-29 35.69 35.95 35.21 35.34 556500 34.88', 
         'NYSE CLI 2009-12-28 35.67 36.23 35.49 35.69 565000 35.23', 
         'NYSE CLI 2009-12-24 35.38 35.60 35.19 35.47 230200 35.01', 
         'NYSE CLI 2009-12-23 35.13 35.51 35.07 35.21 520200 34.75', 
         'NYSE CLI 2009-12-22 34.76 35.04 34.71 35.04 564600 34.58', 
         'NYSE CLI 2009-12-21 34.65 34.74 34.41 34.73 428400 34.28', 
         'NYSE CLI 2009-12-18 34.11 34.38 33.73 34.22 1152600 33.77', 
         'NYSE CLI 2009-12-17 34.18 34.53 33.84 34.21 1082600 33.76', 
         'NYSE CLI 2009-12-16 34.79 35.10 34.48 34.66 1007900 34.21', 
         'NYSE CLI 2009-12-15 34.60 34.91 34.39 34.84 813200 34.39', 
         'NYSE CLI 2009-12-14 34.21 34.90 33.86 34.82 987700 34.37', 
         'NYSE CLI 200...
    

How it works…

In the preceding code snippets, Spark RDD transformations and actions are executed interactively in both spark-shell and PySpark. Both shells work in a read-eval-print loop (REPL) style, much like a Windows console or a Unix/Linux shell: you type a command, the system evaluates it, and the result is printed back immediately.
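For example, a transformation such as filter only records the operation in the RDD's lineage; nothing is computed until an action such as count triggers evaluation. A minimal Scala sketch, reusing the data RDD loaded in step 1:

    // Transformation: lazily builds a new RDD, no Spark job runs yet
    val nyseOnly = data.filter(line => line.startsWith("NYSE"))
    // Action: forces evaluation of the whole lineage and returns a value
    nyseOnly.count()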

There's more…

Both spark-shell and PySpark are convenient command-line interfaces for developing Spark applications interactively. They offer advanced features for application prototyping and faster development, and they accept numerous options for customization.
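For instance, both shells accept the standard spark-submit options, so you can size the driver and executors or override configuration properties at launch; the values below are only illustrative:

    $SPARK_HOME/bin/spark-shell --master yarn --driver-memory 4g --executor-memory 2g
    $SPARK_HOME/bin/pyspark --master local[4] --conf spark.ui.port=4041
    $SPARK_HOME/bin/spark-shell --help   # prints the full list of supported options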

See also

The Apache Spark documentation offers plenty of examples using these two command-line interfaces; please refer to this documentation page: http://spark.apache.org/docs/latest/quick-start.html#interactive-analysis-with-the-spark-shell.