
Using Spark to analyze data


The first step in accessing Spark is to create a SparkContext. The SparkContext initializes the Spark runtime and, if you are using Hadoop as well, sets up any access to it that may be needed.

Older examples often started from a SQLContext, but that entry point has since been deprecated in favor of SparkSession, a more general object that wraps the SparkContext. For the RDD-style work in this example, a SparkContext is all we need.
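For reference, here is a minimal sketch of creating the modern SparkSession entry point and retrieving the underlying SparkContext from it; the application name is an arbitrary placeholder:

from pyspark.sql import SparkSession

# Build or reuse a SparkSession; "line-lengths" is an arbitrary app name
spark = SparkSession.builder.appName("line-lengths").getOrCreate()

# The underlying SparkContext is still available for RDD-style work
sc = spark.sparkContext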

As a simple example, we can read through a text file and total the lengths of its lines:

from pyspark import SparkContext

# Obtain (or reuse) a SparkContext
sc = SparkContext.getOrCreate()

# Read the notebook file itself as a text file, one record per line
lines = sc.textFile("B05238_04 Spark Total Line Lengths.ipynb")

# Map each line to its length, then reduce the lengths to a single total
lineLengths = lines.map(lambda s: len(s))
totalLength = lineLengths.reduce(lambda a, b: a + b)
print(totalLength)

In this example:

  • We obtain a SparkContext
  • With the context, we read in a file (the Jupyter notebook file for this example); textFile returns an RDD with one record per line
  • We use Spark's map transformation to compute the length of each line
  • We use Spark's reduce action to add those lengths into a single total (a plain-Python cross-check of this map/reduce pair follows this list)
  • We display our result
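To make the map/reduce pair concrete, here is a sketch of the same computation in plain Python, which you could run locally to sanity-check the Spark result; it assumes the same notebook file is in the working directory:

# Plain-Python cross-check of the Spark job above (no cluster required).
# sc.textFile() yields lines without their trailing newlines, so strip
# them here too before measuring lengths.
with open("B05238_04 Spark Total Line Lengths.ipynb") as f:
    total = sum(len(line.rstrip("\n")) for line in f)
print(total)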

Under Jupyter this looks like...