Learning Jupyter

By: Dan Toomey

Overview of this book

Jupyter Notebook is a web-based environment that enables interactive computing in notebook documents. It allows you to create and share documents that contain live code, equations, visualizations, and explanatory text. The Jupyter Notebook system is extensively used in domains such as data cleaning and transformation, numerical simulation, statistical modeling, machine learning, and much more. This book starts with a detailed overview of the Jupyter Notebook system and its installation in different environments. Next, you will learn to integrate the Jupyter system with different programming languages such as R, Python, JavaScript, and Julia, and explore the various versions and packages that are compatible with the Notebook system. Moving ahead, you will master interactive widgets, namespaces, and working with Jupyter in multi-user mode. Towards the end, you will use Jupyter with a big dataset and apply all the functionalities learned throughout the book.

Spark word count


Now that we have seen some of the functionality, let's explore further. We can use a similar script to count word occurrences in a file, as follows:

import pyspark

# Create a SparkContext only if one does not already exist in this notebook
if 'sc' not in globals():
    sc = pyspark.SparkContext()

# Load the text file as an RDD of lines
text_file = sc.textFile("Spark File Words.ipynb")

# Split each line into words, emit a (word, 1) pair per occurrence,
# and sum the counts for each distinct word
counts = text_file.flatMap(lambda line: line.split(" ")) \
             .map(lambda word: (word, 1)) \
             .reduceByKey(lambda a, b: a + b)

# Bring the results back to the driver and print each (word, count) pair
for x in counts.collect():
    print(x)

We have the same preamble as in the earlier script: we import pyspark and create a SparkContext if one does not already exist. Then we load the text file as an RDD of lines with textFile().
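
As a quick sanity check (not part of the original script), you can peek at the first few lines before building the counting pipeline. This is a minimal sketch that assumes the same text_file RDD created above:

# Peek at the first three lines of the RDD; take() triggers a small read of the file
for line in text_file.take(3):
    print(line)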

Once the file is loaded, we split each line into words and use a lambda function to tick off each occurrence of a word. The code really does create a new record for every word occurrence: the first time a word appears in the stream, a record with a count of 1 is added for it, and every further appearance of that word adds another record with the same count of 1. The idea is that this process could be split over multiple processors, where each...
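
To make the intermediate records concrete, here is a minimal sketch that runs the same transformations over a small in-memory list instead of a file. It assumes the same SparkContext sc from above; the sample sentences are made up purely for illustration:

# Build a tiny RDD of lines so the intermediate stages are easy to inspect
lines = sc.parallelize(["to be or not to be", "to see or not to see"])

# flatMap splits every line into words; map turns each word into a (word, 1) record
pairs = lines.flatMap(lambda line: line.split(" ")) \
             .map(lambda word: (word, 1))
print(pairs.collect())
# e.g. [('to', 1), ('be', 1), ('or', 1), ('not', 1), ('to', 1), ('be', 1), ...]

# reduceByKey adds up the 1s for each distinct word
sample_counts = pairs.reduceByKey(lambda a, b: a + b)
print(sample_counts.collect())
# e.g. [('to', 4), ('be', 2), ('or', 2), ('not', 2), ('see', 2)]

As with the file version, collect() brings the full result set back to the driver, so it is only appropriate when the number of distinct words is small enough to fit in memory.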