Jupyter Cookbook

By: Dan Toomey
Overview of this book

Jupyter has garnered a strong interest in the data science community of late, as it makes common data processing and analysis tasks much simpler. This book is for data science professionals who want to master various tasks related to Jupyter to create efficient, easy-to-share, scientific applications. The book starts with recipes on installing and running the Jupyter Notebook system on various platforms and configuring the various packages that can be used with it. You will then see how you can implement different programming languages and frameworks, such as Python, R, Julia, JavaScript, Scala, and Spark on your Jupyter Notebook. This book contains intuitive recipes on building interactive widgets to manipulate and visualize data in real time, sharing your code, creating a multi-user environment, and organizing your notebook. You will then get hands-on experience with Jupyter Labs, microservices, and deploying them on the web. By the end of this book, you will have taken your knowledge of Jupyter to the next level to perform all key tasks associated with it.
Table of Contents (17 chapters)

Obtaining a sorted word count from a big-text source


Now that we have a word count, the more interesting use is to sort the words by number of occurrences to determine which are used most.

How to do it...

We can slightly modify the previous script to produce a sorted list, as follows:

import pyspark

# Create a SparkContext if one is not already running
if 'sc' not in globals():
    sc = pyspark.SparkContext()

text_file = sc.textFile("B09656_09_word_count.ipynb")
sorted_counts = text_file.flatMap(lambda line: line.split(" ")) \
    .map(lambda word: (word, 1)) \
    .reduceByKey(lambda a, b: a + b) \
    .map(lambda pair: (pair[1], pair[0])) \
    .sortByKey(ascending=False)

for x in sorted_counts.collect():
    print(x)

This produces output similar to the following:

The list continues for every word found. Notice that the counts appear in descending order, and that words with the same count are grouped together. Note that Spark's notion of a word break is crude: because we split only on single spaces, punctuation and markup characters remain attached to the words.
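The poor word breaks come from splitting each line on a single space. One way to improve this, sketched below as a plain-Python helper (the regular expression and the idea of lowercasing are assumptions, not part of the recipe), is a tokenizer that keeps only word characters; it could be dropped into the flatMap step in place of the split:

```python
import re

def tokenize(line):
    """Split a line into lowercase words, dropping punctuation and
    other markup that a plain split(" ") would leave attached."""
    return re.findall(r"[a-z_']+", line.lower())

# In the Spark pipeline this would replace the split, e.g.:
#   text_file.flatMap(tokenize)
print(tokenize('{"source": ["import pyspark"]}'))
```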

How it works...

The code is exactly the same as in the previous example, except for the final steps: we swap each (word, count) pair so that the count becomes the key, and then call .sortByKey(ascending=False). Our key, at that point, is the word count column (as that is what we...
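As a rough illustration of what the pipeline computes, here is a minimal pure-Python sketch of the same map, reduce-by-key, swap, and sort-by-count steps; the sample lines are hypothetical, not the notebook file used in the recipe:

```python
from collections import defaultdict

lines = ["to be or not to be", "to be is to do"]  # hypothetical sample input

# map step: emit a 1 for every word; reduce step: sum the 1s per word
counts = defaultdict(int)
for line in lines:
    for word in line.split(" "):
        counts[word] += 1

# swap each (word, count) to (count, word) and sort descending on the
# count key, mirroring .map(...) followed by .sortByKey(ascending=False)
sorted_counts = sorted(((c, w) for w, c in counts.items()), reverse=True)
print(sorted_counts)
```

The swap matters: sorting the (word, count) pairs directly would order the list alphabetically by word, not by frequency.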