Learning Jupyter 5 - Second Edition

Overview of this book

The Jupyter Notebook allows you to create and share documents that contain live code, equations, visualizations, and explanatory text. The Jupyter Notebook system is extensively used in domains such as data cleaning and transformation, numerical simulation, statistical modeling, and machine learning. Learning Jupyter 5 will help you get to grips with interactive computing using real-world examples. The book starts with a detailed overview of the Jupyter Notebook system and its installation in different environments. Next, you will learn to integrate the Jupyter system with different programming languages such as R, Python, Java, JavaScript, and Julia, and explore various versions and packages that are compatible with the Notebook system. Moving ahead, you will master interactive widgets and namespaces and work with Jupyter in a multi-user mode. By the end of this book, you will have used Jupyter with a big dataset and be able to apply all the functionalities you’ve explored throughout the book. You will also have learned all about the Jupyter Notebook and be able to start performing data transformation, numerical simulation, and data visualization.

Estimate pi


We can use map and reduce to estimate pi with code like this:

import pyspark
import random

# Create a SparkContext if one does not already exist
if 'sc' not in globals():
    sc = pyspark.SparkContext()

NUM_SAMPLES = 10000
random.seed(113)

def sample(p):
    # Draw a random point in the unit square; return 1 if it
    # falls inside the quarter circle of radius 1, 0 otherwise
    x, y = random.random(), random.random()
    return 1 if x*x + y*y < 1 else 0

# Distribute the sample indices, map each to 0/1, and sum the results
count = sc.parallelize(range(0, NUM_SAMPLES)) \
    .map(sample) \
    .reduce(lambda a, b: a + b)

print("Pi is roughly %f" % (4.0 * count / NUM_SAMPLES))

This code has the same preamble as the previous examples. We use Python's random package and define a constant, NUM_SAMPLES, for the number of samples to draw.

We call the parallelize function to build an RDD from the range of sample indices, splitting the work across the available nodes. The map step applies the sample function to each element, and the reduce step sums the resulting ones and zeros into count.
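For readers without a Spark cluster handy, the same map/reduce logic can be sketched in plain Python using the built-in map and functools.reduce (this is an illustrative stand-in, not part of the book's listing; the numeric result will differ from a Spark run because the random draws are not distributed across workers):

```python
import random
from functools import reduce

NUM_SAMPLES = 10000
random.seed(113)

def sample(p):
    # Draw a random point in the unit square; return 1 if it
    # falls inside the quarter circle of radius 1, 0 otherwise
    x, y = random.random(), random.random()
    return 1 if x * x + y * y < 1 else 0

# Map each sample index to 0/1, then reduce by summing
count = reduce(lambda a, b: a + b, map(sample, range(NUM_SAMPLES)))
pi_estimate = 4.0 * count / NUM_SAMPLES
print("Pi is roughly %f" % pi_estimate)
```

The fraction of points landing inside the quarter circle approximates its area, pi/4, which is why the count is multiplied by 4.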

The sample function draws two random numbers, x and y, and returns one if the point (x, y) falls inside the unit circle and zero otherwise. We are looking for random...