
Learning Jupyter

By: Dan Toomey

Overview of this book

Jupyter Notebook is a web-based environment that enables interactive computing in notebook documents. It allows you to create and share documents that contain live code, equations, visualizations, and explanatory text. The Jupyter Notebook system is used extensively in domains such as data cleaning and transformation, numerical simulation, statistical modeling, and machine learning. This book starts with a detailed overview of the Jupyter Notebook system and its installation in different environments. Next, you will learn to integrate the Jupyter system with different programming languages such as R, Python, JavaScript, and Julia, and explore the various versions and packages that are compatible with the Notebook system. Moving ahead, you will master interactive widgets and namespaces and work with Jupyter in multiuser mode. Towards the end, you will use Jupyter with a big data set and apply all the functionalities learned throughout the book.

Apache Spark

One of the tools we will be using is Apache Spark. Spark is an open source toolset for cluster computing. While we will not be using a cluster, the typical usage for Spark is a larger set of machines, or cluster, operating in parallel to analyze a big data set. An installation guide is available on the Apache Spark website. In particular, you will need to add two settings to your bash profile: SPARK_HOME and PYSPARK_SUBMIT_ARGS. SPARK_HOME is the directory where the software is installed. PYSPARK_SUBMIT_ARGS sets the number of cores to use in the local cluster.
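As a sketch, the two bash profile entries might look like the following; the install path and core count are illustrative and depend on your own installation:

```shell
# Illustrative values -- adjust the path and core count to your installation.
export SPARK_HOME="$HOME/Applications/spark-2.0.0-bin-hadoop2.7"
# local[4] runs Spark locally, using four worker cores.
export PYSPARK_SUBMIT_ARGS="--master local[4] pyspark-shell"
```

After adding these lines, open a new terminal window (or `source` your profile) so the settings take effect.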

Mac installation

To install, we download the latest TGZ file from the Spark download page, unpack the TGZ file, and move the unpacked directory to our Applications folder.
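The unpack-and-move steps can be sketched as follows. The archive name is illustrative (use the file you actually downloaded), and a stand-in archive is created here only so the commands can be tried end to end:

```shell
# Stand-in for the downloaded archive -- replace with your real download.
mkdir -p spark-2.0.0-bin-hadoop2.7
tar -czf spark-2.0.0-bin-hadoop2.7.tgz spark-2.0.0-bin-hadoop2.7

mkdir -p "$HOME/Applications"
tar -xzf spark-2.0.0-bin-hadoop2.7.tgz               # unpack the TGZ file
mv spark-2.0.0-bin-hadoop2.7 "$HOME/Applications/"   # move it to Applications
```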

Spark relies on Scala's availability. We installed Scala in Chapter 7, Sharing and Converting Jupyter Notebooks.

Open a command-line window to the Spark directory and run this...