Python data access in Jupyter


Now that we have seen how Python works in Jupyter, including the underlying encoding, how does Python handle a larger dataset in Jupyter?

I started another notebook for pandas, naming it Python data access. From here, we will read in a large dataset and compute some standard statistics on the data. We are interested in seeing how we use pandas in Jupyter, how well the script performs, and what information is stored in the notebook's metadata (especially if the dataset is large).
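
Since script performance is one of the things we want to watch, a quick way to measure it in Jupyter (a minimal sketch, not part of the book's script) is to put the %%time cell magic at the top of the cell:

%%time
# the cell magic above prints CPU and wall-clock time for the whole cell,
# giving a rough sense of how the script scales to larger datasets
from sklearn import datasets
iris_dataset = datasets.load_iris()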

Our script accesses the iris dataset that is built into the scikit-learn package. All we are looking to do is read in a reasonably large number of items and run some basic calculations on the dataset. We are really interested to see how much of the data is cached in the .ipynb file.
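
A saved notebook is just a JSON document, so one way to see how much of the data ends up cached is to total up the serialized size of the cell outputs. The following is a rough sketch, assuming a hypothetical notebook filename:

import json
import os

# hypothetical filename; point this at the notebook you want to inspect
notebook_path = "Python data access.ipynb"

with open(notebook_path, encoding="utf-8") as f:
    nb = json.load(f)

# sum the serialized size of every code cell's outputs, which is the part
# of the file that caches the data produced by the script
total_output_bytes = sum(
    len(json.dumps(cell.get("outputs", [])))
    for cell in nb.get("cells", [])
    if cell.get("cell_type") == "code"
)

print("File size on disk:", os.path.getsize(notebook_path), "bytes")
print("Bytes of cached cell output:", total_output_bytes)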

The Python code is as follows: 

# import the datasets package
from sklearn import datasets

# pull in the iris data
iris_dataset = datasets.load_iris()

# grab the first two columns of data
X = iris_dataset.data[:, :2]
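
The statistics step itself is not shown above, but a minimal sketch of what it could look like, wrapping the iris data in a pandas DataFrame and printing standard summary statistics (the DataFrame name and the use of describe() are assumptions here, not code from the book), is:

import pandas as pd
from sklearn import datasets

# load the iris data and wrap it in a DataFrame with named columns
iris_dataset = datasets.load_iris()
iris_df = pd.DataFrame(iris_dataset.data, columns=iris_dataset.feature_names)

# describe() reports count, mean, standard deviation, min, quartiles, and max
# for each column, which covers the standard statistics we are after
print(iris_df.describe())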