
Examining big-text log file access


MonitorWare is a network monitoring solution for Windows machines. It provides sample log files that show access to different systems. I downloaded the HTTP log file sample set from http://www.monitorware.com/en/logsamples/apache.php. The log file contains entries for the different HTTP requests made to a server.
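
Apache access logs typically record the client address, timestamp, request line, status code, and response size on each line. As an illustration (not taken from this particular sample file), the canonical example entry from the Apache documentation, in Common Log Format, looks like this:

127.0.0.1 - - [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326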

The URL downloads the apache-samples.rar file. A .rar file is a compressed archive format typically used for very large files that would be unwieldy as an ordinary .zip file; this example, however, is only 20 KB. You need to extract the log file from the .rar archive before it can be read by the following code.
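
Any archive manager can perform the extraction. As a minimal sketch, assuming the third-party rarfile Python package is installed (it relies on an unrar backend being available on the system), the extraction can also be done from a notebook cell:

import rarfile

# Extract the contents of the downloaded archive (including the access_log file)
# into the current working directory
with rarfile.RarFile("apache-samples.rar") as archive:
    archive.extractall()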

How to do it...

We can use a similar script that loads the file, and then apply additional functions to pull out the records of interest. The code is:

import pyspark

# Reuse an existing SparkContext if the notebook has already created one
if 'sc' not in globals():
    sc = pyspark.SparkContext()

# Load the extracted log file as an RDD of text lines
textFile = sc.textFile("access_log")
print(textFile.count(), "access records")

# Keep only the lines that record GET requests
gets = textFile.filter(lambda line: "GET" in line)
print(gets.count(), "GETs")

posts...