Big Data Analytics with Hadoop 3

By: Sridhar Alla

Overview of this book

Apache Hadoop is the most popular platform for big data processing, and can be combined with a host of other big data tools to build powerful analytics solutions. Big Data Analytics with Hadoop 3 shows you how to do just that, by providing insights into the software as well as its benefits with the help of practical examples. Once you have taken a tour of Hadoop 3's latest features, you will get an overview of HDFS, MapReduce, and YARN, and how they enable faster, more efficient big data processing. You will then move on to learning how to integrate Hadoop with open source tools, such as Python and R, to analyze and visualize data and perform statistical computing on big data. As you get acquainted with all this, you will explore how to use Hadoop 3 with Apache Spark and Apache Flink for real-time data analytics and stream processing. In addition to this, you will understand how to use Hadoop to build analytics solutions on the cloud and an end-to-end pipeline to perform big data analysis using practical use cases. By the end of this book, you will be well-versed with the analytical capabilities of the Hadoop ecosystem. You will be able to build powerful solutions to perform big data analytics and gain insights effortlessly.
Scientific Computing and Big Data Analysis with Python and Hadoop

Data analysis

Download OnlineRetail.csv from the link provided with the book. Then, you can load the file using Pandas.

The following is a simple way of reading a local file using Pandas:

import pandas as pd
path = '/Users/sridharalla/Documents/OnlineRetail.csv'
df = pd.read_csv(path)
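Since OnlineRetail.csv is not bundled here, the same call can be tried on a small stand-in file. The column names below are placeholders chosen for illustration, not the full OnlineRetail schema:

```python
import os
import tempfile

import pandas as pd

# Write a tiny stand-in CSV (placeholder for OnlineRetail.csv)
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as f:
    f.write("InvoiceNo,Quantity,UnitPrice\n"
            "536365,6,2.55\n"
            "536366,2,3.39\n")
    path = f.name

# Same call as above, just pointed at the stand-in file
df = pd.read_csv(path)
print(df.shape)  # (2, 3)

os.remove(path)
```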

However, since we are analyzing data in a Hadoop cluster, we should read the file from HDFS rather than the local filesystem. The following is an example of how a file in HDFS can be loaded into a pandas DataFrame:

import pandas as pd
from hdfs import InsecureClient

# Connect to the NameNode's WebHDFS endpoint (port 9870 by default in Hadoop 3)
client_hdfs = InsecureClient('http://localhost:9870')
with client_hdfs.read('/user/normal/OnlineRetail.csv', encoding='utf-8') as reader:
    df = pd.read_csv(reader, index_col=0)
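The key detail is that the client's read() context manager yields a file-like reader, and pd.read_csv accepts any file-like object. A minimal sketch of that same pattern, with an in-memory buffer standing in for the HDFS reader (no cluster required; the data is invented for illustration):

```python
import io

import pandas as pd

# In-memory stand-in for the file-like reader returned by client_hdfs.read(...)
reader = io.StringIO(
    "Idx,InvoiceNo,Quantity\n"
    "0,536365,6\n"
    "1,536366,2\n"
)

# index_col=0 makes the first column the DataFrame index, as in the snippet above
df = pd.read_csv(reader, index_col=0)
print(df)
```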

Next, take a first look at the data:

df.head(3)

This displays the top three entries in the DataFrame.
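head(3) can be tried on any DataFrame. A quick, self-contained illustration with a toy frame (the column names are placeholders, not the real OnlineRetail schema):

```python
import pandas as pd

# Toy DataFrame with placeholder columns
df = pd.DataFrame({
    "InvoiceNo": [536365, 536366, 536367, 536368, 536369],
    "Quantity": [6, 2, 3, 1, 4],
})

# head(3) returns the first three rows, in their original order
top3 = df.head(3)
print(top3)
```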

We can now experiment with the data. Enter the following:

len(df)

This outputs the length, or size (the number of rows), of the DataFrame.
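Once the row count is known, simple aggregations follow. A hedged sketch of one such experiment, assuming Quantity and UnitPrice columns like those in OnlineRetail.csv (the rows below are invented for illustration):

```python
import pandas as pd

# Toy frame; columns assumed to mirror OnlineRetail.csv, values invented
df = pd.DataFrame({
    "StockCode": ["85123A", "71053", "85123A"],
    "Quantity": [6, 6, 2],
    "UnitPrice": [2.55, 3.39, 2.55],
})

print(len(df))  # number of rows: 3

# Revenue per line item, then total revenue per product
df["Revenue"] = df["Quantity"] * df["UnitPrice"]
per_item = df.groupby("StockCode")["Revenue"].sum()
print(per_item)
```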