Big Data Analysis with Python

By Ivan Marin, Ankit Shukla, Sarang VK

Overview of this book

Processing big data in real time is challenging because of scalability limits, data inconsistency, and the need for fault tolerance. Big Data Analysis with Python teaches you how to use tools that can control this data avalanche for you. With this book, you'll learn practical techniques to aggregate data into useful dimensions for later analysis, extract statistical measurements, and transform datasets into features for other systems. The book begins with an introduction to data manipulation in Python using pandas. You'll then get familiar with statistical analysis and plotting techniques. With multiple hands-on activities in store, you'll be able to analyze data distributed across several computers using Dask. As you progress, you'll study how to aggregate data for plots when the entire dataset cannot fit in memory. You'll also explore Hadoop (HDFS and YARN), which will help you tackle larger datasets. The book also covers Spark and explains how it interacts with other tools. By the end of this book, you'll be able to bootstrap your own Python environment, process large files, and manipulate data to generate statistics, metrics, and graphs.

Summary


After a review of what big data is, we learned about some of the tools designed to store and process very large volumes of data. Hadoop is an entire ecosystem of frameworks and tools, including HDFS, which stores data in a distributed fashion across a large number of commodity computing nodes, and YARN, a resource and job manager. We saw how to manipulate data directly on HDFS using the hdfs dfs file system commands.
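
As a rough illustration, the same kind of HDFS interaction can be scripted from Python by shelling out to the hdfs client. The paths and file name below are hypothetical, and the sketch assumes the hdfs binary is installed and on your PATH:

```python
import subprocess

# Hypothetical HDFS paths and local file name, for illustration only;
# assumes the hdfs client is installed and on the PATH.
commands = [
    ["hdfs", "dfs", "-mkdir", "-p", "/user/analytics/raw"],        # create a directory tree
    ["hdfs", "dfs", "-put", "sales.csv", "/user/analytics/raw/"],  # copy a local file into HDFS
    ["hdfs", "dfs", "-ls", "/user/analytics/raw"],                 # list the directory contents
]

for cmd in commands:
    subprocess.run(cmd, check=True)  # check=True raises if a command fails
```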

We also learned about Spark, a powerful and flexible parallel processing framework that integrates well with Hadoop. Spark offers several APIs, such as SQL, GraphX, and Streaming. We learned how Spark represents data through its DataFrame API, whose operations are similar to pandas methods. We also saw how to store data efficiently using the Parquet file format, and how to improve performance when analyzing data by partitioning it. To finish up, we saw how to handle unstructured data files, such as plain text.
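The short PySpark sketch below ties these points together: a pandas-like aggregation on a DataFrame, a partitioned Parquet write, and reading an unstructured text file. The data, column names, and paths are invented for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("summary-sketch").getOrCreate()

# Hypothetical sales data; columns and values are illustrative only.
df = spark.createDataFrame(
    [("2019-01-01", "US", 120.0),
     ("2019-01-01", "BR", 80.0),
     ("2019-01-02", "US", 95.5)],
    ["date", "country", "amount"],
)

# pandas-like computation: filter, group, and aggregate.
totals = (df.filter(F.col("amount") > 50)
            .groupBy("country")
            .agg(F.sum("amount").alias("total_amount")))
totals.show()

# Write as Parquet, partitioned by country: queries that filter on
# country then read only the matching directories (partition pruning).
df.write.mode("overwrite").partitionBy("country").parquet("/tmp/sales_parquet")

# Unstructured text: each line of the file becomes a row in a
# single string column named 'value'.
lines = spark.read.text("/tmp/application.log")
```

Note that partitioning pays off when the partition column appears often in filters; partitioning by a high-cardinality column, by contrast, produces many tiny files and can hurt performance.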

In the next chapter, we will go more deeply...