Python Data Analysis - Third Edition

By: Avinash Navlani, Ivan Idris
Overview of this book

Data analysis enables you to generate value from small and big data by discovering new patterns and trends, and Python is one of the most popular tools for analyzing a wide variety of data. With this book, you’ll get up and running using Python for data analysis by exploring the different phases and methodologies used in data analysis and learning how to use modern libraries from the Python ecosystem to create efficient data pipelines. Starting with the essential statistical and data analysis fundamentals using Python, you’ll perform complex data analysis and modeling, data manipulation, data cleaning, and data visualization using easy-to-follow examples. You’ll then understand how to conduct time series analysis and signal processing using ARMA models. As you advance, you’ll get to grips with smart processing and data analytics using machine learning algorithms such as regression, classification, Principal Component Analysis (PCA), and clustering. In the concluding chapters, you’ll work on real-world examples to analyze textual and image data using natural language processing (NLP) and image analytics techniques, respectively. Finally, the book will demonstrate parallel computing using Dask. By the end of this data analysis book, you’ll be equipped with the skills you need to prepare data for analysis and create meaningful data visualizations for forecasting values from data.
Table of Contents (20 chapters)

Section 1: Foundation for Data Analysis
Section 2: Exploratory Data Analysis and Data Cleaning
Section 3: Deep Dive into Machine Learning
Section 4: NLP, Image Analytics, and Parallel Computing

Reading and writing data from Parquet

The Parquet file format provides columnar serialization for pandas DataFrames: it reads and writes DataFrames efficiently in terms of both storage and performance, and it lets you share data across distributed systems without information loss. Note that Parquet does not support duplicate column names, and non-string (for example, purely numeric) column names may also be rejected, depending on the engine.

pandas supports two engines for reading and writing Parquet files: pyarrow and fastparquet. The default engine is pyarrow; if pyarrow is unavailable, pandas falls back to fastparquet. In our example, we use pyarrow. Let's install pyarrow using pip:

pip install pyarrow

You can also install the pyarrow engine in the Jupyter Notebook by putting an ! before the pip keyword. Here is an example:

!pip install pyarrow

Let's write a file using the pyarrow engine:

# Write the employee DataFrame (created earlier) to a Parquet file.
df.to_parquet('employee.parquet', engine='pyarrow')

In the preceding code example, we have written the DataFrame to the employee.parquet file using the to_parquet() method with the pyarrow engine.