Data Processing with Optimus

By: Dr. Argenis Leon, Luis Aguirre

Overview of this book

Optimus is a Python library that provides a unified API for cleaning, processing, and merging data. It can handle both small and big data, on your local laptop or on remote clusters, using CPUs or GPUs. The book begins by covering the internals of Optimus and how it works in tandem with existing technologies to serve your data processing needs. You'll then learn how to use Optimus to load and save data in text formats such as CSV and JSON, to explore binary files such as Excel, and to process columnar data with Parquet, Avro, and ORC. Next, you'll get to grips with the profiler and its data types - a unique feature of the Optimus DataFrame that assists with data quality. You'll see how to use the plots available in Optimus, such as histograms, frequency charts, and scatter and box plots, and understand how Optimus lets you connect to libraries such as Plotly and Altair. You'll also delve into advanced applications such as feature engineering, machine learning, cross-validation, and natural language processing, and explore the latest advancements in Optimus. Finally, you'll learn how to create data cleaning and transformation functions and add a hypothetical new data processing engine to Optimus. By the end of this book, you'll be able to easily improve your data science workflow with Optimus.
Table of Contents (16 chapters)

Section 1: Getting Started with Optimus
Section 2: Optimus – Transform and Rollout
Section 3: Advanced Features of Optimus

Summary

Loading and saving are the most frequently used operations when wrangling data. Optimus lets you create connections to data sources that can be reused for both loading and saving data. It also supports the most widely used file storage technologies, such as Amazon S3 and Google Cloud Storage, and database connections such as PostgreSQL and MySQL, so that you have all the necessary tools at hand to make your work easier.
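The reusable-connection idea can be sketched in plain Python. This is a minimal illustration, not Optimus's actual API; the `Connection` class and its `endpoint`/`bucket` parameters are hypothetical names chosen for the example.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Connection:
    """Hypothetical reusable handle to a storage location (e.g. an S3 bucket).

    Once created, the same connection can be passed to both load and save
    operations, so credentials and endpoints are defined only once.
    """
    endpoint: str
    bucket: str

    def path(self, filename: str) -> str:
        # Build the full object path for a given file name.
        return f"{self.endpoint}/{self.bucket}/{filename}"


# Create the connection once...
conn = Connection(endpoint="s3://my-endpoint", bucket="raw-data")

# ...and reuse it for both loading and saving.
load_path = conn.path("sales.csv")
save_path = conn.path("sales_clean.parquet")
print(load_path)  # s3://my-endpoint/raw-data/sales.csv
```

The point of the pattern is that the connection details live in one place, so switching from, say, a local folder to an S3 bucket only requires changing the connection object, not every load and save call.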

In terms of databases, we looked at the drivers that each engine/database technology requires to save data to and load data from databases.
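As a rough illustration of the engine-to-driver mapping discussed above, the sketch below pairs database technologies with commonly used Python driver packages. The mapping is an assumption for illustration; check each engine's documentation for the driver it actually requires.

```python
# Commonly used Python driver packages per database technology.
# Illustrative only; your engine may require a different driver.
DB_DRIVERS = {
    "postgresql": "psycopg2",
    "mysql": "pymysql",
    "sqlite": "sqlite3",  # ships with the Python standard library
    "mssql": "pyodbc",
}


def driver_for(db: str) -> str:
    """Return the driver package name for a given database technology."""
    try:
        return DB_DRIVERS[db]
    except KeyError:
        raise ValueError(f"No known driver for {db!r}")


print(driver_for("postgresql"))  # psycopg2
```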

We also explored how to optimize dataframe memory usage, a very important step when handling big data, since you could save as much as 50% of your memory.
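Savings of this kind come largely from downcasting columns to the smallest type that fits their values. Here is a small sketch using pandas rather than Optimus's own optimization call, which may differ; the data is made up for the example.

```python
import pandas as pd

# A column stored as 64-bit integers even though its values fit in 8 bits.
df = pd.DataFrame({"age": [21, 34, 56, 18, 90] * 1000})

before = df.memory_usage(deep=True).sum()

# Downcast to the smallest integer type that can hold the values.
df["age"] = pd.to_numeric(df["age"], downcast="integer")

after = df.memory_usage(deep=True).sum()
print(df["age"].dtype)  # int8
print(f"saved {100 * (1 - after / before):.0f}% of memory")
```

Going from int64 to int8 cuts that column's footprint to roughly an eighth; the overall saving depends on how many columns can be downcast and on how string columns are stored.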

In the next chapter, we will start exploring some basic methods for filtering, deduplicating, and transforming data for further analysis.