Data Processing with Optimus

By: Dr. Argenis Leon, Luis Aguirre

Overview of this book

Optimus is a Python library that works as a unified API for cleaning, processing, and merging data. It can be used for handling small and big data on your local laptop or on remote clusters using CPUs or GPUs. The book begins by covering the internals of Optimus and how it works in tandem with existing technologies to serve your data processing needs. You'll then learn how to use Optimus for loading and saving data from text formats such as CSV and JSON files, exploring binary files such as Excel, and for columnar data processing with Parquet, Avro, and ORC. Next, you'll get to grips with the profiler and its data types - a unique feature of the Optimus dataframe that assists with data quality. You'll see how to use the plots available in Optimus, such as histograms, frequency charts, and scatter and box plots, and understand how Optimus lets you connect to libraries such as Plotly and Altair. You'll also delve into advanced applications such as feature engineering, machine learning, cross-validation, and natural language processing functions, and explore the advancements in Optimus. Finally, you'll learn how to create data cleaning and transformation functions and add a hypothetical new data processing engine with Optimus. By the end of this book, you'll be able to improve your data science workflow with Optimus easily.
Table of Contents (16 chapters)

Section 1: Getting Started with Optimus
Section 2: Optimus – Transform and Rollout
Section 3: Advanced Features of Optimus

Saving a dataframe

Saving in Optimus is done by calling any of the methods available on the save accessor of a dataframe instance. In this section, we'll learn how to save to a local or remote filesystem, as well as to a previously established database or remote storage connection.
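As a quick illustration, here is a minimal sketch of what saving through the save accessor can look like. The column values and file names are placeholders, and the exact set of engines and save methods available depends on your Optimus installation:

from optimus import Optimus

# Start Optimus on the pandas engine; other engines (such as Dask or cuDF)
# can be selected the same way if they are installed.
op = Optimus("pandas")

# A small placeholder dataframe so that we have something to save.
df = op.create.dataframe({
    "name": ["alice", "bob", "carol"],
    "age": [34, 28, 45],
})

# Save the dataframe in different formats through the save accessor.
df.save.csv("people.csv")
df.save.json("people.json")
df.save.parquet("people.parquet")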

When saving data in files, it is important to understand which format to use so that you can gain speed when reading or processing. There is plenty of information available about how to select the correct file format. I like the following for its simplicity:

"Finding the right file format for your particular dataset can be tough. In general, if the data is wide, has a large number of attributes, and is write-heavy, then a row-based approach may be best. If the data is narrower, has a fewer number of attributes, and is read-heavy, then a column-based approach may be best."

(Datanami, https://www.datanami.com/2018/05/16/big-data-file-formats-demystified/)
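Continuing the earlier sketch, the following hypothetical snippet makes that trade-off concrete by writing the same dataframe once in a row-based format and once in a columnar one (the file names are placeholders):

# Row-based format: tends to suit wide, write-heavy data.
df.save.csv("events_row_based.csv")

# Columnar format: tends to suit narrow, read-heavy data.
df.save.parquet("events_columnar.parquet")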

Saving to a local...