Reproducible Data Science with Pachyderm

By Svetlana Karslioglu

Overview of this book

Pachyderm is an open source project that enables data scientists to run reproducible data pipelines and scale them to an enterprise level. This book will teach you how to implement Pachyderm to create collaborative data science workflows and reproduce your ML experiments at scale. You'll begin your journey by exploring the importance of data reproducibility and comparing different data science platforms. Next, you'll explore how Pachyderm fits into the picture and its significance, followed by learning how to install Pachyderm locally on your computer or on a cloud platform of your choice. You'll then discover the architectural components and Pachyderm's main pipeline principles and concepts. The book demonstrates how to use Pachyderm components to create your first data pipeline and advances to cover common data operations, such as uploading data to and downloading data from Pachyderm, to create more complex pipelines. Based on what you've learned, you'll develop an end-to-end ML workflow, before trying out hyperparameter tuning and the different supported Pachyderm language clients. Finally, you'll learn how to use a SaaS version of Pachyderm with Pachyderm Notebooks. By the end of this book, you will have learned all aspects of running your data pipelines in Pachyderm and managing them on a day-to-day basis.
Table of Contents (16 chapters)

Section 1: Introduction to Pachyderm and Reproducible Data Science
Section 2: Getting Started with Pachyderm
Section 3: Pachyderm Clients and Tools

Summary

In this chapter, we have discussed a number of key concepts that explain why reproducibility matters and why it should be part of a successful data science process.

We've learned that data science models analyze historical data as input, with the goal of predicting the most probable and most favorable outcome. We've established that replication, the ability to reproduce the results of a scientific experiment, is one of the fundamental principles of good research and one of the best ways to ensure that your team is doing everything it can to reduce bias in your models. Bias can creep into a model through misrepresentation in the training dataset, which often reflects historical and social realities and the norms accepted in society. Another way to reduce bias is to build a diverse team that includes representatives of all genders, races, and backgrounds.

We've learned that data dredging, or fishing, is an unethical technique in which a data scientist sets out to prove a predefined hypothesis by cherry-picking the results of an experiment, reporting only those that support the desired outcome and ignoring any inconvenient trends.
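
To make the concept concrete, here is a minimal illustrative Python sketch (our own illustration, not an example from the book) of how the effect arises: if you test enough unrelated features against a random outcome, a handful will appear statistically significant by pure chance, and reporting only those is data dredging.

```python
# A minimal, hypothetical sketch of data dredging: test many unrelated
# features against a random outcome and report only the "significant" ones.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

n_samples, n_features = 50, 100
features = rng.normal(size=(n_samples, n_features))  # pure noise
outcome = rng.normal(size=n_samples)                 # pure noise

# Correlate every feature with the outcome and collect the p-values.
p_values = [stats.pearsonr(features[:, i], outcome)[1]
            for i in range(n_features)]

# With 100 tests at alpha = 0.05, roughly 5 features pass by chance alone.
# Reporting only these, while hiding the other ~95 tests, is data dredging.
significant = [i for i, p in enumerate(p_values) if p < 0.05]
print(f"'Significant' features found in pure noise: {significant}")
```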

We've also learned about the MLOps methodology, a lifecycle for machine learning applications that is similar in principle to the DevOps software lifecycle. MLOps includes the following main phases: planning, development, training, validation, deployment, and monitoring. These phases repeat continuously, creating a feedback loop that ensures seamless experiment management from planning through development and testing to production and post-production.
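
As a rough schematic (our own sketch, not code from the book), the feedback loop can be expressed as a simple orchestration skeleton in which each phase feeds the next and monitoring results flow back into the following planning cycle:

```python
# A schematic, hypothetical sketch of the MLOps feedback loop.
PHASES = ["planning", "development", "training",
          "validation", "deployment", "monitoring"]

def run_phase(name: str, context: dict) -> dict:
    # Placeholder: a real system would call out to data versioning,
    # training, CI/CD, and monitoring tooling at each phase.
    print(f"[iteration {context['iteration']}] {name}")
    return {**context, name: "done"}

context = {"iteration": 0}
for iteration in range(2):  # in practice, the loop repeats indefinitely
    context["iteration"] = iteration
    for phase in PHASES:
        context = run_phase(phase, context)
    # Monitoring output becomes input to the next planning cycle.
```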

We've also reviewed some of the most important aspects of ethical AI, a discipline that focuses on the ethical implications of artificial intelligence, robotics, and data science. Failing to implement an ethical AI process in your organization might lead to undesirable legal consequences if deployed production models are found to be discriminatory.

In the next chapter, we will learn about the main concepts of the Pachyderm version-control system, which can help you address many of the issues described in this chapter.