Reproducible Data Science with Pachyderm

By: Svetlana Karslioglu

Overview of this book

Pachyderm is an open source project that enables data scientists to run reproducible data pipelines and scale them to an enterprise level. This book will teach you how to implement Pachyderm to create collaborative data science workflows and reproduce your ML experiments at scale. You'll begin your journey by exploring the importance of data reproducibility and comparing different data science platforms. Next, you'll see where Pachyderm fits into this picture and why it matters, and learn how to install Pachyderm locally on your computer or on a cloud platform of your choice. You'll then discover Pachyderm's architectural components and its main pipeline principles and concepts. The book demonstrates how to use Pachyderm components to create your first data pipeline, then covers common data operations, such as uploading data to and downloading data from Pachyderm, before moving on to more complex pipelines. Building on what you've learned, you'll develop an end-to-end ML workflow, try out hyperparameter tuning, and work with the supported Pachyderm language clients. Finally, you'll learn how to use a SaaS version of Pachyderm with Pachyderm Notebooks. By the end of this book, you will know how to run your data pipelines in Pachyderm and manage them on a day-to-day basis.
Table of Contents (16 chapters)

Section 1: Introduction to Pachyderm and Reproducible Data Science
Section 2: Getting Started with Pachyderm
Section 3: Pachyderm Clients and Tools

Understanding inputs

We described inputs in detail, with examples, in Chapter 2, Pachyderm Basics. Therefore, in this section, we'll only note that inputs define the type of your pipeline. You can specify the following types of Pachyderm inputs (a minimal example spec follows this list):

  • PFS is the generic input that defines a standard, single-input pipeline and serves as the building block for the inputs in all multi-input pipelines.
  • Cross is an input that creates a cross-product of the datums from two input repositories. The resulting output will include all possible combinations of all datums from the input repositories.
  • Union is an input that adds datums from one repository to the datums in another repository.
  • Join is an input that matches datums with a specific naming pattern.
  • Spout is an input that consumes data from a third-party source and adds it to the Pachyderm filesystem for further processing.
  • Group is an input that combines datums from multiple repositories based on a configured naming pattern.
  • Cron is an input that triggers a pipeline run periodically, at a time interval that you specify, rather than when new data arrives.
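
To make the difference between a single-input and a multi-input pipeline concrete, the following is a minimal sketch of a pipeline specification that crosses two PFS inputs. Pachyderm accepts pipeline specifications in JSON or YAML; YAML is used here so that the assumptions can be called out in comments. The repository names, glob patterns, worker image, and command are hypothetical placeholders, not examples from this book.

# A minimal sketch of a cross-input pipeline specification.
# The repo names, glob patterns, image, and command below are hypothetical;
# only the layout of the pipeline, input, and transform fields follows
# the Pachyderm pipeline specification.
pipeline:
  name: photos-x-labels             # hypothetical pipeline name
input:
  cross:
    - pfs:
        repo: photos                # hypothetical input repository
        glob: "/*"                  # each top-level file is a separate datum
    - pfs:
        repo: labels                # hypothetical input repository
        glob: "/*"
transform:
  image: python:3.9                 # hypothetical worker image
  cmd: ["python3", "/process.py"]   # hypothetical processing script

With the cross input, every datum from photos is paired with every datum from labels, so three files in each repository would produce nine datums. Replacing cross with union in the same specification would instead process those six datums one by one, without pairing them.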