Driving Data Quality with Data Contracts

By: Andrew Jones

Overview of this book

Despite the passage of time and the evolution of technology and architecture, the challenges we face in building data platforms persist. Our data often remains unreliable and untrusted, and fails to deliver the promised value. With Driving Data Quality with Data Contracts, you’ll discover the potential of data contracts to transform how you build your data platforms, finally overcoming these enduring problems. You’ll learn how establishing contracts as the interface allows you to explicitly assign responsibility and accountability for the data to those who know it best—the data generators—and give them the autonomy to generate and manage data as required. The book will show you how data contracts ensure that consumers get quality data with clearly defined expectations, enabling them to build on that data with confidence to deliver valuable analytics, performant ML models, and trusted data-driven products. By the end of this book, you’ll have gained a comprehensive understanding of how data contracts can revolutionize your organization’s data culture and provide a competitive advantage by unlocking the real value within your data.
Table of Contents (16 chapters)

Part 1: Why Data Contracts?
Part 2: Driving Data Culture Change with Data Contracts
Part 3: Designing and Implementing a Data Architecture Based on Data Contracts

Assigning responsibility and accountability

Now that we have defined the roles, we need to specify the responsibilities and accountabilities of each role. This ensures that everyone knows what is expected of them and enables them to work together most effectively.

We’ll start with the data generators. As we discussed in the previous section, many of them didn’t realize they were data generators. Therefore, those responsibilities were taken on by the data engineering team that built the pipelines extracting the raw data from upstream services.
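A data contract makes that ownership explicit instead of leaving it implicit in a downstream pipeline. The following is a minimal, hypothetical sketch in Python of how a contract might record who is accountable for a dataset and what consumers can expect; the field names, dataset, and team names are illustrative assumptions, not a format defined by the book.

```python
from dataclasses import dataclass, field
from typing import Dict


# Hypothetical, minimal representation of a data contract.
# The fields (owner, schema, slas) are illustrative only.
@dataclass
class DataContract:
    dataset: str                  # name of the dataset the contract covers
    owner: str                    # the data generator team accountable for the data
    schema: Dict[str, str]        # field name -> expected type
    slas: Dict[str, str] = field(default_factory=dict)  # e.g. freshness targets


# The contract names the data generator as the accountable owner,
# rather than the data engineering team that happens to extract the data.
orders_contract = DataContract(
    dataset="orders",
    owner="checkout-service-team",
    schema={"order_id": "string", "amount": "decimal", "created_at": "timestamp"},
    slas={"freshness": "15 minutes", "completeness": "99.9%"},
)
```

With something like this in place, consumers know exactly which team to hold accountable and what expectations the data should meet, rather than relying on whoever built the extraction pipeline.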

This data engineering team became accountable for the reliability of the data, even though they were not involved in how it was generated or how the structure of the data evolved. The team was very reactive to upstream changes and did their best to limit the impact of those changes. However, there is only so much they can do, and there’s no quick fix you can deploy if the upstream generator suddenly stops writing...