Modern Data Architectures with Python

By: Brian Lipp
Overview of this book

Modern Data Architectures with Python will teach you how to seamlessly incorporate your machine learning and data science workstreams into your open data platforms. You’ll learn how to take your data and create open lakehouses that work with any technology using tried-and-true techniques, including the medallion architecture and Delta Lake. Starting with the fundamentals, this book will help you build pipelines on Databricks, an open data platform, using SQL and Python. You’ll gain an understanding of notebooks and applications written in Python using standard software engineering tools such as git, pre-commit, Jenkins, and GitHub. Next, you’ll delve into streaming and batch-based data processing using Apache Spark and Confluent Kafka. As you advance, you’ll learn how to deploy your resources using infrastructure as code and how to automate your workflows and code development. Since the ability to handle AI and ML workloads is a vital component of any data platform, you’ll also explore the basics of ML and how to work with modern MLOps tooling. Finally, you’ll get hands-on experience with Apache Spark, one of the key data technologies in today’s market. By the end of this book, you’ll have amassed a wealth of practical and theoretical knowledge to build, manage, orchestrate, and architect your data ecosystems.
Table of Contents (19 chapters)

Part 1: Fundamental Data Knowledge
Part 2: Data Engineering Toolset
Part 3: Modernizing the Data Platform
Part 4: Hands-on Project

Preface

Hello! Data platforms are popping up everywhere, but not all of them are built the same. We are at the dawn of an era in which most data is stored not in company-owned data centers but in the cloud. Cloud storage is exceptionally cheap, and that abundance of cheap storage drives our choices; cloud processing, too, is often significantly more affordable than properly housing computers in a data center. With this increase in cheap, flexible cloud capability comes elasticity: the ability to grow and shrink as needed. Virtual compute engines do not run directly on physical machines but instead run in abstractions called containers, allowing for temporary use. You no longer need to pay for expensive deep-learning hardware; the cloud can give you quick access at a fraction of the cost.

The next step in this evolution was assembling stacks of technology that worked well together into what was called a data platform. These stacks were often riddled with incompatible technologies forced to interoperate, frequently requiring duct tape to hold everything together. As time went on, a better choice appeared.

With the advent of open technologies to process data such as Apache Spark, we started to see a different path altogether. People began to ask fundamental questions.

What types of data does your platform fully support? It became increasingly important that a data platform support semi-structured and structured data equally well.

What kinds of analysis and ML does your platform support? We wanted to create, train, and deploy AI and ML on our data platforms using modern tooling stacks. Analysis must be available through a variety of languages and tooling options, not just a traditional JDBC SQL path.

How well does it support streaming? Streaming data has become the norm in many companies, and with it comes a significant jump in complexity. A system built to process, store, and work with streaming platforms is critical for many.

Is your platform using only open standards? Open standards might seem like an afterthought, but being able to swap out aging technologies without forced lift-and-shift migrations can be a significant cost saver. Open standards allow a variety of technologies to work together with minimal effort, in stark contrast to many closed data systems.

This book will serve as a guide to all of these questions and show you how to work with data platforms efficiently.
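To give a flavor of what this open, Spark-centric approach looks like in practice, here is a minimal PySpark sketch. It assumes the pyspark and delta-spark packages are installed, and the file paths are purely hypothetical; it reads both semi-structured JSON and structured CSV through the same DataFrame API and persists them in Delta Lake, an open storage format that other engines can also read.

from delta import configure_spark_with_delta_pip
from pyspark.sql import SparkSession

# Configure a local Spark session with the Delta Lake extensions enabled
# (assumes: pip install pyspark delta-spark)
builder = (
    SparkSession.builder
    .appName("open-lakehouse-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Read semi-structured JSON and structured CSV with the same DataFrame API
events = spark.read.json("/data/raw/events/")                               # hypothetical path
customers = spark.read.option("header", True).csv("/data/raw/customers/")   # hypothetical path

# Persist both as Delta tables, an open format any compatible engine can query
events.write.format("delta").mode("overwrite").save("/data/bronze/events")
customers.write.format("delta").mode("overwrite").save("/data/bronze/customers")

Nothing in this sketch is tied to a single vendor: the same code runs on a laptop, on Databricks, or on any Spark cluster, which is exactly the kind of flexibility the questions above are probing for.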