Modern Data Architectures with Python

By Brian Lipp

Overview of this book

Modern Data Architectures with Python will teach you how to seamlessly incorporate your machine learning and data science workstreams into your open data platforms. You’ll learn how to take your data and create open lakehouses that work with any technology using tried-and-true techniques, including the medallion architecture and Delta Lake. Starting with the fundamentals, this book will help you build pipelines on Databricks, an open data platform, using SQL and Python. You’ll gain an understanding of notebooks and applications written in Python using standard software engineering tools such as Git, pre-commit, Jenkins, and GitHub. Next, you’ll delve into streaming and batch-based data processing using Apache Spark and Confluent Kafka. As you advance, you’ll learn how to deploy your resources using infrastructure as code and how to automate your workflows and code development. Since the ability to handle and work with AI and ML is a vital component of any data platform, you’ll also explore the basics of ML and how to work with modern MLOps tooling. Finally, you’ll get hands-on experience with Apache Spark, one of the key data technologies in today’s market. By the end of this book, you’ll have amassed a wealth of practical and theoretical knowledge to build, manage, orchestrate, and architect your data ecosystems.
Table of Contents (19 chapters)

Part 1: Fundamental Data Knowledge
Part 2: Data Engineering Toolset
Part 3: Modernizing the Data Platform
Part 4: Hands-on Project

What this book covers

Chapter 1, Modern Data Processing Architecture, provides a thorough introduction to designing data architectures and understanding the types of data processing engines.

Chapter 2, Understanding Data Analytics, provides an overview of the world of data analytics and modeling for various data types.

Chapter 3, Apache Spark Deep Dive, provides a thorough understanding of how Apache Spark works and the background knowledge needed to write Spark code.

Chapter 4, Batch and Stream Processing with Apache Spark, provides a solid foundation to work with Spark for batch workloads and structured streaming data pipelines.

Chapter 5, Streaming Data with Kafka, provides a hands-on introduction to Kafka and its uses in data pipelines, including Kafka Connect and Apache Spark.

Chapter 6, MLOps, provides engineers with the background and hands-on knowledge needed to develop, train, and deploy ML/AI models using the latest tooling.

Chapter 7, Data and Information Visualization, explains how to develop ad hoc data visualization and common dashboards in your data platform.

Chapter 8, Integrating Continuous Integration into Your Workflow, delves deep into how to build Python applications in a CI workflow using GitHub, Jenkins, and Databricks.

Chapter 9, Orchestrating Your Data Workflows, provides practical, hands-on experience with Databricks Workflows, skills that transfer to other orchestration tools.

Chapter 10, Data Governance, explores controlling access to data and dealing with data quality issues.

Chapter 11, Building Out the Ground Work, establishes a foundation for our project using GitHub, Python, Terraform, and PyPI, among other tools.

Chapter 12, Completing Our Project, completes our project, building out GitHub Actions, pre-commit, design diagrams, and lots of Python.