Scalable Data Architecture with Java

By: Sinchan Banerjee

Overview of this book

Java architectural patterns and tools help architects build reliable, scalable, and secure data engineering solutions that collect, manipulate, and publish data. This book will help you make the most of the available approaches to architecting data solutions, with clear and actionable advice from an expert. You'll start with an overview of data architecture, exploring the responsibilities of a Java data architect and learning about various data formats, data storage options, databases, and data application platforms, as well as how to choose among them. Next, you'll understand how to architect batch and real-time data processing pipelines. You'll also get to grips with various Java data processing patterns before progressing to data security and governance. The later chapters will show you how to publish Data as a Service and how to architect it. Finally, you'll focus on how to evaluate and recommend an architecture by developing performance benchmarks, estimations, and various decision metrics. By the end of this book, you'll be able to successfully orchestrate data architecture solutions using Java and related technologies, as well as evaluate and present the most suitable solution to your clients.
Table of Contents (19 chapters)

Section 1 – Foundation of Data Systems
Section 2 – Building Data Processing Pipelines
Section 3 – Enabling Data as a Service
Section 4 – Choosing Suitable Data Architecture

Summary

In this chapter, we discussed how to analyze a real-time data engineering problem, identify a suitable streaming platform, and determine the basic characteristics our solution must have to be an effective real-time solution. First, we learned how to choose a hybrid platform that satisfies legal requirements as well as performance and cost-effectiveness goals.

Then, we used the conclusions from our problem analysis to build a robust, reliable, and effective real-time data engineering solution. We learned how to install and run Apache Kafka on a local machine and create topics in that Kafka cluster, and how to develop a Kafka Streams application that performs stream processing and writes its results to an output topic. We then learned how to unit test a Kafka Streams application to make the code more robust and defect-free. After that, we learned how to set up a MongoDB Atlas instance on the AWS cloud. Finally, we learned about Kafka Connect and how to configure...
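The stream-processing step at the heart of such a pipeline reduces to a per-record transformation plus a stateful aggregation, which is precisely what makes a Kafka Streams application straightforward to unit test. As a minimal, broker-free sketch (plain JDK collections stand in for the input and output topics; the word-count example and all names here are illustrative assumptions, not taken from the chapter), the core logic might look like this:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Locale;
import java.util.Map;

public class WordCountSketch {

    // Core aggregation logic: the same flatMap + groupBy + count shape
    // that a Kafka Streams topology would express with its DSL.
    static Map<String, Long> countWords(List<String> messages) {
        Map<String, Long> counts = new LinkedHashMap<>();
        for (String message : messages) {
            // Normalize case and split each record into words.
            for (String word : message.toLowerCase(Locale.ROOT).split("\\W+")) {
                if (!word.isEmpty()) {
                    counts.merge(word, 1L, Long::sum);
                }
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        // Records that would normally arrive on an input topic.
        List<String> records = Arrays.asList("hello kafka", "hello streams");
        System.out.println(countWords(records)); // {hello=2, kafka=1, streams=1}
    }
}
```

In the actual application, the equivalent logic would be wired as Kafka Streams DSL operations (`StreamsBuilder` with `flatMapValues`, `groupBy`, and `count`) reading from an input topic and writing to an output topic. Keeping the transformation isolated like this is what makes unit testing with `TopologyTestDriver` practical: the test exercises the topology without a running broker.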