Architecting Data-Intensive Applications

By: Anuj Kumar

Overview of this book

Are you an architect or a developer who looks at your own applications gingerly while browsing through Facebook and silently applauding it for its data-intensive, yet fluent and efficient, behaviour? This book is your gateway to building smart data-intensive systems by incorporating the core data-intensive architectural principles, patterns, and techniques directly into your application architecture.

This book starts by taking you through the primary design challenges involved in architecting data-intensive applications. You will learn how to implement data curation and data dissemination, depending on the volume of your data. You will then implement your application architecture one step at a time. You will get to grips with implementing the correct message delivery protocols and creating a data layer that doesn't fail under high traffic. This book will show you how you can divide your application into layers, each of which adheres to the single responsibility principle. By the end of this book, you will be able to streamline your thoughts and make the right choices in terms of technologies and architectural principles based on the problem at hand.

Chapter 1. Exploring the Data Ecosystem

"In God we trust. All others must bring data."

—W. Edwards Deming, https://deming.org/theman/overview

Until a few years ago, a successful organization was one that had access to superior technology, which, in most cases, was either proprietary to the organization or acquired at great expense. These technologies enabled organizations to define complex business process flows addressing specific use cases that helped them generate revenue. In short, technology drove the business; data did not constitute any part of the decision-making process. With such an approach, organizations could utilize only a part of their data. This resulted in lost opportunities and, in some cases, unsatisfied customers. One of the reasons for these missed opportunities was that there was no reliable and economical way to store such huge quantities of data, especially when organizations didn't yet know how to derive business value from it. Hardware costs were a prohibitive factor.

Things started to change a few years ago when Google published its white paper on GFS (https://static.googleusercontent.com/media/research.google.com/en/archive/gfs-sosp2003.pdf). Its ideas were picked up by Doug Cutting, who went on to create Apache Hadoop, an open source framework whose distributed file system (HDFS) is capable of storing large volumes of data on commodity hardware.

Suddenly, organizations both big and small realized its potential and started storing in Hadoop any piece of data that might later turn into a source of revenue. The industry coined a term for such a huge store of raw data, calling it a data lake.

The Wikipedia definition of a data lake is as follows:

"A data lake is a method of storing data within a system or repository, in its natural format, that facilitates the collocation of data in various schemata and structural forms, usually object blobs or files."

 

In short, a data lake is a collection of various pieces of data that may, or may not, be important to an organization. The key phrase here is natural format. What this means is that the collected data is seldom processed prior to being stored. The reasoning behind this is that any processing may lead to a loss of information, which, in turn, may cost the organization new sources of revenue later on. This does not mean that we never process the data while it is in flight; it means that at least one copy of the data is stored exactly as it was received from the external system.
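To make this concrete, here is a minimal sketch in Python of what this principle might look like in an ingestion step: a byte-for-byte copy of every incoming payload is persisted first, and any in-flight processing works on a separate copy. The directory layout (data-lake/raw, data-lake/curated), the ingest function, and the ingested_at enrichment are hypothetical and purely illustrative.

import json
import time
from pathlib import Path

RAW_ZONE = Path("data-lake/raw")          # untouched copies, natural format
CURATED_ZONE = Path("data-lake/curated")  # processed/enriched copies


def ingest(payload: bytes, source: str) -> None:
    """Persist the payload exactly as received, then process a separate copy."""
    RAW_ZONE.mkdir(parents=True, exist_ok=True)
    CURATED_ZONE.mkdir(parents=True, exist_ok=True)

    # 1. Store at least one copy in the manner in which it was received.
    timestamp = int(time.time() * 1000)
    raw_path = RAW_ZONE / f"{source}-{timestamp}.bin"
    raw_path.write_bytes(payload)

    # 2. Any in-flight processing operates on a copy, never on the original.
    record = json.loads(payload)
    record["ingested_at"] = timestamp  # example enrichment
    curated_path = CURATED_ZONE / f"{source}-{timestamp}.json"
    curated_path.write_text(json.dumps(record))


if __name__ == "__main__":
    ingest(b'{"order_id": 42, "amount": 19.99}', source="orders")

In a real system, the raw zone would typically live in HDFS or object storage rather than on the local filesystem, but the design choice is the same: write the raw copy first and never mutate it.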

But how do organizations fill this data lake, and what data should be stored there? The answer lies in understanding the data ecosystem that exists today. Understanding where the data originates from, and which data to persist, helps organizations become data-driven instead of process-driven. This ability not only helps organizations explore new business opportunities, but also helps them react more quickly in the face of an ever-changing business landscape.

In this introductory chapter, we will:

  • Try to understand what we mean by a data ecosystem
  • Try to understand the characteristics of a data ecosystem; in other words, what constitutes a data ecosystem
  • Talk about some data and information sharing standards and frameworks, such as the Traffic Light Protocol and the Information Exchange Policy framework
  • Continue our exploration with the three Vs (volume, velocity, and variety) of the data ecosystem
  • Conclude with a couple of use cases to prove our point

I hope that this chapter will pique your interest and get you excited about exploring the rest of the book.