
Use cases


Having understood the data ecosystem and its constituent elements, let's finally look at some practical use cases that could lead an organization to start thinking in terms of data rather than processes.

 

Use case 1 – Security

Until a few years ago, the standard way to combat external cyber security threats was to build a series of firewalls, assumed to be impenetrable, that protected the systems behind them. To combat internal cyber attacks, anti-virus software was considered more than sufficient. This traditional defense gave a sense of security, but it was more illusion than reality. Attackers are well versed in hiding in plain sight, so scanning for "known bad" signatures did little to combat Advanced Persistent Threats (APTs). As systems grew in complexity, attack patterns also became more sophisticated, with coordinated hacking efforts persisting over long periods and exploiting every aspect of a vulnerable system.

For example, one use case within the security domain is anomaly detection in machine-generated data, where the data is explored to identify any non-homogeneous event or transaction within a seemingly homogeneous set of events. A concrete example is banks performing sophisticated transformations and context association on incoming credit card transactions to decide whether a transaction looks suspicious; they do this to prevent fraudsters from looting the bank, either directly or indirectly.
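As a minimal sketch of this idea (not how a real bank's fraud engine works), the snippet below flags a credit card transaction whose amount deviates sharply from a customer's historical baseline using a simple z-score check. The customer IDs, amounts, and threshold are illustrative assumptions:

```python
from statistics import mean, stdev

# Illustrative transaction history per customer (amounts in EUR); the
# customer ID and values are assumptions made up for this sketch.
history = {"cust-42": [35.0, 40.5, 28.9, 55.0, 42.3, 38.7, 60.1, 33.4]}

def is_suspicious(customer_id, amount, threshold=3.0):
    """Flag a transaction whose amount lies more than `threshold`
    standard deviations away from the customer's historical mean."""
    past = history.get(customer_id, [])
    if len(past) < 2:            # not enough context to judge
        return False
    mu, sigma = mean(past), stdev(past)
    if sigma == 0:               # constant history; fall back to exact match
        return amount != mu
    z = abs(amount - mu) / sigma
    return z > threshold

print(is_suspicious("cust-42", 41.0))    # in line with the baseline -> False
print(is_suspicious("cust-42", 2500.0))  # large deviation -> True
```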

Organizations responded by creating hunting teams that looked at a variety of data sources (for example, system logs, network packets, and firewall access logs) with a view to doing the following:

  • Hunting for undetected intrusions/breaches
  • Detecting anomalies and raising alerts in connection with any malicious activity

The main challenges organizations faced in creating these hunting teams were the following:

  • Data scattered throughout the organization's IT landscape
  • Data quality issues and multiple data versioning issues
  • Access and contractual limitations

All these requirements and challenges created the need for a single platform that supports various data formats and is capable of the following (a minimal correlation sketch follows the list):

  • Long-term data retention
  • Correlating different data sources
  • Providing fast access to correlated data
  • Real-time analysis
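To make the correlation capability concrete, the sketch below joins firewall events with authentication events on source IP within a time window and surfaces allowed accesses that were never authenticated. The record layouts, field names, and window size are assumptions made purely for illustration:

```python
from datetime import datetime, timedelta

# Illustrative, hand-written log records; the schemas are assumptions.
firewall_log = [
    {"ts": datetime(2018, 3, 1, 9, 15), "src_ip": "10.0.0.7",  "action": "ALLOW"},
    {"ts": datetime(2018, 3, 1, 9, 17), "src_ip": "10.0.0.99", "action": "ALLOW"},
]
auth_log = [
    {"ts": datetime(2018, 3, 1, 9, 14), "src_ip": "10.0.0.7", "result": "SUCCESS"},
]

def unauthenticated_accesses(fw_events, auth_events, window=timedelta(minutes=10)):
    """Return firewall ALLOW events with no successful authentication
    from the same source IP within the preceding time window."""
    suspicious = []
    for fw in fw_events:
        authenticated = any(
            a["src_ip"] == fw["src_ip"]
            and a["result"] == "SUCCESS"
            and fw["ts"] - window <= a["ts"] <= fw["ts"]
            for a in auth_events
        )
        if fw["action"] == "ALLOW" and not authenticated:
            suspicious.append(fw)
    return suspicious

for event in unauthenticated_accesses(firewall_log, auth_log):
    print("Possible breach:", event["src_ip"], event["ts"])
```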


Use case 2 – Modem data collection

XYZ is a telecom giant that provides modems to its clients for high-speed internet access. The company purchases these modems from four different vendors and distributes them under its own brand. It has a sizeable customer base, with around 1 million modems deployed across a vast geographic area. This may sound all well and good for the business, but the company receives around 100 complaints daily, by phone, about modems not working. To handle these customer complaints and provide efficient after-sales service, the company has to employ 25 customer engagement staff on a full-time basis. Every call lasts around five minutes, which amounts to 5 min * 100 calls = 500 minutes dedicated to solving modem complaints every day. In addition, every third call results in the recall of a modem and a replacement being sent to the customer, all at the company's expense.

The company has further identified that almost 90% of the returned modems work properly and, hence, the actual root of the problem is not modems malfunctioning, but rather faulty or incorrect setup.

All told, handling calls and replacing non-faulty modems is costing the company 1 million euros annually.

It has now decided to take a more proactive approach to the issue so that it can detect whether the problem lies with the modem itself or with the modem's setup. To do this, it plans to collect anonymized data from each modem every second, analyze it against certain baseline conditions, and raise alerts if there is a significant deviation from the norm.
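A minimal sketch of such a baseline check on one per-second sample is shown below. The metric names, acceptable ranges, and sample layout are purely illustrative assumptions; real baselines would come from vendor specifications and historical data:

```python
# Hypothetical baseline conditions per metric: (min, max) acceptable range.
BASELINES = {
    "snr_db":          (25.0, 45.0),   # signal-to-noise ratio
    "downstream_mbps": (50.0, 120.0),  # downstream throughput
    "resync_count":    (0, 2),         # resyncs since the last sample
}

def check_sample(modem_id, sample):
    """Compare one per-second sample against baseline ranges and
    return a list of alert messages for out-of-range metrics."""
    alerts = []
    for metric, (low, high) in BASELINES.items():
        value = sample.get(metric)
        if value is None or not (low <= value <= high):
            alerts.append(f"{modem_id}: {metric}={value} outside [{low}, {high}]")
    return alerts

# Example per-second sample with illustrative values.
print(check_sample("modem-0001",
                   {"snr_db": 12.4, "downstream_mbps": 80.0, "resync_count": 5}))
```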

Each modem sends around 1 kilobyte of data every second. With one million modems out there, this results in 1 KB * 1,000,000 = 1,000,000 KB = 1 GB per second.

Thus, in a day, the company needs to collect 1 GB/sec * 60 sec * 60 min * 24 hours = 86,400 GB = 86.4 TB of data.
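These back-of-the-envelope figures can be reproduced in a few lines (using decimal units, as in the text, where 1 GB = 1,000,000 KB):

```python
# Back-of-the-envelope sizing for the modem telemetry stream,
# matching the figures quoted in the text.
modems = 1_000_000          # modems in the field
sample_kb = 1               # kilobytes sent by each modem per second

ingest_gb_per_sec = modems * sample_kb / 1_000_000
daily_tb = ingest_gb_per_sec * 60 * 60 * 24 / 1000

print(f"Ingest rate: {ingest_gb_per_sec:.0f} GB/sec")   # 1 GB/sec
print(f"Daily volume: {daily_tb:.1f} TB/day")           # 86.4 TB/day
```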

This is a huge volume of data and, to collect it, the company needs a platform that is capable not only of fast ingestion but also of quick real-time analysis. It therefore decides to build a platform that can handle data at this intensity and volume.