Use cases
Having understood the data ecosystem and its constituent elements, let's finally look at some practical use cases that could lead an organization to start thinking in terms of data rather than processes.
Use case 1 – Security
Until a few years ago, the best way to combat external cybersecurity threats was to build a series of firewalls that were assumed to be impenetrable, thereby securing the systems behind them. To combat internal cyber attacks, antivirus software was considered more than sufficient. This traditional defense gave a sense of security, but it was more illusion than reality. Attackers are well versed in hiding in plain sight, so looking for "known bad" signatures did little to counter Advanced Persistent Threats (APTs). As systems grew in complexity, attack patterns became equally sophisticated, with coordinated hacking efforts persisting over long periods and exploiting every aspect of a vulnerable system.
For example, a common use case within the security domain is anomaly detection in generated machine data, where the data is explored to identify any non-homogeneous event or transaction within a seemingly homogeneous set of events. An example of anomaly detection is when banks perform sophisticated transformations and context association on incoming credit card transactions to identify whether a transaction looks suspicious. Banks do this to prevent fraudsters from defrauding the bank, either directly or indirectly.
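As a minimal illustration of this idea (not the transformation pipeline a bank would actually run), the following Python sketch flags a transaction whose amount deviates strongly from a customer's historical spending using a simple z-score. The field names and the threshold of 3 are illustrative assumptions.

```python
import statistics

def flag_suspicious(history, new_amount, z_threshold=3.0):
    """Flag a transaction whose amount deviates strongly from the
    customer's historical spending pattern (simple z-score check).

    history      -- list of past transaction amounts for this customer
    new_amount   -- amount of the incoming transaction
    z_threshold  -- illustrative cut-off; real systems use richer models
    """
    if len(history) < 2:
        return False  # not enough context to judge
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_amount != mean
    z_score = abs(new_amount - mean) / stdev
    return z_score > z_threshold

# Example: a customer who normally spends 20-60 euros suddenly spends 900
past = [25.0, 40.0, 33.5, 58.0, 21.0, 47.5]
print(flag_suspicious(past, 900.0))  # True -> raise an alert for review
```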
Organizations responded by creating hunting teams that looked at various data sources (for example, system logs, network packets, and firewall access logs) with a view to doing the following (a simple alerting sketch follows the list):
- Hunting for undetected intrusions/breaches
- Detecting anomalies and raising alerts in connection with any malicious activity
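The sketch below gives a flavor of the second goal, under the assumption that firewall access logs have already been parsed into (source IP, action) records; it raises an alert when a single source generates an unusually high number of denied connections. The field names and the threshold are illustrative, not taken from any particular product.

```python
from collections import Counter

def deny_spike_alerts(firewall_events, deny_threshold=100):
    """Raise alerts for source IPs with an unusually high number of
    denied connections in one batch of firewall access-log events.

    firewall_events -- iterable of (source_ip, action) tuples,
                       where action is "ALLOW" or "DENY"
    deny_threshold  -- illustrative cut-off for a single batch
    """
    denies = Counter(ip for ip, action in firewall_events if action == "DENY")
    return [
        f"ALERT: {ip} produced {count} denied connections"
        for ip, count in denies.items()
        if count >= deny_threshold
    ]

# Example batch: one host is clearly probing the firewall
events = [("10.0.0.5", "DENY")] * 150 + [("10.0.0.9", "ALLOW")] * 20
for alert in deny_spike_alerts(events):
    print(alert)
```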
The main challenges for organizations in terms of creating these hunting teams were the following:
- Data scattered throughout the organization's IT landscape
- Data quality issues and multiple data versioning issues
- Access and contractual limitations
All these requirements and challenges created the need for a single platform that can support various data formats and is capable of the following (a minimal correlation sketch follows the list):
- Long-term data retention
- Correlating different data sources
- Providing fast access to correlated data
- Real-time analysis
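To make the correlation capability concrete, here is a minimal Python sketch that joins firewall deny events with VPN authentication records by source IP, so an analyst can quickly see which user accounts are associated with suspicious traffic. The record shapes and field names are assumptions made for the example, not a prescription.

```python
def correlate_by_ip(firewall_denies, vpn_sessions):
    """Join firewall deny events with VPN session records on source IP.

    firewall_denies -- list of dicts like {"src_ip": ..., "dst_port": ...}
    vpn_sessions    -- list of dicts like {"ip": ..., "user": ...}
    Returns one combined record per deny event that matches a known session.
    """
    # Index VPN sessions by IP for fast lookup (one pass over each source)
    user_by_ip = {session["ip"]: session["user"] for session in vpn_sessions}
    return [
        {**event, "user": user_by_ip[event["src_ip"]]}
        for event in firewall_denies
        if event["src_ip"] in user_by_ip
    ]

denies = [{"src_ip": "10.0.0.5", "dst_port": 22},
          {"src_ip": "10.0.0.7", "dst_port": 445}]
sessions = [{"ip": "10.0.0.5", "user": "j.doe"}]
print(correlate_by_ip(denies, sessions))
# [{'src_ip': '10.0.0.5', 'dst_port': 22, 'user': 'j.doe'}]
```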
Use case 2 – Modem data collection
XYZ is a large telecom company that provides modems to its clients for high-speed internet access. The company purchases these modems from four different vendors and distributes them under its own brand. It has a good customer base, with around 1 million modems deployed across a vast geographic area. This may sound all well and good for the business, but the company receives around 100 complaints daily, by phone, about modems not working. To handle these customer complaints and provide efficient after-sales service, the company has to employ 25 customer engagement staff on a full-time basis. Every customer call lasts around five minutes, which amounts to (5 min * 100 calls) = 500 minutes spent on modem complaints every day. In addition, every third call results in the recall of a modem and the dispatch of a replacement to the customer, all at the company's expense.
The company has further identified that almost 90% of the returned modems work properly and, hence, the actual root of the problem is not modems malfunctioning, but rather faulty or incorrect setup.
All told, handling calls and replacing non-faulty modems is costing the company 1 million euros annually.
It has now decided to take a more proactive approach to solving the issue so that it can detect whether the problem lies with the modem itself or with the actual setup of the modem. To do this, it plans to collect anonymized data from each modem every second, analyze it against certain baseline conditions, and create alerts if there is a significant deviation from the norm.
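A minimal sketch of such baseline checking might look as follows; the metric names, baseline values, and tolerance are purely illustrative assumptions, not the company's actual rules.

```python
# Illustrative baseline values for a healthy modem; real baselines would be
# derived from historical data per modem model and firmware version.
BASELINE = {"snr_db": 35.0, "uptime_ratio": 0.99, "packet_loss_pct": 0.5}
TOLERANCE = 0.20  # alert if a metric deviates more than 20% from its baseline

def check_modem_sample(modem_id, sample):
    """Compare one per-second telemetry sample against the baseline and
    return alert messages for any metric that deviates significantly."""
    alerts = []
    for metric, expected in BASELINE.items():
        observed = sample.get(metric)
        if observed is None:
            continue  # metric missing from this sample
        deviation = abs(observed - expected) / expected
        if deviation > TOLERANCE:
            alerts.append(
                f"ALERT modem={modem_id} {metric}={observed} "
                f"(expected ~{expected}, deviation {deviation:.0%})"
            )
    return alerts

# Example: a modem whose signal-to-noise ratio has dropped sharply
print(check_modem_sample("MDM-001",
                         {"snr_db": 21.0, "uptime_ratio": 0.99,
                          "packet_loss_pct": 0.6}))
```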
Each modem sends around 1 kilobyte of data every second. With one million modems out there, this results in 1 KB * 1,000,000 = 1,000,000 KB = 1 GB/sec.
Thus, in a day, the company needs to collect 1 GB/sec * 60 sec * 60 min * 24 hours = 86,400 GB = 86.4 TB of data.
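As a quick sanity check of these back-of-the-envelope numbers (assuming decimal units, where 1 GB = 10^9 bytes), the arithmetic can be reproduced in a few lines:

```python
# Back-of-the-envelope data volume, using decimal units (1 GB = 10**9 bytes)
modems = 1_000_000
bytes_per_modem_per_sec = 1_000          # ~1 KB of telemetry per second
seconds_per_day = 60 * 60 * 24           # 86,400 seconds

per_second_gb = modems * bytes_per_modem_per_sec / 10**9
per_day_tb = per_second_gb * seconds_per_day / 1_000

print(f"{per_second_gb:.1f} GB/sec, {per_day_tb:.1f} TB/day")
# 1.0 GB/sec, 86.4 TB/day
```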
This is a huge amount of data and, in order to collect it, the company needs a platform capable not only of fast ingestion but also of quick real-time analysis. It therefore decides to build a platform that can handle data at this intensity and volume.