
Architecting Data-Intensive Applications

By: Anuj Kumar

Overview of this book

Are you an architect or developer who looks at your own applications gingerly while browsing through Facebook, silently applauding it for its data-intensive, yet fluent and efficient, behaviour? This book is your gateway to building smart data-intensive systems by incorporating the core data-intensive architectural principles, patterns, and techniques directly into your application architecture.

This book starts by taking you through the primary design challenges involved in architecting data-intensive applications. You will learn how to implement data curation and data dissemination, depending on the volume of your data. You will then implement your application architecture one step at a time. You will get to grips with implementing the correct message delivery protocols and creating a data layer that doesn't fail under high traffic. This book will show you how to divide your application into layers, each of which adheres to the single responsibility principle. By the end of this book, you will be able to streamline your thoughts and make the right choices of technologies and architectural principles for the problem at hand.

Introducing Hadoop, the Big Elephant


Hadoop was created by Doug Cutting, who also created Apache Lucene. Apache Hadoop is a software library designed and implemented for storing and processing large datasets. When I say large, I mean datasets so massive that they cannot fit on a single box and thus need to be partitioned and distributed across multiple physical boxes, transparently to the user, who should still see the data as one big logical chunk. Hadoop's design is based on the Google File System and is meant to scale from a single-node deployment to thousands of nodes, each of which offers local computation and storage capabilities.
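To make the "one big logical chunk" idea concrete, here is a minimal sketch of a client that lists a directory through the Hadoop FileSystem API. The NameNode URI and the /datasets/events path are illustrative assumptions, not values from this book; the point is that the client works against a single logical namespace even though the blocks of each file live on many DataNodes.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListDataset {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed NameNode address; replace with your cluster's fs.defaultFS.
        conf.set("fs.defaultFS", "hdfs://namenode:8020");

        FileSystem fs = FileSystem.get(conf);

        // One logical namespace: the client never sees which DataNodes
        // actually hold the blocks of these files.
        for (FileStatus status : fs.listStatus(new Path("/datasets/events"))) {
            System.out.printf("%s\t%d bytes%n", status.getPath(), status.getLen());
        }
        fs.close();
    }
}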

Hadoop does not depend on hardware for reliability. Instead, the software itself detects and handles failures at the node level, which allows it to deliver a highly available service on top of a cluster of computers.
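As an illustration of this software-level reliability, the following sketch writes a file while asking HDFS to keep three replicas of each block via the dfs.replication setting. The output path is hypothetical and the replication factor of 3 is simply the common default; if a node holding a replica fails, the NameNode re-replicates the affected blocks on healthy nodes without any application involvement.

import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteWithReplication {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Keep three copies of every block of this file. Lost replicas are
        // detected and re-created by the NameNode, not by our code.
        conf.set("dfs.replication", "3");

        FileSystem fs = FileSystem.get(conf);
        Path out = new Path("/datasets/events/part-0000"); // hypothetical path
        try (FSDataOutputStream stream = fs.create(out)) {
            stream.write("sample record\n".getBytes(StandardCharsets.UTF_8));
        }
        fs.close();
    }
}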

Hadoop consists of three functional modules:

  • Hadoop Distributed Filesystem: Distributed...