Architecting Data-Intensive Applications

By: Anuj Kumar

Overview of this book

Are you an architect or a developer who looks at your own applications gingerly while browsing through Facebook, silently applauding it for its data-intensive, yet fluent and efficient, behaviour? This book is your gateway to building smart data-intensive systems by incorporating the core data-intensive architectural principles, patterns, and techniques directly into your application architecture.

This book starts by taking you through the primary design challenges involved in architecting data-intensive applications. You will learn how to implement data curation and data dissemination, depending on the volume of your data. You will then implement your application architecture one step at a time. You will get to grips with implementing the correct message delivery protocols and creating a data layer that doesn't fail when running under high traffic. This book will show you how to divide your application into layers, each of which adheres to the single responsibility principle. By the end of this book, you will have learned to streamline your thoughts and make the right choice of technologies and architectural principles based on the problem at hand.

What are Hadoop and HDFS?


Hadoop Distributed File System (HDFS) is a block-based distributed filesystem designed to store huge amounts of data reliably on a cluster of commodity hardware. HDFS works by creating a virtual layer on top of the normal filesystem of each machine, a layer that spans all the computers in the cluster. It stores files by splitting them into coarse-grained blocks (for example, 128 MB). Since Hadoop is meant to handle huge amounts of data, HDFS favours a small number of large files over many small ones, and it performs better when files are large because a client can fetch large, contiguous chunks of data in a single call to the cluster. When a file is stored, HDFS partitions it into blocks and distributes those blocks across the nodes of the cluster. This enables parallel aggregate read operations at very high speed and efficiency. Multiple copies of these blocks are also stored to enable reliability and fault tolerance...
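
To make the block layout concrete, the following is a minimal sketch of how a client can inspect a file's block size and block locations through the Hadoop FileSystem API. The NameNode URI (hdfs://namenode:8020) and the file path (/data/events/large-input.csv) are hypothetical placeholders, and the sketch assumes the Hadoop client libraries are on the classpath.

// Minimal sketch: list the blocks of an HDFS file and the DataNodes holding them.
// The fs.defaultFS value and the file path below are placeholders for illustration.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsBlockInspector {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical NameNode address; replace with your cluster's fs.defaultFS.
        conf.set("fs.defaultFS", "hdfs://namenode:8020");

        try (FileSystem fs = FileSystem.get(conf)) {
            // Hypothetical file used for illustration.
            Path file = new Path("/data/events/large-input.csv");
            FileStatus status = fs.getFileStatus(file);

            // The block size the file was written with (for example, 128 MB by default).
            System.out.printf("Block size: %d bytes%n", status.getBlockSize());

            // Each BlockLocation describes one block of the file and the hosts
            // holding its replicas, which is what enables parallel, data-local reads.
            BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation block : blocks) {
                System.out.printf("offset=%d length=%d hosts=%s%n",
                        block.getOffset(), block.getLength(),
                        String.join(",", block.getHosts()));
            }
        }
    }
}

Because each block location exposes the DataNodes that hold a replica, processing frameworks such as MapReduce can schedule their tasks close to the data, which is what makes the parallel aggregate reads described above efficient in practice.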