
Apache Sqoop


Apache Sqoop is a tool for transferring data between Hadoop and structured data stores such as relational databases. At first glance, Sqoop may not seem like a data-collection tool, but I wanted to cover it anyway because many organizations have use cases where all they want to do is bring (that is, collect) data from their legacy, data-warehouse-based relational tables into a distributed store such as Hadoop. The discussion of Sqoop therefore completes this chapter, in my opinion.
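
As a quick illustration, the following command sketches what a typical Sqoop import looks like; the JDBC URL, credentials, table name, and HDFS target directory are hypothetical placeholders, so adapt them to your own environment:

    sqoop import \
      --connect jdbc:mysql://dw-host:3306/sales \
      --username etl_user -P \
      --table orders \
      --target-dir /data/raw/orders \
      --num-mappers 4

Sqoop turns this invocation into a map-only MapReduce job whose mappers read slices of the source table in parallel and write the rows into the specified HDFS directory.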

Apache Sqoop really shines in use cases where expensive ETL processing must be performed on large volumes of data and the enterprise data warehouse (EDW) cannot handle such a memory- and CPU-intensive task. In such cases, it makes sense to offload the execution to a distributed processing platform such as Hadoop, and Sqoop fits nicely into this use case as the tool that transfers data between the EDW and Hadoop.
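Once Hadoop has done the heavy lifting, the transformed results can be pushed back into the warehouse with a Sqoop export. The command below is again only a sketch, with hypothetical table and directory names:

    sqoop export \
      --connect jdbc:mysql://dw-host:3306/sales \
      --username etl_user -P \
      --table orders_summary \
      --export-dir /data/processed/orders_summary

Note that the target table (orders_summary here) must already exist in the database; Sqoop maps the records found in the export directory onto its columns.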

The primary use cases...