The final piece of functionality required in your data processing pipeline is the ability to inspect and transform events in flight. This is accomplished using interceptors. As we discussed in Chapter 1, Overview and Architecture, an interceptor is inserted after a source creates an event but before that event is written to the channel.
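As a concrete sketch, an interceptor chain is attached to a source in the agent's properties file. The agent and component names below (`a1`, `r1`, `c1`) are placeholders chosen for illustration; the example wires in Flume's built-in timestamp interceptor, which stamps each event's headers as it passes from the source toward the channel:

```properties
# Hypothetical agent a1 with a source r1 feeding channel c1.
a1.sources.r1.channels = c1

# Attach an interceptor chain to the source; each event passes
# through the listed interceptors in order before it is written
# to the channel.
a1.sources.r1.interceptors = i1

# The built-in timestamp interceptor adds a "timestamp" header
# (milliseconds since the epoch) to every event it inspects.
a1.sources.r1.interceptors.i1.type = timestamp
```

Because interceptors sit on this source-to-channel path, they can modify, enrich, or even drop events before anything downstream sees them; chaining several (for example, `interceptors = i1 i2`) applies them in the order listed.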
Apache Flume: Distributed Log Collection for Hadoop Second Edition