Apache Flume: Distributed Log Collection for Hadoop, Second Edition

By : Steven Hoffman

Spooling Directory Source


In an effort to avoid all the assumptions inherent in tailing a file, a new source was devised to keep track of which files have been converted into Flume events and which still need to be processed. The SpoolingDirectorySource is given a directory to watch for new files. It assumes that files copied to this directory are complete; otherwise, the source might try to send a partial file. It also assumes that filenames never change; otherwise, on restarting, the source would forget which files had been sent and which had not. The filename condition can be met in log4j by using DailyRollingFileAppender rather than RollingFileAppender. However, the currently open file would need to be written to one directory and then copied to the spool directory after being closed, and none of the appenders that ship with log4j have this capability.
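As a minimal sketch, a Spooling Directory Source is wired into an agent configuration like the following. The agent, source, and channel names (agent1, src1, ch1) and the directory path are illustrative assumptions; only the `spooldir` type and `spoolDir` property come from Flume itself:

```
# Hypothetical agent using the Spooling Directory Source.
agent1.sources = src1
agent1.channels = ch1

# Watch /var/flume/spool for new, fully written files (path is an assumption)
agent1.sources.src1.type = spooldir
agent1.sources.src1.spoolDir = /var/flume/spool
agent1.sources.src1.channels = ch1

# A simple in-memory channel for the example
agent1.channels.ch1.type = memory
```

Each file dropped into the watched directory is read once, converted into events line by line, and then marked as completed, which is how the source remembers what has already been processed across restarts.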

That said, if you are using the Linux logrotate program in your environment, this might be of interest. You can move completed files to...