Codecs (coders/decoders) are used to compress and decompress data using various compression algorithms. Flume supports gzip, bzip2, lzo, and snappy, although you may have to install lzo yourself, especially if you are using a distribution such as CDH, due to licensing issues.
If you want the HDFS sink to write compressed files, set the hdfs.codeC property. The property's value is also used as the file suffix for the files written to HDFS. For example, if you specify the codec as follows, all files written will have a .gzip extension, so you don't need to specify an hdfs.fileSuffix property in this case:
agent.sinks.k1.hdfs.codeC=gzip
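
To see this property in context, here is a minimal sketch of a full HDFS sink definition with compression enabled. The agent, sink, and channel names (agent, k1, c1) and the HDFS path are illustrative assumptions, not values from this chapter. Note that, per the Flume HDFS sink documentation, compressed output also requires setting hdfs.fileType to CompressedStream:

# Minimal sketch: an HDFS sink writing gzip-compressed files.
# The names (agent, k1, c1) and the path are assumptions for illustration.
agent.sinks.k1.type=hdfs
agent.sinks.k1.channel=c1
agent.sinks.k1.hdfs.path=/flume/events/%Y/%m/%d
agent.sinks.k1.hdfs.fileType=CompressedStream
agent.sinks.k1.hdfs.codeC=gzip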
Which codec you choose will require some research on your part. There are arguments for using gzip or bzip2 for their higher compression ratios at the cost of longer compression times, especially if your data is written once but will be read hundreds or thousands of times. On the other hand, snappy and lzo compress and decompress faster but achieve lower compression ratios, which may be the better trade-off for data that is read only rarely.