Programming MapReduce with Scalding
To achieve modularity and fulfill the single responsibility principle, we can structure our data processing job in an organized way. An object, a trait, and a job can each take a share of the responsibilities, as follows:
In a package object, we can store information about the schema of the data
In a trait, we can store all the external operations (a minimal sketch of such a trait follows this list)
In a Scalding job, we can manage arguments, define taps, and use the external operations to construct data processing pipelines
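As a minimal sketch of the second point, an external-operations trait could look like the following. The trait name LogsOperations, the countPerUser operation, and the 'user and 'count fields are illustrative assumptions, not taken from the book:

import com.twitter.scalding.Dsl._
import cascading.pipe.Pipe

// Hypothetical trait holding reusable operations, kept outside the Job class
// so they can be shared across jobs and unit tested in isolation
trait LogsOperations {
  // Illustrative operation: count the number of log events per user
  def countPerUser(logs: Pipe): Pipe =
    logs.groupBy('user) { _.size('count) }
}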
A particular dataset will usually be processed by multiple jobs, each extracting different value from the data. Thus, we can create an object called LogsSchemas to store the input and output schemas and to document the locations in HDFS where the data resides. This object acts as a registry of all the variants of our datasets, and we can reuse it in any of our Scalding jobs, as shown in the following code:
package object LogsSchemas {
  // that is, hdfs:///logs/raw/YYYY/MM/DD/
  val LOG_SCHEMA = List...
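To illustrate the third responsibility, a Scalding job could tie the pieces together as sketched below. The job name LogsAnalysisJob is hypothetical, the fields of LOG_SCHEMA are elided above, and countPerUser comes from the illustrative trait sketched earlier (it assumes the schema contains a 'user field):

import com.twitter.scalding._

// Hypothetical job: parses arguments, defines taps from the schemas and
// locations registered in LogsSchemas, and delegates the transformation
// to an externally defined operation
class LogsAnalysisJob(args: Args) extends Job(args) with LogsOperations {

  // Input and output locations are supplied as command-line arguments
  val inputPath = args("input")
  val outputPath = args("output")

  // Tap the raw logs using the schema registered in LogsSchemas
  val logs = Tsv(inputPath, LogsSchemas.LOG_SCHEMA).read

  // Apply the external operation and persist the result
  countPerUser(logs)
    .write(Tsv(outputPath))
}

A job structured like this is typically launched through com.twitter.scalding.Tool, with --input and --output passed on the command line to populate the Args object.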