Processing stored data on AWS
There are several AWS services for processing data stored in AWS; this section covers AWS Batch and Amazon EMR (Elastic MapReduce). EMR is a managed service that primarily runs MapReduce jobs and Spark applications. AWS Batch, by contrast, schedules and runs long-running, compute-heavy batch workloads on managed compute environments.
EMR is a managed implementation of Apache Hadoop provided as a service by AWS. It also includes other components of the Hadoop ecosystem, such as Spark, HBase, Flink, Presto, Hive, and Pig. We will not cover these components in detail for the certification exam:
- EMR clusters can be launched from the AWS console or via the AWS CLI with a specified number of nodes. A cluster can be long-running or ad hoc. With a traditional self-managed Hadoop cluster, you have to configure and manage the machines yourself, and if jobs need to complete faster, you have to add nodes manually. With EMR, these...
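As a concrete illustration of launching an ad hoc cluster from the AWS CLI, the sketch below creates a transient Spark cluster that terminates itself once its steps finish. The cluster name, release label, instance type, and node count are illustrative values, not recommendations; running this requires an AWS account with EMR default roles already created.

```
# Launch a transient EMR cluster with Spark installed.
# --auto-terminate shuts the cluster down after all steps complete,
# which suits ad hoc jobs; omit it for a long-running cluster.
aws emr create-cluster \
    --name "adhoc-spark-cluster" \
    --release-label emr-6.10.0 \
    --applications Name=Spark \
    --instance-type m5.xlarge \
    --instance-count 3 \
    --use-default-roles \
    --auto-terminate
```

The command returns the new cluster's ID, which you can pass to `aws emr describe-cluster --cluster-id <id>` to monitor provisioning and termination.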