Spark runs under three different cluster managers:
- Standalone
- YARN
- Mesos
For initial deployments (and for learning), it is best to start with standalone mode, which uses Spark's own built-in cluster manager and allocates a cluster for Spark's exclusive use. A standalone deployment can run locally (entirely on your own computer) or in the cloud (for example, on Amazon AWS).
Cluster computing allows Spark to distribute data across many computers and process it in parallel. A cluster manager allocates resources to the cluster based on user requests. An important aspect of Spark is that it keeps as much data in memory as possible, so the data is immediately available to the various analyses rather than having to be retrieved from storage every time a query or model is run.
Spark data is stored as RDDs (Resilient Distributed Datasets), which allow collections of different kinds of objects to be partitioned across the cluster and operated on in parallel.