Understanding the basics of MapReduce can be a worthwhile long-term investment even if one doesn't have a cluster, or uses the Message Passing Interface (MPI) instead. However, a more realistic use case is when the data doesn't fit on a single disk but does fit on a Distributed File System (DFS), or already lives in Hadoop-related software.
Moreover, MapReduce is a programming model that works in a distributed fashion, but it is not the only one that does; it can be illuminating to compare it with other distributed programming models, such as MPI and Bulk Synchronous Parallel (BSP). Processing Big Data with tools such as R and several machine learning techniques requires a high-configuration machine, but that is not a permanent solution. Distributed processing is the key to handling this data, and that distributed computation can be implemented with the MapReduce programming model.
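To make the programming model concrete, here is a minimal, single-machine sketch of the MapReduce idea applied to word counting. It is not Hadoop code; the function names (`map_phase`, `shuffle`, `reduce_phase`) are illustrative, and the shuffle step stands in for the grouping that a real framework performs across the cluster:

```python
from collections import defaultdict

def map_phase(documents):
    # Map step: emit (word, 1) pairs for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    # Shuffle step: group intermediate values by key,
    # as the framework would do across nodes.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce step: sum the counts collected for each word.
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["big data big compute", "data processing"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'big': 2, 'data': 2, 'compute': 1, 'processing': 1}
```

In a real cluster, the map and reduce functions run in parallel on many machines, and the framework handles the shuffle, fault tolerance, and data locality; the programmer supplies only the two functions.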
MapReduce is the programming model that answers the Big Data question. Logically, to process data we need parallel processing, which...