In the previous recipes, we saw various steps of performing data analysis. In this recipe, let's download the Uber dataset and try to solve some of the analytical questions that arise on such data.
To step through this recipe, you will need a running Spark cluster in any one of the modes, that is, local, standalone, YARN, or Mesos. For installing Spark on a standalone cluster, please refer to http://spark.apache.org/docs/latest/spark-standalone.html. Also, include the Spark MLlib package in the build.sbt file so that the related libraries are downloaded and the API can be used. Install Hadoop (optional), Scala, and Java.
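The MLlib dependency mentioned above can be declared in build.sbt along the following lines; the version shown here is an assumption and should be kept in sync with the Spark version installed on your cluster:

```scala
// build.sbt fragment: pull in Spark core, SQL, and MLlib.
// Adjust the Scala and Spark versions to match your environment.
scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % "2.2.0",
  "org.apache.spark" %% "spark-sql"   % "2.2.0",
  "org.apache.spark" %% "spark-mllib" % "2.2.0"
)
```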
In this section, let's see how to analyse the Uber dataset.
Let's download the Uber dataset from the following location: https://github.com/ChitturiPadma/datasets/blob/master/uber.csv. The dataset contains four columns: dispatching_base_number, date, active_vehicles, and trips. Let's load the data and see what the...
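Before loading the file into Spark, the shape of a row can be sketched in plain Scala. The sample rows below are made up for illustration (they only mirror the four-column layout of uber.csv), and the case class and helper names are hypothetical, not part of the dataset:

```scala
// A typed view of one row of uber.csv:
// dispatching_base_number, date, active_vehicles, trips
case class UberRecord(base: String, date: String, activeVehicles: Int, trips: Int)

// Hypothetical sample lines in the same comma-separated layout as the file.
val sampleCsv = Seq(
  "B02512,1/1/2015,190,1132",
  "B02765,1/1/2015,225,1765",
  "B02512,1/2/2015,242,1412"
)

// Split each line on commas and convert the numeric fields.
val records = sampleCsv.map { line =>
  val Array(base, date, vehicles, trips) = line.split(",")
  UberRecord(base, date, vehicles.toInt, trips.toInt)
}

// One analytical question this data supports: total trips per dispatching base.
val tripsPerBase: Map[String, Int] =
  records.groupBy(_.base).map { case (b, rs) => b -> rs.map(_.trips).sum }

println(tripsPerBase)
```

In the recipe itself, the same loading step would typically be done with Spark (for example, reading the CSV into a DataFrame with a header row), after which aggregations like trips per base can be expressed over the full dataset rather than an in-memory sample.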