In this recipe, we'll see how to apply feature engineering to the explored data.
To step through this recipe, you will need a running Spark cluster in any one of the modes, that is, local, standalone, YARN, or Mesos. For installing Spark on a standalone cluster, please refer to http://spark.apache.org/docs/latest/spark-standalone.html. Also, include the Spark MLlib package in the build.sbt file so that the related libraries are downloaded and the API can be used. Install Scala, Java, and, optionally, Hadoop.
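As a reference, a minimal sketch of the relevant build.sbt entries is shown below; the Scala and Spark version numbers are assumptions and should be aligned with your cluster.

// build.sbt (sketch) -- version numbers are assumptions; match them to your cluster
scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % "2.2.0",
  "org.apache.spark" %% "spark-sql"   % "2.2.0",
  "org.apache.spark" %% "spark-mllib" % "2.2.0"
)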
After data exploration, the next step is feature engineering. Let's apply it and make the data ready for analysis.
From the available attributes, we can see that the minimum value of Item_Visibility is 0. Since an item that is on sale cannot have zero visibility, we can treat these zero values as missing information and replace them with the mean value, as follows:

// Replace missing values for Item_Visibility
val df_Item_VisibilityNull = replaced_MissingValues_ForOutletSize...
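The snippet above is truncated in the source. A minimal sketch of one way to perform this replacement is shown below; it assumes that replaced_MissingValues_ForOutletSize is the DataFrame produced by the earlier missing-value handling steps, and that the mean is computed over the non-zero rows only (both are assumptions).

import org.apache.spark.sql.functions.{avg, col, when}

// Assumption: compute the mean of Item_Visibility over non-zero rows only
val meanVisibility = replaced_MissingValues_ForOutletSize
  .filter(col("Item_Visibility") =!= 0)
  .agg(avg("Item_Visibility"))
  .first()
  .getDouble(0)

// Treat zero visibility as missing and overwrite it with the mean
val df_Item_VisibilityNull = replaced_MissingValues_ForOutletSize
  .withColumn("Item_Visibility",
    when(col("Item_Visibility") === 0, meanVisibility)
      .otherwise(col("Item_Visibility")))

After this step, df_Item_VisibilityNull no longer contains zero values in Item_Visibility, and the column can be used directly in the subsequent modeling steps.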