Apache Spark for Data Science Cookbook
In this recipe, we'll see how to apply feature engineering to the explored data.
To step through this recipe, you will need a running Spark cluster in any of the supported modes: local, standalone, YARN, or Mesos. For installing Spark on a standalone cluster, please refer to http://spark.apache.org/docs/latest/spark-standalone.html. Also, include the Spark MLlib package in the build.sbt file so that the related libraries are downloaded and the API can be used. Install Scala and Java, and optionally Hadoop.
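The MLlib dependency mentioned above can be declared in build.sbt along these lines (the version number is illustrative; match it to the Spark version running on your cluster):

```scala
// build.sbt -- the version shown is illustrative; use the one matching your cluster
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % "2.0.0",
  "org.apache.spark" %% "spark-mllib" % "2.0.0"
)
```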
After data exploration, the next step is to perform feature engineering. Let's convert the nominal variables into numeric types. Here is the code that encodes the nominal variables:
/* Applying one-hot encoding to categorical variables */
// One-hot encoding for job
val sqlFunc = udf(coder)
val new_Df_WithDummyJob =
  create_DummyVariables(selected_Data, sqlFunc, "job...
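The snippet above is truncated, and `coder` and `create_DummyVariables` are helpers defined elsewhere in the recipe. To make the idea concrete, here is a minimal, Spark-independent sketch of what such a coder does: it maps each nominal value to a 0/1 dummy vector with one slot per category. The object name and the category list are hypothetical; the real levels would come from the data's "job" column.

```scala
// Hypothetical sketch of one-hot encoding a single nominal value.
object OneHotSketch {
  // Hypothetical category list; real levels come from the explored data.
  val jobCategories: Seq[String] = Seq("admin", "technician", "services", "management")

  // Map one nominal value to a dummy vector: 1.0 in its category's slot, 0.0 elsewhere.
  def oneHot(value: String): Seq[Double] =
    jobCategories.map(c => if (c == value) 1.0 else 0.0)
}
```

In a Spark job, a function like this would be wrapped with `udf(...)` (as `sqlFunc` is above) and applied per row; note that recent Spark versions also provide `StringIndexer` and `OneHotEncoder` in `org.apache.spark.ml.feature`, which achieve the same result without a hand-rolled UDF.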