This recipe shows how to filter, slice, sort, index, and group Pandas DataFrames as well as Spark DataFrames.
To step through this recipe, you will need a running Spark cluster either in pseudo-distributed mode or in one of the distributed modes, that is, standalone, YARN, or Mesos. Python and IPython should also be installed on a Linux machine, such as Ubuntu 14.04.
Invoke ipython console --profile=pyspark (the pyspark profile starts IPython with a SparkContext already available as sc) and import the required modules as follows:
In [4]: from pyspark import SparkConf, SparkContext, SQLContext
In [5]: import pandas as pd
Create a Pandas DataFrame as follows:
In [6]: pdf = pd.DataFrame({'Name':['Padma','Major','Priya'], 'Age': [23,45,30]})
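The operations named in the introduction can be sketched on this DataFrame as follows; this is an illustrative aside rather than part of the original steps, and it assumes a pandas release that provides sort_values and set_index:
pdf[pdf['Age'] > 25]               # filter: keep rows where Age exceeds 25
pdf.iloc[0:2]                      # slice: first two rows by position
pdf.sort_values(by='Age')          # sort: order rows by the Age column
pdf.set_index('Name')              # index: use the Name column as the row index
pdf.groupby('Name')['Age'].mean()  # group: mean Age per Name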
Create a Spark DataFrame from the Pandas DataFrame as follows:
In [7]: sqlc = SQLContext(sc)
In [8]: spark_df = sqlc.createDataFrame(pdf)
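For comparison, here is a rough sketch (not part of the original steps) of the equivalent operations on the Spark DataFrame, using the DataFrame API available since Spark 1.3:
spark_df.filter(spark_df['Age'] > 25).show()  # filter rows by a condition
spark_df.select('Name', 'Age').show()         # select (project) columns
spark_df.sort(spark_df['Age'].desc()).show()  # sort by Age in descending order
spark_df.groupBy('Name').count().show()       # group by Name and count rows per group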
Split a Pandas DataFrame into...