This recipe shows how to compute correlation and covariance using pandas over Spark.
To step through this recipe, you will need a running Spark cluster either in pseudo-distributed mode or in one of the distributed modes, that is, standalone, YARN, or Mesos. Also, install Python, IPython, and the pandas library on a Linux machine, for example, Ubuntu 14.04.
Invoke the IPython console with the pyspark profile:
ipython console --profile=pyspark
Computing correlation and covariance using pandas in PySpark:
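If you have not created a dedicated pyspark IPython profile, an alternative (assuming SPARK_HOME points at your Spark installation) is to let the pyspark launcher start IPython as the driver shell:
PYSPARK_DRIVER_PYTHON=ipython $SPARK_HOME/bin/pyspark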
In [1]: from pyspark import SparkConf, SparkContext, SQLContext
In [2]: import pandas as pd
In [3]: seq = pd.Series([1,2,3,4,4,3,2,1], ['2006','2007','2008','2009','2010','2011','2012','2013'])
In [4]: seq2 = pd.Series([3,4,3,4,5,4,3,2], ['2006','2007','2008','2009','2010','2011','2012','2013'])
In [5]: seq.corr(seq2)
Out[5]: 0.77459666924148329
In [6]: seq.cov(seq2)
...
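The Series above are created directly in the driver. If the data already lives in a Spark DataFrame, a minimal sketch of the equivalent computation follows; it assumes sc is available from the pyspark profile, and the column names value1 and value2 are purely illustrative. Spark's DataFrame.stat API provides corr (Pearson) and cov, or you can collect a small DataFrame to the driver with toPandas() and reuse the pandas methods shown earlier.
In [7]: sqlContext = SQLContext(sc)
In [8]: df = sqlContext.createDataFrame([(1,3),(2,4),(3,3),(4,4),(4,5),(3,4),(2,3),(1,2)], ['value1','value2'])
In [9]: df.stat.corr('value1', 'value2')     # Pearson correlation of the two columns
In [10]: df.stat.cov('value1', 'value2')     # sample covariance of the two columns
In [11]: pdf = df.toPandas()                 # collect to the driver as a pandas DataFrame
In [12]: pdf['value1'].corr(pdf['value2'])   # same computation via pandas
Collecting with toPandas() is only appropriate for datasets small enough to fit in the driver's memory; for large data, prefer the DataFrame.stat methods, which run on the cluster.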