Cohen's kappa measures the agreement between the target and the predicted classes, similar to accuracy, but it also takes into account the agreement that could occur by random chance. Cohen's kappa is given by the following equation:

kappa = (p0 - pe) / (1 - pe)
In this equation, p0 is the relative observed agreement and pe is the probability of agreement expected by chance, estimated from the data. Kappa ranges from negative values up to one, with the following rough categorization from Landis and Koch (a short worked sketch follows the list):
Poor agreement: kappa < 0
Slight agreement: kappa = 0 to 0.2
Fair agreement: kappa = 0.21 to 0.4
Moderate agreement: kappa = 0.41 to 0.6
Good agreement: kappa = 0.61 to 0.8
Very good agreement: kappa = 0.81 to 1.0
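To make the formula concrete, here is a minimal sketch, with two made-up label vectors, that computes p0 as plain accuracy and pe from the marginal class frequencies of the true and predicted labels:

    import numpy as np

    # Illustrative labels only; any pair of equal-length label vectors works
    y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
    y_pred = np.array([0, 1, 0, 0, 1, 1, 1, 1])

    # p0: relative observed agreement (plain accuracy)
    p0 = np.mean(y_true == y_pred)

    # pe: chance agreement, from the marginal class frequencies of both label sets
    classes = np.unique(np.concatenate([y_true, y_pred]))
    pe = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in classes)

    kappa = (p0 - pe) / (1 - pe)
    print(kappa)  # about 0.47 for this toy example: moderate agreement

Even though the raw accuracy here is 0.75, kappa drops to roughly 0.47 once chance agreement (about 0.53 for these class frequencies) is discounted.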
I know of at least two other schemes for grading kappa, so these thresholds are not set in stone; I think we can agree, however, that a kappa below 0.2 should not be accepted. The most appropriate use case is, of course, ranking models. There are other variations of Cohen's kappa, but as of November 2015, they were not implemented in scikit-learn.
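Current releases of scikit-learn do expose the plain (unweighted) statistic as cohen_kappa_score in sklearn.metrics. A quick sketch of using it to rank two models might look like the following; the dataset and the two classifiers are placeholders chosen only for illustration:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import cohen_kappa_score

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Fit each candidate model and compare them by kappa on the held-out set
    for model in (LogisticRegression(max_iter=1000),
                  DecisionTreeClassifier(random_state=0)):
        y_pred = model.fit(X_train, y_train).predict(X_test)
        print(model.__class__.__name__, cohen_kappa_score(y_test, y_pred))

The model with the higher kappa agrees with the test labels more than chance alone would explain, which is exactly the ranking criterion described above.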