Learning SciPy for Numerical and Scientific Computing Second Edition

Distances


In the field of data mining, it is often necessary to determine which members of a training set are closest to an unknown test instance. A good set of distance functions is essential for any algorithm that performs this search, and SciPy provides, for this purpose, a large collection of optimally coded functions in the distance submodule of the scipy.spatial module. The list is long: besides the Euclidean, squared Euclidean, and standardized Euclidean distances, there are many more, such as Bray-Curtis, Canberra, Chebyshev, Manhattan, correlation distance, cosine distance, Dice dissimilarity, Hamming, Jaccard-Needham, Kulsinski, and Mahalanobis. In most cases, the syntax is simple:

distance_function(first_vector, second_vector)
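As a quick illustration of this two-argument syntax, the following sketch computes a few of the distances listed above between a pair of small vectors (the vectors themselves are arbitrary examples, not from the text):

```python
import numpy as np
from scipy.spatial import distance

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])

d_euc = distance.euclidean(u, v)    # straight-line distance, sqrt(2)
d_city = distance.cityblock(u, v)   # Manhattan distance, |1| + |-1| = 2
d_cheb = distance.chebyshev(u, v)   # largest coordinate difference, 1
d_cos = distance.cosine(u, v)       # 1 - cos(angle); orthogonal vectors give 1
```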

The only three cases in which the syntax differs are the Minkowski, Mahalanobis, and standardized Euclidean distances, in which the distance function requires an extra argument: an integer number (for the order of the norm in the definition of the Minkowski distance), the inverse of the covariance matrix (for the Mahalanobis distance), or a vector of component variances (for the standardized Euclidean distance).
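The three exceptional cases can be sketched as follows; the vectors and the diagonal covariance matrix below are illustrative values chosen so that the standardized Euclidean and Mahalanobis results coincide, not data from the text:

```python
import numpy as np
from scipy.spatial import distance

u = np.array([1.0, 2.0])
v = np.array([3.0, 5.0])

# Minkowski needs the order p of the norm as an extra argument
d_mink = distance.minkowski(u, v, p=3)  # (|2|**3 + |3|**3) ** (1/3)

# Standardized Euclidean needs a vector of variances, one per component
V = np.array([4.0, 9.0])
d_seuc = distance.seuclidean(u, v, V)  # sqrt(4/4 + 9/9) = sqrt(2)

# Mahalanobis needs the *inverse* of the covariance matrix
VI = np.linalg.inv(np.diag(V))
d_mah = distance.mahalanobis(u, v, VI)
```

For a diagonal covariance matrix, as here, the Mahalanobis distance reduces to the standardized Euclidean distance, which is a handy sanity check when experimenting with these functions.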