In data science, examples are at the core of learning from data. If unusual, inconsistent, or erroneous data is fed into the learning process, the resulting model may be unable to generalize correctly to new data. An unusually high value in a variable not only skews descriptive measures such as the mean and variance; it can also distort how many algorithms learn from data, since exposing them to unusual values elicits unusual responses from them.
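To see how strongly a single unusual value can skew the mean and variance, consider this small sketch (the sample values are hypothetical):

```python
import numpy as np

values = np.array([10.0, 11.0, 9.5, 10.5, 10.0])

# Append one unusually high value and compare the descriptive measures.
with_outlier = np.append(values, 100.0)

print(values.mean(), values.var())            # mean ~10.2, variance ~0.26
print(with_outlier.mean(), with_outlier.var())  # mean ~25.2, variance ~1120
```

One extreme point shifts the mean by more than a factor of two and inflates the variance by several orders of magnitude, which is why robust statistics (such as the median) are often preferred in the presence of outliers.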

When a data point deviates markedly from the others in a sample, it is called an *outlier*. Any other expected observation is labeled as an *inlier*.
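A common way to operationalize this distinction is z-score thresholding: a point whose distance from the mean exceeds some multiple of the standard deviation is flagged as an outlier, everything else as an inlier. A minimal sketch (the data and the cutoff of 2 standard deviations are assumptions; the cutoff must be tuned per application):

```python
import numpy as np

def label_points(x, threshold=2.0):
    # Absolute z-score of each point relative to the sample mean and std.
    z = np.abs((x - x.mean()) / x.std())
    # Points beyond the threshold are outliers; the rest are inliers.
    return np.where(z > threshold, "outlier", "inlier")

sample = np.array([10.0, 11.0, 9.5, 10.5, 10.0, 100.0])
print(label_points(sample))
# → ['inlier' 'inlier' 'inlier' 'inlier' 'inlier' 'outlier']
```

Note that the z-score rule is itself distorted by the outliers it tries to find, since the mean and standard deviation are computed from the contaminated sample; more robust rules replace them with the median and the median absolute deviation.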

A point may be an outlier for three general reasons, each of which implies a different remedy:

The point represents a rare occurrence but is nonetheless a possible value, given that the available data is only a sample of the underlying distribution. In this case, the underlying generative process is the same...