In some situations, there is so much numeric data that it cannot all be processed in feasible time. In such cases, the data can be sampled in order to reduce the total amount passed on for further processing, which speeds up the whole program. In other cases, sampling is done not to reduce the processing workload, but to save or transfer less data.
A naive approach to sampling is to pick only every Nth data point. This works fine in many cases, but in signal processing, for example, it can lead to a mathematical phenomenon called aliasing. Aliasing can be reduced if the distance between samples is varied by a small random offset. Have a look at the following diagram, which shows an extreme case just to illustrate the point: while the original signal consists of a sine wave, the triangle points on the graph are sampling points taken at exactly every 100th data point.
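The effect described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the recipe itself: the function names `sample_fixed` and `sample_jittered`, the stride of 100, and the jitter range are all assumptions chosen to mirror the diagram. When the stride exactly matches the sine wave's period, fixed sampling hits the same phase every time and the signal appears flat; a small random offset per sample breaks that coincidence.

```python
import math
import random

def sample_fixed(data, stride):
    # Naive sampling: pick exactly every stride-th data point.
    return [data[i] for i in range(0, len(data), stride)]

def sample_jittered(data, stride, max_offset, seed=0):
    # Jittered sampling: shift each sampling position by a small
    # random offset, clamped to the valid index range.
    rng = random.Random(seed)
    samples = []
    for i in range(0, len(data), stride):
        j = min(len(data) - 1, max(0, i + rng.randint(-max_offset, max_offset)))
        samples.append(data[j])
    return samples

# A sine wave whose period (100 points) coincides with the stride:
signal = [math.sin(2 * math.pi * i / 100) for i in range(1000)]

# Fixed sampling always lands on the same phase, so every sample is
# (numerically) zero -- the sine wave aliases down to a flat line.
fixed = sample_fixed(signal, 100)

# Jittered sampling lands on varying phases and so still reflects
# that the underlying signal oscillates.
jittered = sample_jittered(signal, 100, max_offset=20, seed=42)
```

The choice of stride and period is deliberately extreme, matching the diagram's point: real data rarely aliases this perfectly, but whenever a periodic component of the signal is close to a multiple of the sampling stride, the same distortion appears in weaker form.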