We have learnt in great detail about image filtering operations (box and Gaussian filtering). We have also seen that, in general, image averaging tends to blur the input image. Let us now stop and ponder why (and in which scenarios) we would need to perform such averaging operations. What prompted the need to replace each pixel with an average (or a weighted average) of its neighbors? The answer to these questions lies in the concept of image noise.
Images are nothing but two-dimensional signals (mapping a pair of x and y coordinates to corresponding pixel intensities), and just like any other signal, they are susceptible to noise. When we say that an image is noisy, we mean that the intensity values of its pixels deviate, slightly or significantly, from the ideal values we would expect. Noise creeps into an image due to imperfections in digital camera sensors or photographic film.
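To make this concrete, the following is a minimal sketch (in NumPy, not from the original text) that simulates sensor noise as zero-mean Gaussian perturbations on an idealized flat-gray image, and then applies a simple 3x3 box filter. The image values, the noise level, and the `box_filter` helper are all assumptions chosen for illustration; they show why replacing each pixel with the average of its neighbors pulls noisy intensities back toward the ideal value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "clean" image: a flat gray patch with ideal intensity 100.
clean = np.full((64, 64), 100.0)

# Simulate noise: each pixel deviates from the ideal value by a
# zero-mean Gaussian perturbation (sigma = 20, chosen for illustration).
noisy = clean + rng.normal(0.0, 20.0, size=clean.shape)

def box_filter(img, k=3):
    """Replace each pixel with the mean of its k x k neighborhood."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")  # replicate border pixels
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

smoothed = box_filter(noisy)

# Averaging shrinks the deviation from the ideal image: the mean
# absolute error of the smoothed image is well below that of the noisy one.
print(np.abs(noisy - clean).mean(), np.abs(smoothed - clean).mean())
```

Averaging independent zero-mean perturbations cancels them out (a 3x3 average reduces the noise variance by roughly a factor of nine in the interior), which is exactly why box and Gaussian filtering act as denoisers, at the cost of also blurring genuine edges.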
The following image demonstrates some examples of noisy photographs:
There are two different types of noise...