In all of the thresholding operations we have seen so far, the threshold value remained the same for every pixel in the image. However, most images you will come across follow the principle of spatial locality: the intensity of a pixel is influenced by a small spatial neighborhood around that pixel's location and is relatively independent of pixels outside its immediate vicinity. When you think about it, this makes intuitive sense. Pixels make up objects in images, and these objects are well separated in the spatial coordinate frame of the image. In other words, pixels that constitute the same object (or, more generally, the same region of an image) will show a greater degree of similarity in their intensity values than pixels that belong to entirely different objects (or regions).
How does this concept of spatial locality fit into our discourse on adaptive thresholding...
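To make the idea concrete, here is a minimal NumPy sketch of mean-based adaptive thresholding: each pixel is compared not against one global value but against the mean of its own small neighborhood, minus a constant. This is analogous in spirit to OpenCV's `cv2.adaptiveThreshold` with `ADAPTIVE_THRESH_MEAN_C`; the function name, parameters, and test values below are illustrative choices, not part of the original text.

```python
import numpy as np

def adaptive_threshold_mean(img, block=3, c=2, max_value=255):
    """Binarize img by comparing each pixel to the mean of its
    block x block neighborhood, minus a small constant c."""
    pad = block // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    # Integral image (summed-area table) with a zero border,
    # so any window sum takes just four lookups.
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))
    h, w = img.shape
    window_sum = (ii[block:block + h, block:block + w]
                  - ii[:h, block:block + w]
                  - ii[block:block + h, :w]
                  + ii[:h, :w])
    local_mean = window_sum / (block * block)
    # Each pixel gets its own threshold: its local mean minus c.
    return np.where(img > local_mean - c, max_value, 0).astype(np.uint8)

# A flat image clears its own local mean everywhere...
flat = np.full((4, 4), 100, dtype=np.uint8)
print(adaptive_threshold_mean(flat))  # all 255

# ...while an outlier pixel falls below its neighborhood mean.
img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 0
out = adaptive_threshold_mean(img)
print(out[2, 2], out[0, 0])  # 0 255
```

Note how the dark pixel is separated from its bright surroundings even though no single global threshold is involved: the decision at each location depends only on the pixel's immediate vicinity, which is exactly the spatial-locality assumption at work.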