When negative numbers are needed, that is, whole numbers smaller than 0, we specify them with the signed keyword. So, a signed integer would be specified as signed int. The natural use for signed integers is when we want to express a direction relative to zero, either larger or smaller. By default, and without any extra specifiers, integers are signed.
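As a minimal sketch (the variable names here are only illustrative), the signed specifier can be written out explicitly or left implicit, since int is signed either way:

    #include <stdio.h>

    int main(void)
    {
        signed int depth = -42;   /* explicitly signed */
        int offset = -7;          /* plain int is signed by default */

        printf("%d %d\n", depth, offset);
        return 0;
    }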
A signed integer uses one of its bits to indicate whether the remaining bits represent a positive or negative number. Typically this is the most significant bit; the least significant bit is the one that represents the value 1. A signed integer has the same number of possible values as its unsigned counterpart, but the range is shifted so that half of the values lie below 0, or, algebraically speaking, to the left of 0. For instance, a single signed byte still has 256 possible values, but they range from -128 to 127. Remember to count 0 as one of the possible values. Hence the apparent asymmetry: there are 128 negative values but only 127 positive ones, because 0 occupies one of the non-negative slots.
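As a small sketch (not part of the original text), the limits of the signed types can be printed using the constants from <limits.h>; on most platforms a signed char is one byte and shows exactly the -128 to 127 range described above:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* One-byte signed type: 256 values, typically -128 to 127. */
        printf("signed char: %d to %d\n", SCHAR_MIN, SCHAR_MAX);

        /* The wider signed int follows the same half-below-zero split. */
        printf("signed int:  %d to %d\n", INT_MIN, INT_MAX);
        return 0;
    }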