Tests can fool us in two ways: by failing when the code actually works (false negatives) and by passing when the code is broken (false positives). There is no way to eliminate false results entirely, but we can reduce their likelihood by keeping tests simple and performing sanity checks. Sometimes our tests may seem like they are not doing enough, or we may seem to be checking redundant things that we expect never to break. That is simply the price we pay to minimize the chance that our tests are fooling us.
Let's cover three useful tactics to reduce false results:
- Sanity checks
- Tests for the opposite case
- Increased specificity of assertions
Detecting false results is a fundamental concern of testing, so the tests throughout this chapter (and the rest of the book) include measures to reduce them. For example, let's go back to the three before hooks in three different tests that we defined in Chapter 3, Taking Control of State with Doubles...