A question many project teams I have been part of couldn't answer: how much testing should we do? Is it enough if our tests cover 80% of our lines of code? Should the target be higher?
Line coverage is a poor metric for test success. Any target below 100% is meaningless, because the uncovered lines might be exactly the most important parts of the codebase. And even at 100%, we still can't be sure that every bug has been squashed, because coverage only tells us a line was executed, not that its behavior was checked for every relevant input.
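To see why even 100% line coverage proves little, consider this minimal Python sketch. The `apply_discount` function and its test are hypothetical examples, not from any real codebase: the single test executes every line of the function, yet an obvious bug survives.

```python
def apply_discount(price, percent):
    """Return price reduced by percent (e.g. 10 for a 10% discount)."""
    return price - price * percent / 100

def test_apply_discount():
    # This one assertion executes 100% of the lines in apply_discount...
    assert apply_discount(100, 10) == 90

test_apply_discount()

# ...yet the function happily accepts a discount over 100% and
# returns a negative price -- a bug no line-coverage report flags.
print(apply_discount(100, 150))  # -50.0
```

A coverage tool would report this module as fully covered, even though the function was never exercised with the inputs that matter.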
I suggest measuring test success by how comfortable we feel shipping the software. If we trust the tests enough to ship after running them, we are good. The more often we ship, the more trust we can build in our tests. If we only ship twice a year, no one will trust the tests, because they only get to prove themselves twice a year.
This requires a leap of faith the first couple of times we ship, but if we make it a priority to fix and learn from bugs in production, we are on the right track.
For each production bug, we should...