Teams tend to have their own coding habits. If a project fails because of the quality of the code, try to work out which code metrics would have stopped the code from reaching production, or which mistakes recur; a few examples include the following:
Friday afternoon code failure: We are all human and have secondary agendas. By the end of the week, a programmer may have their mind focused elsewhere than the code. A small subset of programmers consistently inject more defects towards the tail end of their roster. Consider scheduling a weekly Jenkins Job with harsher thresholds for quality metrics, timed to push back near the period of least attention.
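One lightweight way to implement this idea is to compute the quality gate from the build time, so the same job tightens its own threshold late in the week. The following is a minimal sketch, not an existing Jenkins API: the function name, the 13:00 cut-off, and the coverage percentages are all hypothetical values you would tune for your team.

```python
from datetime import datetime


def coverage_threshold(now=None):
    """Return the minimum acceptable line coverage (%).

    Hypothetical policy: tighten the gate on Friday afternoons,
    when defect injection tends to spike for some teams.
    """
    now = now or datetime.now()
    if now.weekday() == 4 and now.hour >= 13:  # Friday, after 13:00
        return 85  # harsher Friday-afternoon threshold
    return 75      # normal weekday threshold
```

A build step could call this and fail the build when the measured coverage falls below the returned value.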
Code churn: A warning sign for experienced quality assurers is a sudden surge in code commits just before a product is moved from an acceptance environment to production. This indicates a last-minute rush. For some teams with a strong sense of code quality, it is simply a sign of extra vigilance. For other, less-disciplined teams, it could be a naive push towards destruction. If a project fails and QA is overwhelmed by a surge of code changes, then look at setting up a warning Jenkins Job based on the commit velocity. If necessary, you can display your own custom metrics.
See the Plotting alternative code metrics in Jenkins recipe in Chapter 3, Building Software.
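A commit-velocity warning of this kind can be sketched in a few lines. The function below is a hypothetical helper, not part of Jenkins or Git: it takes a list of commit dates (which you might obtain from `git log --pretty=format:%ad --date=short`) and flags any day whose commit count exceeds a threshold your team has agreed on.

```python
from collections import Counter


def commit_velocity_alert(commit_dates, threshold):
    """Return the days whose commit count exceeds the threshold.

    commit_dates: an iterable of dates, one entry per commit.
    threshold: the agreed maximum number of commits per day
               (a hypothetical team-specific value).
    """
    per_day = Counter(commit_dates)
    return sorted(day for day, count in per_day.items() if count > threshold)
```

A Jenkins Job could run this daily and mark the build unstable whenever the returned list is non-empty, giving QA early notice of a last-minute rush.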
A rogue coder: Not all coders produce code of uniformly high quality. It is possible that there is consistent underachievement within a project. Rogue coders are normally caught by human code review. However, as a secondary defense, consider setting thresholds on static code analysis reports from FindBugs and PMD. If a particular developer is not following accepted practice, then builds will fail with great regularity.
See the Finding bugs with Findbugs recipe in Chapter 5, Using Metrics to Improve Quality.
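A threshold check on a static analysis report can be as simple as counting the high-priority findings. The sketch below assumes the FindBugs XML report format, in which each finding is a `BugInstance` element whose `priority` attribute uses lower numbers for more severe bugs; the function name and the default cut-off are hypothetical.

```python
import xml.etree.ElementTree as ET


def count_high_priority_bugs(report_xml, max_priority=1):
    """Count BugInstance elements at or above the given priority.

    In FindBugs reports, a lower priority number means a more
    severe finding, so priority <= max_priority selects the
    worst ones.
    """
    root = ET.fromstring(report_xml)
    return sum(1 for bug in root.iter("BugInstance")
               if int(bug.get("priority", "3")) <= max_priority)
```

A build step could fail the build whenever this count exceeds an agreed limit, turning repeated bad practice into repeated red builds.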
The GUI does not make sense: Isn't it painful when you build a web application only to be told at the last moment that the GUI does not quite interact in the way the product owner expected? One solution is to write a mockup in Fitnesse and surround it with automated functional tests using fixtures. When the GUI diverges from the planned workflow, Jenkins will start shouting.
See the Activating the Fitnesse HTMLUnit Fixtures recipe in Chapter 6, Testing Remotely.
Tracking responsibility: Mistakes are made and lessons need to be learned. However, if there is no clear chain of documented responsibility, then it is difficult to pin down who needs the learning opportunity. One approach is to structure the workflow in Jenkins as a series of connected jobs, and use the promoted builds plugin to make sure that the right group verifies at the right point. This methodology is also good for reminding the team of their short-term tasks.
See the Testing and then promoting recipe in Chapter 7, Exploring Plugins.