6.5 Summary
In this chapter, we’ve seen how familiar machine learning and deep learning concepts can be used to develop models with predictive uncertainties. We’ve also seen how, with relatively minor modifications, we can add uncertainty estimates to pre-trained models. This means we can go beyond the point-estimate approach of standard NNs, using uncertainties to gain valuable insights into the performance of our models and to develop more robust applications.
However, as with the methods introduced in Chapter 5, Principled Approaches for Bayesian Deep Learning, every technique has advantages and disadvantages. For example, last-layer methods give us the flexibility to add uncertainties to any model, but they’re limited by the representation the model has already learned. This can produce outputs with very low variance, resulting in an overconfident model. Similarly, while ensemble methods allow us to capture variance across every layer...