8.8 Further reading
The following reading list will offer a greater understanding of some of the topics we touched on in this chapter:
Benchmarking Neural Network Robustness to Common Corruptions and Perturbations, Dan Hendrycks and Thomas Dietterich, 2019: this is the paper that introduced the image quality perturbations for benchmarking model robustness, which we saw in the robustness case study.
Can You Trust Your Model’s Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift, Yaniv Ovadia, Emily Fertig et al., 2019: this comparison paper uses image quality perturbations to introduce artificial dataset shift at different severity levels and measures how different deep neural networks respond to dataset shift in terms of accuracy and calibration.
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks, Dan Hendrycks and Kevin Gimpel, 2016: this foundational out-of-distribution detection paper introduces the problem and shows that the maximum softmax probability provides a surprisingly effective baseline for detecting both misclassified and out-of-distribution examples.