Making AI explainable and trustworthy
AA: Something you’ve discussed is this notion of explainability in the use of ML. How important do you think explainability is, particularly in finance, and how do we actually achieve it? How do we ensure there is enough explainability, especially for fund managers and others who want a better understanding of these models? There are also requirements from risk, audit, and regulatory perspectives – how do we meet those needs?
IH: I have changed my view of explainability a few times in my career. Initially, my interest in ML methods started with various non-parametric Bayesian statistics, such as maximum entropy. Maximum entropy methods are essentially considered part of ML these days. They are very flexible and can fit almost any data.
I believe that one of the challenges of financial models is appropriately fitting the market data. If you have non-parametric models, it’s easy to fit the data. The...