Understanding the basics of XAI
AI systems extract patterns from input data and make predictions based on what they learned during training. The increased use of AI systems and applications in our everyday lives has prompted a growing demand for interpretability and accountability: model outputs must be justifiable for broader AI adoption. Unlike traditional human-made, rule-based systems, which are self-explanatory, many deep learning algorithms are inherently opaque and too complex for humans to interpret.
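To make the contrast concrete, here is a minimal sketch of a hypothetical rule-based classifier (the loan scenario, thresholds, and function name are illustrative assumptions, not from the text). Because every decision path is an explicit rule, the system can return a human-readable justification alongside its output, something a deep network cannot do out of the box:

```python
# Hypothetical rule-based loan classifier: each decision carries its own explanation.
# The rules and thresholds below are illustrative assumptions.
def approve_loan(income, debt_ratio):
    """Return (decision, reason) so every output is self-explanatory."""
    if income < 30_000:
        return False, "income below 30,000 threshold"
    if debt_ratio > 0.4:
        return False, "debt-to-income ratio above 0.4"
    return True, "income and debt ratio within accepted limits"

decision, reason = approve_loan(45_000, 0.25)
print(decision, "-", reason)  # True - income and debt ratio within accepted limits
```

A deep model trained on the same task would output only a score; recovering a justification like the one above is precisely the problem XAI addresses.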
XAI is a multidisciplinary field that spans psychology, computer science, and engineering, as pictured in Figure 2.1. Drawing on concepts from algorithms, psychology, and cognitive science, XAI aims to make AI decisions explainable, allowing users without ML backgrounds to comprehend model behavior.
Figure 2.1 – XAI as a multidisciplinary field
DARPA’s initial focus was to evaluate XAI in two problem areas: data analytics and autonomy. They...