Checking adversarial robustness
In the previous section, we discussed the importance of anticipating and monitoring drift in any production-level ML system. This kind of monitoring usually happens after the model has been deployed. But even before deployment, it is critical to check the model's adversarial robustness.
Most ML models are vulnerable to adversarial attacks, in which carefully crafted noise is injected into the input data to make the model produce incorrect predictions. Susceptibility to such attacks tends to grow with model complexity, since complex models are highly sensitive to noisy data samples. Checking for adversarial robustness, then, means evaluating how sensitive the trained model is to adversarial attacks.
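To make this concrete, the following is a minimal sketch of one classic attack, the fast gradient sign method (FGSM), applied to a hand-rolled logistic regression model. The weights, bias, and input here are hypothetical toy values chosen for illustration; real-world robustness checks typically rely on dedicated libraries rather than code like this.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assume a trained linear model with weights w and bias b (toy values).
w = np.array([2.0, -3.0])
b = 0.5

def predict(x):
    """Probability of class 1 under the logistic model."""
    return sigmoid(x @ w + b)

# A clean input that the model classifies as class 1 (p > 0.5).
x = np.array([1.0, 0.1])
p = predict(x)

# Gradient of the loss -log p(y=1 | x) with respect to x is (p - 1) * w.
grad = (p - 1.0) * w

# FGSM: take a small step in the direction of the gradient's sign.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad)

print(predict(x))      # high confidence for class 1
print(predict(x_adv))  # confidence collapses after the perturbation
```

Even though the perturbation is bounded by a small epsilon in each input dimension, it is enough to flip the model's decision, which is exactly the sensitivity that adversarial robustness checks are designed to surface.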
In this section, we will first examine the impact of adversarial attacks on a model and why this matters in the context of XAI. Then, we will discuss certain techniques that we...