Understanding model explainability during concept drift/calibration
In the preceding section, we learned about different types of concept drift. Now, let us study how we can explain them with interpretable ML:
- First, we import the packages needed to build a regression model, along with the cinnamon drift explainer library. We use the California Housing dataset to illustrate concept drift:
import pandas as pd
from xgboost import XGBRegressor
from cinnamon.drift import ModelDriftExplainer, AdversarialDriftExplainer
from sklearn.datasets import fetch_california_housing

california = fetch_california_housing()
california_df = pd.DataFrame(california.data, columns=california.feature_names)
RANDOM_SEED = 2021
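Drift analysis needs two samples to compare: a reference set (the data the model was trained on) and a "production" set representing current data. As a library-free sketch of that setup, the snippet below builds a synthetic housing-style frame (the column names and distributions are illustrative assumptions, not the real California Housing values), injects a covariate shift into the production half, and computes a crude per-feature drift signal as a standardized mean shift:

```python
import numpy as np
import pandas as pd

RANDOM_SEED = 2021
rng = np.random.default_rng(RANDOM_SEED)

# Synthetic stand-in for a housing-style feature table
df = pd.DataFrame({
    "MedInc": rng.normal(3.8, 1.9, 1000),
    "HouseAge": rng.uniform(1, 52, 1000),
    "AveRooms": rng.normal(5.4, 1.2, 1000),
})

# First half acts as the reference sample, second half as "production"
X_ref, X_prod = df.iloc[:500], df.iloc[500:]

# Simulate covariate drift in production: median incomes shift upward
X_prod = X_prod.assign(MedInc=X_prod["MedInc"] + 1.5)

# Crude drift signal: how many reference standard deviations each
# feature's mean has moved between the two samples
shift = (X_prod.mean() - X_ref.mean()) / X_ref.std()
print(shift.round(2))
```

A drift explainer automates and refines this comparison (per-feature distribution distances, model-based drift attribution), but the standardized mean shift already flags `MedInc` as the drifted feature here while the untouched columns stay near zero.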
- Then, we train the XGBoost regressor model:
model = XGBRegressor(
    n_estimators=1000,
    booster="gbtree",
    objective="reg:squarederror",
    learning_rate=0.05,
    max_depth=6,
    seed=RANDOM_SEED,
)