EBM Classification (Taiwan Credit)

Experiment initialization and data preparation

from piml import Experiment
from piml.models import ExplainableBoostingClassifier

exp = Experiment()
exp.data_loader(data="TaiwanCredit", silent=True)
# Exclude the demographic and credit-limit columns from subsequent modeling
exp.data_summary(feature_exclude=["LIMIT_BAL", "SEX", "EDUCATION", "MARRIAGE", "AGE"], silent=True)
exp.data_prepare(target="FlagDefault", task_type="classification", silent=True)
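
data_prepare splits the loaded data into training and test sets before modeling. A minimal sketch with the split made explicit is shown below; test_ratio and random_state are assumed to be the PiML parameter names for the holdout fraction and the split seed, and may differ between PiML versions.

exp.data_prepare(target="FlagDefault", task_type="classification",
                 test_ratio=0.2,    # assumed parameter: fraction of rows held out for testing
                 random_state=0,    # assumed parameter: seed for a reproducible split
                 silent=True)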

Train the model

exp.model_train(model=ExplainableBoostingClassifier(interactions=10), name="EBM")
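
An EBM is a boosted generalized additive model: on the logit scale, the prediction is an intercept plus one learned shape function per feature, plus a limited number of pairwise interaction terms, and interactions=10 asks the trainer to keep up to ten such pairwise terms. PiML's ExplainableBoostingClassifier follows the scikit-learn estimator interface, so the same model can also be fit outside the Experiment workflow. The sketch below assumes a feature DataFrame X and a binary target y that are not defined in this example.

ebm = ExplainableBoostingClassifier(interactions=10)  # main effects plus up to 10 pairwise terms
ebm.fit(X, y)                                         # X, y assumed available (not created above)
proba = ebm.predict_proba(X)[:, 1]                    # predicted probability of default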

Evaluate predictive performance

exp.model_diagnose(model="EBM", show='accuracy_table')
          ACC      AUC      F1  LogLoss    Brier
Train  0.8218   0.7896  0.4778   0.4239   0.1327
Test   0.8272   0.7753  0.4792   0.4222   0.1312
Gap    0.0054  -0.0143  0.0013  -0.0017  -0.0016
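
The columns are standard binary-classification metrics (accuracy, ROC AUC, F1, log loss, and Brier score), and the Gap row is the test value minus the train value, so the small gaps here indicate little overfitting. As a hedged sketch, the test-row numbers could be reproduced with scikit-learn given the held-out labels y_test and the model's predicted default probabilities proba (assumed names, not produced by the code above):

from sklearn.metrics import (accuracy_score, roc_auc_score, f1_score,
                             log_loss, brier_score_loss)

pred = (proba >= 0.5).astype(int)          # hard labels at the usual 0.5 cutoff
acc = accuracy_score(y_test, pred)         # ACC column
auc = roc_auc_score(y_test, proba)         # AUC column
f1 = f1_score(y_test, pred)                # F1 column
ll = log_loss(y_test, proba)               # LogLoss column
brier = brier_score_loss(y_test, proba)    # Brier column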

Effect importance

exp.model_interpret(model="EBM", show="global_ei", figsize=(5, 4))
Output figure: Effect Importance

Feature importance

exp.model_interpret(model="EBM", show="global_fi", figsize=(5, 4))
Output figure: Feature Importance

Global effect plot

exp.model_interpret(model="EBM", show="global_effect_plot", uni_feature="PAY_1",
                    original_scale=True, figsize=(5, 4))
Output figure (global effect plot): PAY_1 (51.7%)
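
The global effect plot draws the fitted main-effect (shape) function of the chosen feature on its original scale; the percentage in the panel title reports that effect's importance share. The same call renders any other main effect, for example the second repayment-status column (assuming it kept the name PAY_2 after preprocessing):

exp.model_interpret(model="EBM", show="global_effect_plot", uni_feature="PAY_2",
                    original_scale=True, figsize=(5, 4))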

Local interpretation by effect

exp.model_interpret(model="EBM", show="local_ei", sample_id=0, original_scale=True, figsize=(5, 4))
Output figure (local effects, sample 0): Predicted: 0.2236 | Actual: 0.0000

Local interpretation by feature

exp.model_interpret(model="EBM", show="local_fi", sample_id=0, original_scale=True, figsize=(5, 4))
Output figure (local feature contributions, sample 0): Predicted: 0.2236 | Actual: 0.0000
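
Both local views decompose the same prediction into additive contributions, the first organized by fitted effect and the second by input feature. The Predicted value in the titles is the model's default probability for sample 0; as a sketch, it could be recovered directly from the fitted classifier, assuming ebm is the trained estimator and x0 is a one-row DataFrame holding sample 0's features (neither is exposed by the code above):

p0 = ebm.predict_proba(x0)[0, 1]   # ebm and x0 are assumed names, not defined in this example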

Total running time of the script: (1 minute 25.154 seconds)

Estimated memory usage: 34 MB
