.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples\5_compare\plot_1_compare_fairness.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_5_compare_plot_1_compare_fairness.py>`
        to download the full example code or to run this example in your browser via Binder

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_5_compare_plot_1_compare_fairness.py:


Fairness Comparison
========================================

.. GENERATED FROM PYTHON SOURCE LINES 8-9

Experiment initialization and data preparation: load the SimuCredit data, exclude the protected
attributes Race and Gender from the model inputs, and prepare a binary classification task on the
Approved target.

.. GENERATED FROM PYTHON SOURCE LINES 9-17

.. code-block:: default

    from piml import Experiment
    from piml.models import GLMClassifier, ExplainableBoostingClassifier

    exp = Experiment()
    exp.data_loader("SimuCredit", silent=True)
    exp.data_summary(feature_exclude=["Race", "Gender"], silent=True)
    exp.data_prepare(target="Approved", task_type="classification", silent=True)

.. GENERATED FROM PYTHON SOURCE LINES 18-19

Train the two models to be compared: a GLM classifier and an explainable boosting machine (EBM).

.. GENERATED FROM PYTHON SOURCE LINES 19-22

.. code-block:: default

    exp.model_train(GLMClassifier(), name="GLM")
    exp.model_train(ExplainableBoostingClassifier(), name="EBM")

.. GENERATED FROM PYTHON SOURCE LINES 23-24

Fairness metric comparison: compute the Adverse Impact Ratio (AIR) for both models, using
Race = 1 and Gender = 1 as the reference groups and Race = 0 and Gender = 0 as the protected
groups, with 0.5 as the favorable decision threshold on the predicted probability.

.. GENERATED FROM PYTHON SOURCE LINES 24-35

.. code-block:: default

    metrics_result = exp.model_fairness_compare(models=["GLM", "EBM"],
                                                show="metrics",
                                                metric="AIR",
                                                group_category=["Race", "Gender"],
                                                reference_group=[1., 1.],
                                                protected_group=[0., 0.],
                                                favorable_threshold=0.5,
                                                return_data=True,
                                                figsize=(6, 4))
    metrics_result.data

.. image-sg:: /auto_examples/5_compare/images/sphx_glr_plot_1_compare_fairness_001.png
   :alt: Adverse Impact Ratio
   :srcset: /auto_examples/5_compare/images/sphx_glr_plot_1_compare_fairness_001.png
   :class: sphx-glr-single-img
===========  ==============  ===============  ===============  =======  =======
Group Index  Group Category  Reference Group  Protected Group  GLM_AIR  EBM_AIR
===========  ==============  ===============  ===============  =======  =======
0            Race            1.0              0.0              0.7124   0.6453
1            Gender          1.0              0.0              0.8326   0.7819
===========  ==============  ===============  ===============  =======  =======
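Both models fall below the commonly cited four-fifths (0.8) benchmark for Race. For reference,
AIR is simply the favorable-outcome rate of the protected group divided by that of the reference
group, with predictions dichotomized at ``favorable_threshold``. The snippet below is a minimal
standalone sketch of that calculation, not PiML's internal implementation; the function name and
arguments are illustrative.

.. code-block:: python

    import numpy as np

    def adverse_impact_ratio(scores, group, protected=0.0, reference=1.0, threshold=0.5):
        """Favorable rate of the protected group divided by that of the reference group."""
        scores, group = np.asarray(scores, dtype=float), np.asarray(group, dtype=float)
        favorable = scores >= threshold  # dichotomize predicted probabilities
        protected_rate = favorable[group == protected].mean()
        reference_rate = favorable[group == reference].mean()
        return protected_rate / reference_rate

    # Example with hypothetical scores and a 0/1 group indicator (not the SimuCredit data):
    air = adverse_impact_ratio(scores=[0.9, 0.3, 0.6, 0.8], group=[0.0, 0.0, 1.0, 1.0])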
.. GENERATED FROM PYTHON SOURCE LINES 36-37

Segmented fairness comparison: recompute AIR within five bins of the Balance feature, so that
fairness can be inspected segment by segment. In the output below, Group Index 0 corresponds to
Race and Group Index 1 to Gender, as in the table above.

.. GENERATED FROM PYTHON SOURCE LINES 37-49

.. code-block:: default

    segmented_result = exp.model_fairness_compare(models=["GLM", "EBM"],
                                                  show="segmented",
                                                  metric="AIR",
                                                  segment_feature="Balance",
                                                  group_category=["Race", "Gender"],
                                                  reference_group=[1., 1.],
                                                  protected_group=[0., 0.],
                                                  favorable_threshold=0.5,
                                                  segment_bins=5,
                                                  return_data=True,
                                                  figsize=(8, 4))
    segmented_result.data

.. image-sg:: /auto_examples/5_compare/images/sphx_glr_plot_1_compare_fairness_002.png
   :alt: Group: 0 (Race), Group: 1 (Gender)
   :srcset: /auto_examples/5_compare/images/sphx_glr_plot_1_compare_fairness_002.png
   :class: sphx-glr-single-img
=======  ===========  ===========  ===========  =======  =======
Segment  Lower Bound  Upper Bound  Group Index  GLM_AIR  EBM_AIR
=======  ===========  ===========  ===========  =======  =======
0        0.97         306.61       0            0.8251   0.4405
0        0.97         306.61       1            0.9017   0.6953
1        306.62       601.40       0            0.6900   0.5457
1        306.62       601.40       1            0.7452   0.6506
2        601.47       1027.23      0            0.6338   0.6963
2        601.47       1027.23      1            0.5918   0.7080
3        1027.29      1864.90      0            0.6135   0.7131
3        1027.29      1864.90      1            0.6028   0.8209
4        1864.94      20384.87     0            0.7490   0.8734
4        1864.94      20384.87     1            0.7147   0.8212
=======  ===========  ===========  ===========  =======  =======
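With ``return_data=True``, the segmented results can also be inspected programmatically. As a
small usage sketch (assuming ``segmented_result.data`` is the pandas DataFrame shown above, which
is an assumption about the returned object rather than documented API behavior), one could locate
the Balance segment with the lowest AIR for each model:

.. code-block:: python

    seg_df = segmented_result.data  # assumed to be a pandas DataFrame with the columns shown above

    # For each model, report the segment/group combination with the lowest AIR.
    for col in ["GLM_AIR", "EBM_AIR"]:
        worst = seg_df.loc[seg_df[col].idxmin()]
        print(f"{col}: lowest AIR = {worst[col]:.4f} for group index {int(worst['Group Index'])} "
              f"in Balance range [{worst['Lower Bound']}, {worst['Upper Bound']}]")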
.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 57.140 seconds)

**Estimated memory usage:** 21 MB


.. _sphx_glr_download_auto_examples_5_compare_plot_1_compare_fairness.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: binder-badge

      .. image:: images/binder_badge_logo.svg
        :target: https://mybinder.org/v2/gh/selfexplainml/piml-toolbox/main?urlpath=lab/tree/./docs/_build/html/notebooks/auto_examples/5_compare/plot_1_compare_fairness.ipynb
        :alt: Launch binder
        :width: 150 px

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_1_compare_fairness.py <plot_1_compare_fairness.py>`

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_1_compare_fairness.ipynb <plot_1_compare_fairness.ipynb>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_