
Note

Go to the end to download the full example code or to run this example in your browser via Binder

HPO - Random Search

Experiment initialization and data preparation

import scipy
from piml import Experiment
from piml.models import GLMClassifier

exp = Experiment()
exp.data_loader("SimuCredit", silent=True)
exp.data_summary(feature_exclude=["Race", "Gender"], silent=True)
exp.data_prepare(target="Approved", task_type="classification", silent=True)

Train Model

exp.model_train(model=GLMClassifier(), name="GLM")

Define the hyperparameter search space for randomized search

parameters = {'l1_regularization': scipy.stats.uniform(0, 0.1),
              'l2_regularization': scipy.stats.uniform(0, 0.1)}
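Here `scipy.stats.uniform(loc, scale)` draws from the half-open interval `[loc, loc + scale)`, so both penalties are sampled uniformly from `[0, 0.1)`. A quick sketch of what the sampler produces (the `loguniform` line shows a common alternative for ranges spanning orders of magnitude; it is not part of this example):

```python
from scipy import stats

# uniform(loc, scale) samples from [loc, loc + scale); here both
# penalties are drawn uniformly from [0, 0.1).
dist = stats.uniform(0, 0.1)
samples = dist.rvs(size=5, random_state=0)
print(samples)  # five candidate values, all in [0, 0.1)

# Log-uniform sampling is a common alternative when the useful range
# spans orders of magnitude (assumption: not used in this example).
log_dist = stats.loguniform(1e-4, 1e-1)
```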

Tune hyperparameters of registered models

result = exp.model_tune("GLM", method="randomized", parameters=parameters, n_runs=100,
                        metric="AUC", test_ratio=0.2)
result.data
Rank(by AUC)       AUC      time  params
           1  0.729472  0.058955  {'l1_regularization': 0.020239411195815772, 'l2_regularization': 0.002493911719174813}
           2  0.729471  0.049783  {'l1_regularization': 0.0801649627572255, 'l2_regularization': 0.0009505206902733599}
           3  0.729470  0.070307  {'l1_regularization': 0.07054458515086691, 'l2_regularization': 0.002443422808146689}
           4  0.729469  0.054850  {'l1_regularization': 0.0200826243066503, 'l2_regularization': 0.0038925653368613645}
           5  0.729469  0.038847  {'l1_regularization': 0.08763141285690618, 'l2_regularization': 0.0011317924903120004}
         ...       ...       ...  ...
          96  0.729274  0.046059  {'l1_regularization': 0.014018374389893852, 'l2_regularization': 0.08885523820070576}
          97  0.729273  0.041242  {'l1_regularization': 0.06720040139301987, 'l2_regularization': 0.08585578494124796}
          97  0.729273  0.047853  {'l1_regularization': 0.05105133900697683, 'l2_regularization': 0.0845254996471495}
          99  0.729267  0.046578  {'l1_regularization': 0.05346229516211951, 'l2_regularization': 0.09880860499665617}
         100  0.729255  0.044768  {'l1_regularization': 0.09599608264816151, 'l2_regularization': 0.09690348478707304}

100 rows × 3 columns
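The tuning result exposes the runs as `result.data`. Assuming that object behaves like a pandas DataFrame (which the tabular display above suggests), the top-ranked row can be pulled out with standard pandas indexing. A minimal stand-in sketch using the first few rows from the table:

```python
import pandas as pd

# Minimal stand-in for result.data (assumption: the real object is a
# pandas DataFrame with these columns; values copied from the table above).
df = pd.DataFrame({
    "Rank(by AUC)": [1, 2, 3],
    "AUC": [0.729472, 0.729471, 0.729470],
    "time": [0.058955, 0.049783, 0.070307],
})

# Select the rank-1 run and read off its AUC.
best = df.loc[df["Rank(by AUC)"] == 1]
print(float(best["AUC"].iloc[0]))  # 0.729472
```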



Refit the model using the selected hyperparameters

params = result.get_params_ranks(rank=1)
exp.model_train(GLMClassifier(**params), name="GLM-HPO-RandSearch")
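`get_params_ranks(rank=1)` returns the rank-1 hyperparameter dict, and the `**` operator unpacks it into keyword arguments for the model constructor. A stdlib-only sketch of that unpacking (`glm_classifier` here is a hypothetical stand-in, not the real `GLMClassifier`):

```python
# Hyperparameter dict, as returned by get_params_ranks(rank=1)
# (values abbreviated for illustration).
params = {"l1_regularization": 0.0202, "l2_regularization": 0.0025}

def glm_classifier(l1_regularization=0.0, l2_regularization=0.0):
    # Hypothetical stand-in for GLMClassifier's constructor signature.
    return {"l1": l1_regularization, "l2": l2_regularization}

# **params expands the dict into l1_regularization=..., l2_regularization=...
model = glm_classifier(**params)
print(model)  # {'l1': 0.0202, 'l2': 0.0025}
```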

Diagnose the default model

exp.model_diagnose("GLM", show="accuracy_table")
           ACC     AUC      F1 LogLoss   Brier
Train   0.6722  0.7309  0.6965  0.6047  0.2088
Test    0.6690  0.7318  0.6976  0.6073  0.2095
Gap    -0.0032  0.0009  0.0011  0.0026  0.0008

Diagnose the HPO refitted model

exp.model_diagnose("GLM-HPO-RandSearch", show="accuracy_table")
           ACC     AUC      F1 LogLoss   Brier
Train   0.6721  0.7309  0.6964  0.6047  0.2088
Test    0.6690  0.7318  0.6976  0.6073  0.2095
Gap    -0.0031  0.0009  0.0012  0.0026  0.0008
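On the test split the two models are essentially indistinguishable: every reported metric matches to four decimals, which suggests this GLM is insensitive to penalty values this small on this data. A quick check using the test rows from the two tables above:

```python
# Test-row metrics copied from the two accuracy tables above.
default_test = {"ACC": 0.6690, "AUC": 0.7318, "F1": 0.6976,
                "LogLoss": 0.6073, "Brier": 0.2095}
tuned_test = {"ACC": 0.6690, "AUC": 0.7318, "F1": 0.6976,
              "LogLoss": 0.6073, "Brier": 0.2095}

# Per-metric difference, rounded to the precision of the tables.
delta = {k: round(tuned_test[k] - default_test[k], 4) for k in default_test}
print(delta)  # all zeros: tuning barely moves test performance here
```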

Total running time of the script: (1 minute 24.760 seconds)

Estimated memory usage: 15 MB

Launch binder

Download Python source code: plot_1_hpo_random.py

Download Jupyter notebook: plot_1_hpo_random.ipynb

Gallery generated by Sphinx-Gallery

© Copyright 2022-, PiML-Toolbox authors.