piml.models.GAMINetClassifier

class piml.models.GAMINetClassifier(feature_names=None, feature_types=None, interact_num=10, subnet_size_main_effect=(20,), subnet_size_interaction=(20, 20), activation_func='ReLU', max_epochs=(1000, 1000, 1000), learning_rates=(0.001, 0.001, 0.0001), early_stop_thres=('auto', 'auto', 'auto'), batch_size=1000, batch_size_inference=10000, max_iter_per_epoch=100, val_ratio=0.2, warm_start=True, gam_sample_size=5000, mlp_sample_size=1000, heredity=True, reg_clarity=0.1, loss_threshold=0.01, reg_mono=0.1, mono_increasing_list=(), mono_decreasing_list=(), mono_sample_size=1000, include_interaction_list=(), boundary_clip=True, normalize=True, verbose=False, n_jobs=10, device='cpu', random_state=0)

Generalized additive model with pairwise interactions (GAMI-Net) for classification.

Parameters:
feature_names : list or None, default=None

The list of feature names.

feature_types : list or None, default=None

The list of feature types. Available types include "numerical" and "categorical".

interact_num : int, default=10

The maximum number of interactions to be included in the second-stage training.

subnet_size_main_effect : tuple of int, default=(20,)

The hidden layer architecture of each subnetwork in the main effect block.

subnet_size_interaction : tuple of int, default=(20, 20)

The hidden layer architecture of each subnetwork in the interaction block.

activation_func : {"ReLU", "Sigmoid", "Tanh"}, default="ReLU"

The name of the activation function.

max_epochs : tuple of int, default=(1000, 1000, 1000)

The maximum number of epochs in the first (main effect training), second (interaction training), and third (fine-tuning) stages, respectively.

learning_rates : tuple of float, default=(1e-3, 1e-3, 1e-4)

The initial learning rates of the Adam optimizer in the first (main effect training), second (interaction training), and third (fine-tuning) stages, respectively.

early_stop_thres : tuple of int or "auto", default=("auto", "auto", "auto")

The early stopping threshold in the first (main effect training), second (interaction training), and third (fine-tuning) stages, respectively. In auto mode, the value is set to max(5, min(5000 * n_features / (max_iter_per_epoch * batch_size), 100)).
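For concreteness, the auto rule quoted above can be evaluated directly; with the default max_iter_per_epoch=100 and batch_size=1000, the clipping bounds dominate for small and very large feature counts:

```python
# The auto early-stopping rule quoted above.
def auto_early_stop_thres(n_features, max_iter_per_epoch=100, batch_size=1000):
    return max(5, min(5000 * n_features / (max_iter_per_epoch * batch_size), 100))

print(auto_early_stop_thres(10))    # 0.5 before clipping -> 5
print(auto_early_stop_thres(4000))  # 200.0 before clipping -> 100
```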

batch_size : int, default=1000

The batch size. Note that it should not be larger than the training size * (1 - validation ratio).

batch_size_inference : int, default=10000

The batch size used in the inference stage. It is imposed to avoid out-of-memory issues when dealing with very large datasets.

max_iter_per_epoch : int, default=100

The maximum number of iterations per epoch. In the init stage of model fit, its value will be clipped by min(max_iter_per_epoch, int(sample_size / batch_size)). For each epoch, the data is reshuffled and only the first "max_iter_per_epoch" batches are used for training. It is imposed to keep training scalable for very large datasets.

val_ratio : float, default=0.2

The validation ratio; should be greater than 0 and smaller than 1.

warm_start : bool, default=True

Whether to initialize the network by fitting a rough B-spline-based GAM model with tensor-product interactions. The initialization proceeds as follows: 1) fit a B-spline GAM as the teacher model; 2) generate random samples from the teacher model; 3) fit each subnetwork using the generated samples. It is used for both main effect and interaction subnetwork initialization.

gam_sample_size : int, default=5000

The sub-sample size for GAM fitting when warm_start=True.

mlp_sample_size : int, default=1000

The generated sample size for fitting individual subnetworks when warm_start=True.

heredity : bool, default=True

Whether to perform interaction screening subject to the heredity constraint.

loss_threshold : float, default=0.01

The loss tolerance threshold for selecting fewer main effects or interactions, according to the validation performance. For instance, assume the best validation performance is achieved when using 10 main effects; if using only the top 5 main effects gives similar validation performance, the last 5 can be pruned by setting this parameter to a positive value.

reg_clarity : float, default=0.1

The regularization strength of the marginal clarity constraint.

reg_mono : float, default=0.1

The regularization strength of the monotonicity constraint.

mono_sample_size : int, default=1000

When the monotonicity constraint is used, this many data points are generated uniformly within the feature space per epoch, to impose the monotonicity regularization in addition to the original training samples.

mono_increasing_list : tuple of str, default=()

The tuple of feature names subject to the monotonic increasing constraint.

mono_decreasing_list : tuple of str, default=()

The tuple of feature names subject to the monotonic decreasing constraint.

include_interaction_list : tuple of (str, str), default=()

The tuple of interactions to be included for fitting; each interaction is expressed as (feature_name1, feature_name2).

boundary_clip : bool, default=True

In the inference stage, whether to clip the feature values by their min and max values in the training data.

normalize : bool, default=True

Whether to normalize the data before feeding it to the network.

verbose : bool, default=False

Whether to output the training logs.

n_jobs : int, default=10

The number of CPU cores for parallel computing. -1 means all available CPUs will be used.

device : str, default="cpu"

The hardware device name used for training.

random_state : int, default=0

The random seed.

Attributes:
net_ : torch network object

The fitted GAMI-Net module.

data_dict_density_ : dict

The dict containing the marginal density of each input feature.

err_train_main_effect_training_ : list of float

The training loss history in the main effect fitting stage.

err_val_main_effect_training_ : list of float

The validation loss history in the main effect fitting stage.

err_train_interaction_training_ : list of float

The training loss history in the interaction fitting stage.

err_val_interaction_training_ : list of float

The validation loss history in the interaction fitting stage.

err_train_tuning_ : list of float

The training loss history in the fine-tuning stage.

err_val_tuning_ : list of float

The validation loss history in the fine-tuning stage.

interaction_list_ : list of tuple

The list of feature index pairs (tuples) for each fitted interaction.

active_main_effect_index_ : list of int

The indices of the selected main effects.

active_interaction_index_ : list of int

The indices of the selected interactions.

main_effect_val_loss_ : list of float

The validation loss as the most important main effects are sequentially added.

interaction_val_loss_ : list of float

The validation loss as the most important interactions are sequentially added.

time_cost_ : list of tuple

The time cost of each stage.

n_features_in_ : int

The number of input features.

clarity_ : bool

Indicator of whether marginal clarity regularization is turned on.

monotonicity_ : bool

Indicator of whether monotonicity regularization is turned on.

is_fitted_ : bool

Indicator of whether the model is fitted.

n_interactions_ : int

The actual number of interactions used in the fitting stage. It is greater than or equal to the number of active interactions.

dummy_values_ : dict

The dict containing the categories of each categorical feature.

cfeature_num_ : int

The number of categorical features.

nfeature_num_ : int

The number of continuous features.

cfeature_names_ : list of str

The name list of categorical features.

nfeature_names_ : list of str

The name list of continuous features.

cfeature_index_list_ : list of int

The index list of categorical features.

nfeature_index_list_ : list of int

The index list of continuous features.

num_classes_list_ : list of int

The number of categories for each categorical feature.

mu_list_ : list of float

The average value of each feature, calculated from the training data. For categorical features, the average value is fixed at 0.

std_list_ : list of float

The standard deviation of each feature, calculated from the training data. For categorical features, the standard deviation is fixed at 1.

feature_names_ : list of str

The feature name list of all input features.

feature_types_ : list of str

The feature type list of all input features.

min_value_ : torch.Tensor

The min values of input features (obtained from the training data).

max_value_ : torch.Tensor

The max values of input features (obtained from the training data).

mono_increasing_list_index_ : list of int

The index list of features subject to the monotonic increasing constraint.

mono_decreasing_list_index_ : list of int

The index list of features subject to the monotonic decreasing constraint.

include_interaction_list_index_ : list of tuple

The list of manually included interactions' index tuples.

training_generator_ : FastTensorDataLoader

A loader for the training set (excluding the validation set).

validation_generator_ : FastTensorDataLoader

A loader for the validation set.

warm_init_main_effect_data_ : dict

The dict containing the information for main effect warm initialization.

warm_init_interaction_data_ : dict

The dict containing the information for interaction warm initialization.

main_effect_norm_ : np.ndarray

The variance of each main effect output (calculated from the training data).

interaction_norm_ : np.ndarray

The variance of each interaction output (calculated from the training data).

feature_importance_ : np.ndarray

Normalized feature importance.

data_dict_global_ : dict

The global interpretation results, generated by the self.global_explain() function.

Methods

certify_mono([n_samples])

Certify whether monotonicity constraint is satisfied.

decision_function(X[, main_effect, interaction])

Returns numpy array of raw predicted value before softmax.

fine_tune_selected(main_effect_list, ...[, ...])

Fine-tuning with some selected effects (unselected would be pruned).

fit(x, y[, sample_weight])

Fit GAMINetClassifier model.

get_clarity_loss(x[, sample_weight])

Returns clarity loss of given samples.

get_interaction_raw_output(x)

Returns numpy array of interactions' raw prediction.

get_main_effect_raw_output(x)

Returns numpy array of main effects' raw prediction.

get_metadata_routing()

Get metadata routing of this object.

get_mono_loss(x[, sample_weight])

Returns monotonicity loss of given samples.

get_params([deep])

Get parameters for this estimator.

get_raw_output(x[, main_effect, interaction])

Returns numpy array of raw prediction.

load([folder, name])

Load a model from local disk.

parse_model()

Interpret the model using functional ANOVA.

partial_dependence(fidx, X)

Partial dependence of given effect index.

partial_derivatives(feature_name[, n_samples])

Plot the first-order partial derivatives w.r.t. a given feature.

predict(x[, main_effect, interaction])

Returns numpy array of predicted class.

predict_proba(x[, main_effect, interaction])

Returns numpy array of predicted probabilities of each class.

save([folder, name])

Save a model to local disk.

score(X, y[, sample_weight])

Return the mean accuracy on the given test data and labels.

set_params(**params)

Set the parameters of this estimator.

set_score_request(*[, sample_weight])

Request metadata passed to the score method.

certify_mono(n_samples=10000)

Certify whether monotonicity constraint is satisfied.

Parameters:
n_samples : int, default=10000

Size of random samples for certifying the monotonicity constraint.

Returns:
mono_status : bool

True means monotonicity constraint is satisfied.

decision_function(X, main_effect=True, interaction=True)

Returns numpy array of raw predicted value before softmax.

Parameters:
X : np.ndarray of shape (n_samples, n_features)

Data features.

main_effect : bool, default=True

Whether to include main effects.

interaction : bool, default=True

Whether to include interactions.

Returns:
pred : np.ndarray of shape (n_samples, )

The raw predicted values before softmax.

fine_tune_selected(main_effect_list, interaction_list, max_epochs=1000, lr=0.001, early_stop_thres=5, verbose=False)

Fine-tuning with some selected effects (unselected would be pruned).

All the network parameters are updated together. Clarity regularization is triggered and only penalizes the interaction subnetworks. Monotonicity regularization is imposed if self.mono_decreasing_list or self.mono_increasing_list is not empty. After training, the mean and norm of each effect are updated, and the subnetworks are also centered.

fit(x, y, sample_weight=None)

Fit GAMINetClassifier model.

Parameters:
x : np.ndarray of shape (n_samples, n_features)

Data features.

y : np.ndarray of shape (n_samples, )

Target response.

sample_weight : np.ndarray of shape (n_samples, ), default=None

Sample weight.

Returns:
self : object

Fitted Estimator.

get_clarity_loss(x, sample_weight=None)

Returns clarity loss of given samples.

Parameters:
x : np.ndarray of shape (n_samples, n_features)

Data features.

sample_weight : np.ndarray of shape (n_samples, ), default=None

Sample weight.

Returns:
clarity_loss : float

The clarity loss.

get_interaction_raw_output(x)

Returns numpy array of interactions’ raw prediction.

Parameters:
x : np.ndarray of shape (n_samples, n_features)

Data features.

Returns:
pred : np.ndarray of shape (n_samples, n_interactions)

numpy array of interactions’ raw prediction.

get_main_effect_raw_output(x)

Returns numpy array of main effects’ raw prediction.

Parameters:
x : np.ndarray of shape (n_samples, n_features)

Data features.

Returns:
pred : np.ndarray of shape (n_samples, n_features)

numpy array of main effects’ raw prediction.

get_metadata_routing()

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:
routing : MetadataRequest

A MetadataRequest encapsulating routing information.

get_mono_loss(x, sample_weight=None)

Returns monotonicity loss of given samples.

Parameters:
x : np.ndarray of shape (n_samples, n_features)

Data features.

sample_weight : np.ndarray of shape (n_samples, ), default=None

Sample weight.

Returns:
mono_loss : float

The monotonicity loss.

get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.

get_raw_output(x, main_effect=True, interaction=True)

Returns numpy array of raw prediction.

Parameters:
x : np.ndarray of shape (n_samples, n_features)

Data features.

main_effect : bool, default=True

Whether to include main effects.

interaction : bool, default=True

Whether to include interactions.

Returns:
pred : np.ndarray of shape (n_samples, 1)

numpy array of raw prediction.

load(folder='./', name='demo')

Load a model from local disk.

Parameters:
folder : str, default="./"

The path of folder.

name : str, default="demo"

Name of the file.

parse_model()

Interpret the model using functional ANOVA.

Returns:
An instance of FANOVAInterpreter

The interpretation results.

partial_dependence(fidx, X)

Partial dependence of given effect index.

Parameters:
fidx : tuple of int

The main effect or pairwise interaction feature index.

X : np.ndarray of shape (n_samples, n_features)

Data features.

Returns:
pred : np.ndarray of shape (n_samples, )

The partial dependence values of the given effect.
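A sketch of how the effect index is formed, assuming a fitted GAMINetClassifier `clf` and feature matrix X; the one-element-tuple form for main effects is an assumption based on the fidx type above:

```python
# Sketch (assumes a fitted GAMINetClassifier `clf` and feature matrix X).
fidx_main = (0,)    # main effect of the first feature (assumed one-element tuple form)
fidx_pair = (0, 1)  # pairwise interaction of the first two features
# pd_main = clf.partial_dependence(fidx_main, X)  # shape (n_samples,)
# pd_pair = clf.partial_dependence(fidx_pair, X)
print(fidx_main, fidx_pair)
```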

partial_derivatives(feature_name, n_samples=10000)

Plot the first-order partial derivatives w.r.t. given feature index.

Parameters:
feature_name : str

Feature name.

n_samples : int, default=10000

Size of random samples to plot the derivatives.

predict(x, main_effect=True, interaction=True)

Returns numpy array of predicted class.

Parameters:
x : np.ndarray of shape (n_samples, n_features)

Data features.

main_effect : bool, default=True

Whether to include main effects.

interaction : bool, default=True

Whether to include interactions.

Returns:
pred : np.ndarray of shape (n_samples, )

numpy array of predicted class values.

predict_proba(x, main_effect=True, interaction=True)

Returns numpy array of predicted probabilities of each class.

Parameters:
x : np.ndarray of shape (n_samples, n_features)

Data features.

main_effect : bool, default=True

Whether to include main effects.

interaction : bool, default=True

Whether to include interactions.

Returns:
pred_proba : np.ndarray of shape (n_samples, 2)

The predicted probabilities of each class.

save(folder='./', name='demo')

Save a model to local disk.

Parameters:
folder : str, default="./"

The path of folder.

name : str, default="demo"

Name of the file.

score(X, y, sample_weight=None)

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters:
X : array-like of shape (n_samples, n_features)

Test samples.

y : array-like of shape (n_samples,) or (n_samples, n_outputs)

True labels for X.

sample_weight : array-like of shape (n_samples,), default=None

Sample weights.

Returns:
score : float

Mean accuracy of self.predict(X) w.r.t. y.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.

set_score_request(*, sample_weight: Union[bool, None, str] = '$UNCHANGED$') → GAMINetClassifier

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for sample_weight parameter in score.

Returns:
self : object

The updated object.

Examples using piml.models.GAMINetClassifier

Build Robust Models with Monotonicity Constraints
