piml.models.ReluDNNRegressor

class piml.models.ReluDNNRegressor(feature_names=None, feature_types=None, hidden_layer_sizes=(40, 40), dropout_prob=0.0, max_epochs=1000, learning_rate=0.001, batch_size=500, batch_size_inference=10000, l1_reg=1e-05, val_ratio=0.2, n_epoch_no_change=20, iht=False, phase_epochs=50, threshold=0.1, device='cpu', verbose=False, random_state=0)

Multi-layer perceptron regressor with ReLU activation function.

Parameters:
feature_names : list or None, default=None

The list of feature names.

feature_types : list or None, default=None

The list of feature types. Available types include “numerical” and “categorical”.

hidden_layer_sizes : tuple of int, default=(40, 40)

The sizes of the hidden layers.

dropout_prob : float, default=0.0

Dropout probability.

max_epochs : int, default=1000

The maximum number of training epochs.

learning_rate : float, default=0.001

Learning rate for model training.

batch_size : int, default=500

Batch size for training.

batch_size_inference : int, default=10000

The batch size used in the inference stage. It is imposed to avoid out-of-memory issues when dealing with very large datasets.

l1_reg : float, default=1e-5

Lambda parameter for L1 regularization.

val_ratio : float, default=0.2

Validation ratio used for early stopping.

n_epoch_no_change : int, default=20

Stops training if the loss does not improve over the last n_epoch_no_change epochs; used for early stopping.

iht : bool, default=False

Whether to perform IHT (Iterative Hard Thresholding).

phase_epochs : int, default=50

Number of phase 1 and phase 2 epochs for IHT; required when iht is True.

threshold : float, default=0.1

Threshold value for performing IHT; required when iht is True.

device : str, default='cpu'

Computational device: cuda or cpu.

verbose : bool, default=False

Whether to display training statistics (loss).

random_state : int, default=0

Determines random number generation for weights and bias initialization.

Attributes:
feature_names_ : list of str

The feature name list of all input features.

feature_types_ : list of str

The feature type list of all input features.

n_features_in_ : int

The number of input features.

coefs_ : list of shape (len(hidden_layer_sizes) + 1,)

The ith element in the list represents the weight matrix corresponding to layer i.

intercepts_ : list of shape (len(hidden_layer_sizes) + 1,)

The ith element in the list represents the bias vector corresponding to layer i + 1.

net_ : object

The internal PyTorch network object.

no_improved_count_ : int

The count of epochs with no improvement.

train_epoch_loss_ : list of float

The training loss over each epoch.

valid_epoch_loss_ : list of float

The validation loss over each epoch.

is_fitted_ : bool

The fitting status of the estimator.
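
A minimal usage sketch on a synthetic dataset (the data, feature names, and hyperparameter values below are illustrative assumptions, not recommendations):

    import numpy as np
    from piml.models import ReluDNNRegressor

    # Toy regression data (purely illustrative).
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(2000, 5))
    y = X[:, 0] ** 2 + np.sin(np.pi * X[:, 1]) + 0.1 * rng.normal(size=2000)

    model = ReluDNNRegressor(
        feature_names=["x1", "x2", "x3", "x4", "x5"],
        hidden_layer_sizes=(40, 40),
        learning_rate=0.001,
        l1_reg=1e-5,
        max_epochs=200,
        random_state=0,
    )
    model.fit(X, y)
    pred = model.predict(X)
    print("Training R^2:", model.score(X, y))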

Methods

fit(X, y[, sample_weight])

Fit ReluDNN model.

get_metadata_routing()

Get metadata routing of this object.

get_params([deep])

Get parameters for this estimator.

get_raw_output(X)

Returns the numpy array of raw predicted values before softmax.

parse_model()

Interpret the model using Aletheia Unwrapper.

predict(X)

Predict function.

score(X, y[, sample_weight])

Return the coefficient of determination of the prediction.

set_params(**params)

Set the parameters of this estimator.

set_score_request(*[, sample_weight])

Request metadata passed to the score method.

fit(X, y, sample_weight=None)

Fit ReluDNN model.

Parameters:
X : np.ndarray of shape (n_samples, n_features)

Data features.

y : np.ndarray of shape (n_samples, )

Target response.

sample_weight : np.ndarray of shape (n_samples, ), default=None

Sample weight.

Returns:
self : object

Fitted Estimator.
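
Continuing the usage sketch above, sample_weight can be passed as a per-sample array; the weighting scheme here is purely illustrative:

    import numpy as np

    # Hypothetical weighting: upweight the second half of the samples.
    sample_weight = np.ones(len(y))
    sample_weight[len(y) // 2:] = 2.0
    model.fit(X, y, sample_weight=sample_weight)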

get_metadata_routing()

Get metadata routing of this object.

Please check the User Guide on how the routing mechanism works.

Returns:
routing : MetadataRequest

A MetadataRequest encapsulating routing information.

get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.
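
Continuing the usage sketch above:

    # Inspect the hyperparameters stored on the estimator.
    params = model.get_params()
    print(params["hidden_layer_sizes"], params["learning_rate"])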

get_raw_output(X)

Returns the numpy array of raw predicted values before softmax.

Parameters:
X : np.ndarray of shape (n_samples, n_features)

Data features.

Returns:
pred : np.ndarray of shape (n_samples, )

The raw predicted values.

parse_model()

Interpret the model using Aletheia Unwrapper.

Returns:
An instance of ReLUDNNInterpreterRegressor

The interpretation results.
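
The exact interface of the returned ReLUDNNInterpreterRegressor is documented separately. The idea behind the Aletheia Unwrapper is that a trained ReLU network is piecewise linear: the on/off pattern of its ReLU units at a given input defines a region on which the network is an exact local linear model. A toy NumPy sketch of that idea (not the Aletheia API; the weights below are random placeholders):

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)  # hidden layer: 2 -> 4
    w2, b2 = rng.normal(size=4), rng.normal()              # output layer: 4 -> 1

    x = np.array([0.3, -0.7])
    h = W1 @ x + b1
    pattern = (h > 0).astype(float)  # ReLU activation pattern at x

    # Local linear model implied by this activation pattern.
    coef = (w2 * pattern) @ W1
    intercept = (w2 * pattern) @ b1 + b2

    net_out = w2 @ np.maximum(h, 0) + b2
    assert np.isclose(net_out, coef @ x + intercept)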

predict(X)

Predict function.

Parameters:
X : np.ndarray of shape (n_samples, n_features)

Data features.

Returns:
pred : np.ndarray of shape (n_samples, )

The predicted value.

score(X, y, sample_weight=None)

Return the coefficient of determination of the prediction.

The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0.

Parameters:
X : array-like of shape (n_samples, n_features)

Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.

y : array-like of shape (n_samples,) or (n_samples, n_outputs)

True values for X.

sample_weight : array-like of shape (n_samples,), default=None

Sample weights.

Returns:
score : float

\(R^2\) of self.predict(X) w.r.t. y.

Notes

The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
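
Continuing the usage sketch above, the returned score should match a manual evaluation of the formula given here:

    import numpy as np

    y_pred = model.predict(X)
    u = ((y - y_pred) ** 2).sum()    # residual sum of squares
    v = ((y - y.mean()) ** 2).sum()  # total sum of squares
    print(1 - u / v, model.score(X, y))  # the two values should agree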

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.
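
Continuing the usage sketch above, hyperparameters can be updated in place before refitting:

    model.set_params(learning_rate=0.0005, max_epochs=500)
    model.fit(X, y)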

set_score_request(*, sample_weight: Union[bool, None, str] = '$UNCHANGED$') → ReluDNNRegressor

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for sample_weight parameter in score.

Returns:
self : object

The updated object.
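
A minimal sketch, assuming metadata routing is enabled via sklearn.set_config; the call only takes effect when the estimator is later used inside a meta-estimator:

    import sklearn

    sklearn.set_config(enable_metadata_routing=True)
    model.set_score_request(sample_weight=True)  # returns the estimator itself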

Examples using piml.models.ReluDNNRegressor

Accumulated Local Effects

ReLU DNN Regression (Friedman)