piml.models.XGB1Regressor

class piml.models.XGB1Regressor(n_estimators=100, eta=0.3, refit_method='glm', tree_method='auto', max_bin=256, reg_lambda=1, reg_alpha=0, gamma=0, feature_names=None, feature_types=None, min_bin_size=0.01, max_bin_size=1.0, mono_increasing_list=(), mono_decreasing_list=(), random_state=0)

Depth-1 XGBoost regressor with optimal binning.

Parameters:
n_estimators : int, default=100

Number of gradient boosted trees.

eta : float, default=0.3

Boosting learning rate.

refit_method : {"glm", "xgb"}, default="glm"

The method used to refit the overall model on the optimized bins.

tree_method : {"exact", "hist", "approx", "gpu_hist", "auto"}, default="auto"

Specify which tree method to use.

max_bin : int, default=256

If using a histogram-based algorithm, the maximum number of bins per feature.

reg_alpha : float, default=0

L1 regularization term on weights.

reg_lambda : float, default=1

L2 regularization term on weights.

gamma : float, default=0

Minimum loss reduction required to make a further partition on a leaf node of the tree.

feature_names : list or None, default=None

The list of feature names.

feature_types : list or None, default=None

The list of feature types. Available types include "numerical" and "categorical".

min_bin_size : float, default=0.01

The minimum fraction of records allowed in each bin.

max_bin_size : float, default=1.0

The maximum fraction of records allowed in each bin.

mono_increasing_list : tuple of str, default=()

Tuple of feature names subject to a monotonic increasing constraint.

mono_decreasing_list : tuple of str, default=()

Tuple of feature names subject to a monotonic decreasing constraint.

random_state : int, default=0

The random seed.
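
A minimal construction sketch using the parameters above; the feature names ("x1", "x2", "x3"), feature types, and hyperparameter values are purely illustrative assumptions, not recommended settings:

from piml.models import XGB1Regressor

# Illustrative configuration; names, types, and constraint lists are hypothetical.
model = XGB1Regressor(
    n_estimators=200,
    eta=0.1,
    refit_method="glm",
    feature_names=["x1", "x2", "x3"],
    feature_types=["numerical", "numerical", "categorical"],
    mono_increasing_list=("x1",),   # constrain the effect of x1 to be monotonic increasing
    mono_decreasing_list=("x2",),   # constrain the effect of x2 to be monotonic decreasing
    random_state=0,
)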

Attributes:
n_features_in_ : int

The number of input features.

is_fitted_ : bool

Indicator of whether the model is fitted.

feature_names_ : list of str

The feature name list of all input features.

feature_types_ : list of str

The feature type list of all input features.

min_value_ : np.ndarray of shape (n_features, )

The min values of input features (obtained from training data).

max_value_ : np.ndarray of shape (n_features, )

The max values of input features (obtained from training data).

split_info_ : dict

The split points per feature.

n_splits_raw_ : int

The total number of splits in the raw XGB model.

n_splits_ : int

The total number of splits.

xgb_params_ : dict

The parameter dict of the XGB model.

effects_ : dict

The main effects of the final functional ANOVA model.

intercept_ : float

The overall intercept of the final functional ANOVA model.
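
A short end-to-end sketch on synthetic data showing how the fitted attributes above can be inspected; the data and printed values are illustrative only:

import numpy as np
from piml.models import XGB1Regressor

# Synthetic regression data (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 3))
y = 2.0 * X[:, 0] - X[:, 1] ** 2 + rng.normal(scale=0.1, size=1000)

model = XGB1Regressor(n_estimators=100, eta=0.3, random_state=0)
model.fit(X, y)

print(model.is_fitted_)      # True once fit has completed
print(model.n_features_in_)  # 3
print(model.n_splits_raw_, model.n_splits_)  # splits in the raw XGB model vs. total splits
print(model.intercept_)      # overall intercept of the functional ANOVA representation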

Methods

fit(X, y[, sample_weight])

Fit the model.

get_metadata_routing()

Get metadata routing of this object.

get_params([deep])

Get parameters for this estimator.

get_raw_output(X)

Returns numpy array of raw predictions.

parse_model()

Interpret the model using functional ANOVA.

partial_dependence(fidx, X)

Partial dependence of given effect index.

predict(X)

Returns numpy array of predicted values.

score(X, y[, sample_weight])

Return the coefficient of determination of the prediction.

set_params(**params)

Set the parameters of this estimator.

set_score_request(*[, sample_weight])

Request metadata passed to the score method.

fit(X, y, sample_weight=None)

Fit the model.

Parameters:
X : np.ndarray of shape (n_samples, n_features)

Data features.

y : np.ndarray of shape (n_samples, )

Target response.

sample_weight : np.ndarray of shape (n_samples, ), default=None

Sample weight.

Returns:
self : object

Fitted estimator.
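
A sketch of fit with per-sample weights; the data and weight vector are synthetic and only for illustration:

import numpy as np
from piml.models import XGB1Regressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.05, size=500)
w = rng.uniform(0.5, 1.5, size=500)   # illustrative per-sample weights, shape (n_samples, )

model = XGB1Regressor(n_estimators=50, random_state=0)
model = model.fit(X, y, sample_weight=w)   # fit returns the fitted estimator itself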

get_metadata_routing()

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:
routing : MetadataRequest

A MetadataRequest encapsulating routing information.

get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.

get_raw_output(X)

Returns numpy array of raw predictions.

Parameters:
X : np.ndarray of shape (n_samples, n_features)

Data features.

Returns:
pred : np.ndarray of shape (n_samples, )

Numpy array of raw predictions.
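
A brief sketch contrasting get_raw_output with predict on a fitted model; it uses synthetic data and only checks the documented output shapes, without asserting any particular relationship between the two outputs:

import numpy as np
from piml.models import XGB1Regressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))
y = X[:, 0] - X[:, 1] + rng.normal(scale=0.05, size=200)

model = XGB1Regressor(n_estimators=50, random_state=0).fit(X, y)

raw = model.get_raw_output(X)   # raw predictions, shape (n_samples, )
pred = model.predict(X)         # predicted values, shape (n_samples, )
print(raw.shape, pred.shape)    # (200,) (200,)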

parse_model()

Interpret the model using functional ANOVA.

Returns:
An instance of FANOVAInterpreter

The interpretation results.
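
A minimal sketch of parse_model; per the documentation it returns a FANOVAInterpreter holding the interpretation results, and the sketch only obtains that object without assuming anything further about its API:

import numpy as np
from piml.models import XGB1Regressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] + rng.normal(scale=0.05, size=300)

model = XGB1Regressor(n_estimators=100, random_state=0).fit(X, y)
interpreter = model.parse_model()   # functional ANOVA interpretation of the fitted model
print(type(interpreter).__name__)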

partial_dependence(fidx, X)

Partial dependence of given effect index.

Parameters:
fidx : tuple of int

The index tuple of the main effect, e.g., (0,) for the first feature.

X : np.ndarray of shape (n_samples, n_features)

Data features.

Returns:
pred : np.ndarray of shape (n_samples, )

Numpy array of partial dependence values.
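
A sketch of partial_dependence for the first feature's main effect; fidx is the documented tuple of int, here (0,), and the data is synthetic:

import numpy as np
from piml.models import XGB1Regressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 2))
y = 2.0 * X[:, 0] + np.cos(3 * X[:, 1]) + rng.normal(scale=0.05, size=300)

model = XGB1Regressor(n_estimators=100, random_state=0).fit(X, y)
pd_x0 = model.partial_dependence((0,), X)   # main-effect partial dependence of feature 0
print(pd_x0.shape)                          # (300,)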

predict(X)

Returns numpy array of predicted values.

Parameters:
X : np.ndarray of shape (n_samples, n_features)

Data features.

Returns:
pred : np.ndarray of shape (n_samples, )

Numpy array of predicted values.

score(X, y, sample_weight=None)

Return the coefficient of determination of the prediction.

The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0.

Parameters:
X : array-like of shape (n_samples, n_features)

Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.

y : array-like of shape (n_samples,) or (n_samples, n_outputs)

True values for X.

sample_weight : array-like of shape (n_samples,), default=None

Sample weights.

Returns:
score : float

\(R^2\) of self.predict(X) w.r.t. y.

Notes

The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
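
A sketch verifying that score matches the documented \(R^2 = 1 - \frac{u}{v}\) definition, assuming scikit-learn's California Housing loader (the same dataset used in the gallery example linked below):

import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from piml.models import XGB1Regressor

X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGB1Regressor(n_estimators=200, eta=0.1, random_state=0).fit(X_train, y_train)

pred = model.predict(X_test)
u = ((y_test - pred) ** 2).sum()            # residual sum of squares
v = ((y_test - y_test.mean()) ** 2).sum()   # total sum of squares
print(1 - u / v)                            # equals model.score(X_test, y_test)
print(model.score(X_test, y_test))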

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.
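
A sketch of the get_params / set_params round trip using parameters documented above (n_estimators, eta):

from piml.models import XGB1Regressor

model = XGB1Regressor()
print(model.get_params()["n_estimators"])     # 100 (the constructor default)

model.set_params(n_estimators=300, eta=0.05)  # update hyperparameters before (re)fitting
print(model.get_params()["n_estimators"])     # 300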

set_score_request(*, sample_weight: Union[bool, None, str] = '$UNCHANGED$') → XGB1Regressor

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for sample_weight parameter in score.

Returns:
self : object

The updated object.
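
A sketch of requesting sample_weight routing for score; it assumes metadata routing is enabled via sklearn.set_config, and, as noted above, it only takes effect when the estimator is used inside a meta-estimator such as a Pipeline:

from sklearn import set_config
from piml.models import XGB1Regressor

set_config(enable_metadata_routing=True)   # routing is opt-in (scikit-learn >= 1.3)

model = XGB1Regressor()
model = model.set_score_request(sample_weight=True)   # ask meta-estimators to pass sample_weight to score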

Examples using piml.models.XGB1Regressor

XGB-1 Regression (California Housing)