step_discretize_xgb
creates a specification of a recipe step that will
discretize numeric data (e.g. integers or doubles) into bins in a
supervised way using an XGBoost model.
step_discretize_xgb(
  recipe,
  ...,
  role = NA,
  trained = FALSE,
  outcome = NULL,
  sample_val = 0.2,
  learn_rate = 0.3,
  num_breaks = 10,
  tree_depth = 1,
  min_n = 5,
  rules = NULL,
  skip = FALSE,
  id = rand_id("discretize_xgb")
)

# S3 method for step_discretize_xgb
tidy(x, ...)
| Argument | Description |
|---|---|
| recipe | A recipe object. The step will be added to the sequence of operations for this recipe. |
| ... | One or more selector functions to choose which variables are affected by the step. See selections() for more details. |
| role | Defaults to NA. |
| trained | A logical to indicate if the quantities for preprocessing have been estimated. |
| outcome | A call to vars() (or a variable name as a string, as in the example below) to specify which variable is used as the outcome when training the XGBoost models. |
| sample_val | Share of data used for validation (with early stopping) of the learned splits (the rest is used for training). Defaults to 0.20. |
| learn_rate | The rate at which the boosting algorithm adapts from iteration to iteration. Corresponds to eta in the xgboost package. Defaults to 0.3. |
| num_breaks | The maximum number of discrete bins to bucket continuous features. Corresponds to max_bin in the xgboost package. Defaults to 10. |
| tree_depth | The maximum depth of the tree (i.e. number of splits). Corresponds to max_depth in the xgboost package. Defaults to 1. |
| min_n | The minimum number of instances needed to be in each node. Corresponds to min_child_weight in the xgboost package. Defaults to 5. |
| rules | The splitting rules of the best XGBoost tree to retain for each variable, learned when the recipe is prepped. |
| skip | A logical. Should the step be skipped when the recipe is baked by bake()? While all operations are baked when prep() is run, some operations may not be able to be conducted on new data (e.g. processing the outcome variable(s)). Care should be taken when using skip = TRUE, as it may affect the computations for subsequent operations. |
| id | A character string that is unique to this step to identify it. |
| x | A step_discretize_xgb object. |
An updated version of recipe with the new step added to the sequence of existing steps (if any).
step_discretize_xgb()
creates non-uniform bins from numeric variables by using information
about the outcome variable and applying an XGBoost model. It is advised
to impute missing values before this step. This step is particularly
useful with linear models, because the non-uniform bins make it easier
to capture non-linear patterns in the data.
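As a hedged illustration of that pairing (assuming the credit_data_tr training split constructed in the Examples below; the object names rec and wf are placeholders, not part of embed), the binned recipe can feed a plain logistic regression via a workflow:

    library(recipes)
    library(embed)      # step_discretize_xgb
    library(parsnip)    # logistic_reg
    library(workflows)  # workflow

    # Bin all numeric predictors against the outcome, then fit a
    # linear (logistic) model on the resulting factor bins
    rec <- recipe(Status ~ ., data = credit_data_tr) %>%
      step_medianimpute(all_numeric()) %>%
      step_discretize_xgb(all_numeric(), outcome = "Status")

    wf <- workflow() %>%
      add_recipe(rec) %>%
      add_model(logistic_reg())

    # fit(wf, data = credit_data_tr) estimates the bins and the model
    # together; glm, the default engine, handles the factor bins directly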
The best set of buckets for each variable is chosen using an internal early stopping scheme implemented in the xgboost package, which makes this discretization method less prone to overfitting.
The default values of the underlying xgboost parameters produce good
and reasonably complex results. However, if one wishes to tune them, the
recommended path is to first change the value of
num_breaks
to e.g. 20 or 30. If that does not give satisfactory results,
one could experiment with modifying the tree_depth
or min_n
parameters.
Note that it is not recommended to tune learn_rate
simultaneously with
other parameters.
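A minimal sketch of that tuning path, assuming the tune package is available and that step arguments can be marked with tune() placeholders as in other tidymodels steps (the object name xgb_tune_rec is illustrative):

    library(recipes)
    library(embed)
    library(tune)

    # Mark num_breaks for tuning first; tree_depth and min_n could be
    # marked the same way in a later pass if needed
    xgb_tune_rec <- recipe(Status ~ ., data = credit_data_tr) %>%
      step_medianimpute(all_numeric()) %>%
      step_discretize_xgb(
        all_numeric(),
        outcome = "Status",
        num_breaks = tune()
      )

    # This recipe can then be combined with a model in a workflow and
    # passed to tune_grid() with a resampling scheme to pick num_breaks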
This step requires the xgboost package. If not installed, the step will stop with a note about installing the package.
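A generic guard along these lines (standard R idiom, not code from embed) can be used to install xgboost up front:

    # Install xgboost ahead of time if it is missing
    if (!requireNamespace("xgboost", quietly = TRUE)) {
      install.packages("xgboost")
    }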
Note that the original data will be replaced with the new bins.
library(modeldata)
data(credit_data)

library(rsample)
split <- initial_split(credit_data, strata = "Status")
credit_data_tr <- training(split)
credit_data_te <- testing(split)

xgb_rec <- recipe(Status ~ ., data = credit_data_tr) %>%
  step_medianimpute(all_numeric()) %>%
  step_discretize_xgb(all_numeric(), outcome = "Status")

xgb_rec <- prep(xgb_rec, training = credit_data_tr)
#> Warning: More than 20 unique training set values are required. Predictors 'Time' were not processed; their original values will be used.
#> Warning: first element used of 'length.out' argument
#> Warning: `step_discretize_xgb()` failed to create a tree with error for predictor 'Debt', which was not binned. The error: argument must be coercible to non-negative integer

bake(xgb_rec, credit_data_te, Price)
#> # A tibble: 1,113 x 1
#>    Price
#>    <fct>
#>  1 [1055, Inf]
#>  2 [1055, Inf]
#>  3 [-Inf,1055)
#>  4 [1055, Inf]
#>  5 [1055, Inf]
#>  6 [1055, Inf]
#>  7 [1055, Inf]
#>  8 [1055, Inf]
#>  9 [1055, Inf]
#> 10 [-Inf,1055)
#> # … with 1,103 more rows
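To inspect the splitting rules learned for each binned variable, the tidy() method declared in the usage above can be called on the prepped recipe; the number 2 here refers to the position of step_discretize_xgb in this particular recipe, and the exact columns returned may vary by embed version:

    # Return the learned split points for the discretization step
    tidy(xgb_rec, number = 2)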