step_feature_hash
creates a specification of a recipe step that will
convert nominal data (e.g., characters or factors) into one or more numeric
binary columns using the levels of the original data.
step_feature_hash(
  recipe,
  ...,
  role = "predictor",
  trained = FALSE,
  num_hash = 2^6,
  preserve = FALSE,
  columns = NULL,
  skip = FALSE,
  id = rand_id("feature_hash")
)

# S3 method for step_feature_hash
tidy(x, ...)
recipe | A recipe object. The step will be added to the sequence of operations for this recipe.
---|---
... | One or more selector functions to choose which factor variables will be used to create the dummy variables. See recipes::selections() for more details.
role | For model terms created by this step, what analysis role should they be assigned? By default, the function assumes that the binary dummy variable columns created from the original variables will be used as predictors in a model.
trained | A logical to indicate if the quantities for preprocessing have been estimated.
num_hash | The number of resulting dummy variable columns.
preserve | A single logical; should the selected column(s) be retained (in addition to the new dummy variables)?
columns | A character vector for the selected columns. This is NULL until the step is trained by prep().
skip | A logical. Should the step be skipped when the recipe is baked by bake()? Care should be taken when using skip = TRUE, as it may affect the computations of subsequent operations.
id | A character string that is unique to this step to identify it.
x | A step_feature_hash object (for the tidy method).
An updated version of recipe with the new step added to the sequence of
existing steps (if any). For the tidy method, a tibble with a terms column
(the selectors or original variables selected).
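As a rough usage sketch (assuming a recipe object named rec that already contains this step as, say, its first operation), the tidy method can be called as:

# Hypothetical: rec is a recipe whose first step is step_feature_hash().
# The result is a tibble with a terms column (here, the selected variable).
tidy(rec, number = 1)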
step_feature_hash()
will create a set of binary dummy variables
from a factor or character variable. The values themselves are hashed to
determine which of the dummy variable columns they are assigned to (as
opposed to each level having a specific, pre-determined column that it maps
to).
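As a rough illustration of the idea only (this is a toy hash, not the hashing function that the step itself uses), the sketch below shows how a level's hash, taken modulo num_hash, picks the indicator column that receives the one:

# Toy hash for illustration: sum of the character codes of the level.
toy_hash <- function(x) sum(utf8ToInt(x))

num_hash <- 2^4
lvls <- c("new york", "berkeley", "oakland")

# Each level is assigned to column (hash mod num_hash) + 1. Collisions are
# possible, so two levels can end up in the same column.
ind <- matrix(0L, nrow = length(lvls), ncol = num_hash,
              dimnames = list(lvls, NULL))
for (i in seq_along(lvls)) {
  ind[i, (toy_hash(lvls[i]) %% num_hash) + 1] <- 1L
}
ind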
Since this method does not rely on a pre-determined assignment of levels to columns, new factor levels can be added to the selected columns without issue. Missing values result in missing values for all of the hashed columns.
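As a sketch of that behavior (assuming the prepped recipe rec from the example at the end of this page), baking new data that contains an unseen level still yields the full set of hash columns, and a missing value produces missing values in every hash column:

# Hypothetical new data: one location never seen during prep() and one NA.
new_locations <- data.frame(
  age = c(30, 40),
  location = c("a location not in the training data", NA)
)
bake(rec, new_data = new_locations)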
Note that the assignment of levels to the hashing columns does not attempt
to spread the levels evenly across the columns. Multiple levels are likely
to map to the same hashed column (even with small data sets), and some
hashed columns are likely to contain all zeros. A zero-variance filter (via
recipes::step_zv()) is recommended for any recipe that uses hashed columns.
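For example, a minimal sketch of such a recipe (assuming the okc data from the example below and that the package providing step_feature_hash() is attached):

library(recipes)

rec <-
  recipe(Class ~ age + location, data = okc) %>%
  step_feature_hash(location, num_hash = 2^6) %>%
  # Remove any hash columns that turn out to be all zeros:
  step_zv(all_predictors()) %>%
  prep()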
Weinberger, K., A. Dasgupta, J. Langford, A. Smola, and J. Attenberg. 2009. "Feature Hashing for Large Scale Multitask Learning." In Proceedings of the 26th Annual International Conference on Machine Learning, 1113–20. ACM.

Kuhn, M., and K. Johnson. 2020. Feature Engineering and Selection: A Practical Approach for Predictive Models. CRC/Chapman Hall. https://bookdown.org/max/FES/encoding-predictors-with-many-categories.html
# \donttest{
data(okc, package = "modeldata")

if (is_tf_available()) {
  # This may take a while:
  rec <-
    recipe(Class ~ age + location, data = okc) %>%
    step_feature_hash(location, num_hash = 2^6, preserve = TRUE) %>%
    prep()

  # How many of the 135 locations ended up in each hash column?
  results <-
    juice(rec, starts_with("location")) %>%
    distinct()

  apply(results %>% select(-location), 2, sum) %>% table()
}
#> .
#>  0  1  2  3  4  5  6
#>  7 15 22 10  5  4  1
# }