Introduction

embed is a package that contains extra steps for the recipes package, used to embed predictors into one or more numeric columns. Most of the preprocessing methods are supervised.

These steps live in a separate package because their dependencies (rstanarm, lme4, and keras) are fairly heavy.

The steps for categorical predictors are:

• step_lencode_glm(), step_lencode_bayes(), and step_lencode_mixed() estimate the effect of each factor level on the outcome, and these estimates are used as the new encoding. The estimates are produced by a generalized linear model, fit either without pooling (via glm) or with partial pooling (stan_glm or lmer). Currently implemented for numeric and two-class outcomes.

• step_embed() uses keras::layer_embedding() to translate the original C factor levels into a set of D new variables (D < C). The model fitting routine optimizes which factor levels are mapped to each of the new variables as well as the corresponding regression coefficients (i.e., neural network weights) that will be used as the new encodings.

• step_woe() creates new variables based on weight of evidence encodings.

• step_feature_hash() can create indicator variables using feature hashing.
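As a sketch of how the categorical steps fit into a recipe, the snippet below uses step_lencode_glm(); the data frame `ames`, its numeric outcome `Sale_Price`, and the factor predictor `Neighborhood` are illustrative assumptions, not part of the package:

```r
library(recipes)
library(embed)

# Hypothetical data: `ames` with numeric outcome Sale_Price and a
# high-cardinality factor Neighborhood.
rec <- recipe(Sale_Price ~ ., data = ames) %>%
  # Replace Neighborhood with a per-level effect estimate from a GLM
  # (no pooling); the outcome must be named via vars().
  step_lencode_glm(Neighborhood, outcome = vars(Sale_Price))

# Estimate the encodings, then apply them to the training data.
rec_trained <- prep(rec)
encoded <- bake(rec_trained, new_data = NULL)
```

step_lencode_bayes() and step_lencode_mixed() drop in the same way, trading glm for stan_glm or lmer to get partial pooling.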

For numeric predictors:

• step_umap() uses a nonlinear transformation similar to t-SNE, but one that can also project new data into the learned embedding. Both supervised and unsupervised methods can be used.

• step_discretize_xgb() and step_discretize_cart() can make binned versions of numeric predictors using supervised tree-based models.
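A minimal sketch of the numeric steps, here using step_umap() in supervised mode; the data frame `dat` and its factor outcome `class` are assumptions for illustration:

```r
library(recipes)
library(embed)

# Hypothetical data: `dat` with several numeric predictors and a
# factor outcome `class`.
rec <- recipe(class ~ ., data = dat) %>%
  # Supervised UMAP: supplying the outcome lets the embedding use
  # class labels; num_comp sets the number of new columns.
  step_umap(all_numeric_predictors(), outcome = vars(class), num_comp = 2)

# New data can be projected with the same fitted transformation.
prep(rec) %>% bake(new_data = NULL)
```

step_discretize_xgb() and step_discretize_cart() are used the same way, replacing the numeric columns with binned versions learned from the outcome.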

Installation

To install the package:

install.packages("embed")

## for the development version:
# install.packages("devtools")
devtools::install_github("tidymodels/embed")


Contributing

This project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.