The approach encodes categorical data as multiple numeric variables using a word embedding technique. It was originally intended as a way to take a large number of word identifiers and represent them in a smaller number of dimensions. Good references on this method are Guo and Berkhahn (2016) and Chapter 6 of Chollet and Allaire (2018).

The methodology first translates the C factor levels into a set of integer values, then maps them to D new numeric columns whose values are randomly initialized. These columns are connected in a neural network to an intermediate layer of hidden units. Other predictors can optionally be added to the network in the usual way (via the predictors argument) and also link to the hidden layer. This implementation uses a single hidden layer with ReLU activations. Finally, an output layer is used with either a linear activation (for numeric outcomes) or softmax (for classification).
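
To make the architecture concrete, here is a minimal keras sketch of this kind of network (an illustration of the general idea, not the package's exact internals), assuming C = 29 levels, D = 5 embedding columns, three extra predictors, and 10 hidden units:

library(keras)

# Hypothetical dimensions for illustration
C <- 29  # number of factor levels
D <- 5   # number of embedding columns

# The factor enters as an integer index...
level_input <- layer_input(shape = 1, name = "levels")
# ...and the other (preprocessed) predictors as a numeric matrix
pred_input  <- layer_input(shape = 3, name = "predictors")

# Each integer index selects a row of a (C + 1) x D weight matrix;
# the extra row is reserved for factor levels not seen in training
embedded <- level_input %>%
  layer_embedding(input_dim = C + 1, output_dim = D) %>%
  layer_flatten()

# The embedding columns and the other predictors both feed the hidden layer
hidden <- layer_concatenate(list(embedded, pred_input)) %>%
  layer_dense(units = 10, activation = "relu")

# Linear activation for a numeric outcome (softmax for classification)
output <- hidden %>% layer_dense(units = 1, activation = "linear")

model <- keras_model(inputs = list(level_input, pred_input), outputs = output)
model %>% compile(loss = "mse", optimizer = "adam")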

To translate the fitted model into a set of embeddings, the coefficients of the embedding layer are extracted and used to represent the original factor levels.
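
Continuing the sketch above, after fitting, the embedding layer's weight matrix has one row per factor level (plus the reserved row for novel levels), and each row becomes that level's new numeric features:

# Extract the weight matrix from the hypothetical model above;
# get_weights() returns the embedding weights first here because the
# embedding layer is the first layer with trainable weights
embeddings <- get_weights(model)[[1]]
dim(embeddings)  # (C + 1) rows, D columns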

As an example, we use the Ames housing data, where the sale price of houses is being predicted. One predictor, the neighborhood, has the most factor levels of any predictor in the data.

library(tidymodels)
data(ames)
length(levels(ames$Neighborhood))
## [1] 29

The distribution of houses across the neighborhoods is not uniform:

ggplot(ames, aes(x = Neighborhood)) + 
  geom_bar() + 
  coord_flip() + 
  xlab("") + 
  theme_bw()
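
For a numeric view of the same imbalance (a quick check, not part of the original analysis), the counts can be tabulated directly:

# Houses per neighborhood, largest first
ames %>% 
  count(Neighborhood, sort = TRUE) %>% 
  head()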

For plotting later, we calculate the simple means per neighborhood:

means <- 
  ames %>%
  group_by(Neighborhood) %>%
  summarise(
    mean = mean(log10(Sale_Price)),
    n = length(Sale_Price),
    lon = median(Longitude),
    lat = median(Latitude)
  )
## `summarise()` ungrouping output (override with `.groups` argument)

We’ll fit a model with 10 hidden units and 5 encoding columns:

library(embed)
tf_embed <- 
  recipe(Sale_Price ~ ., data = ames) %>%
  step_log(Sale_Price, base = 10) %>%
  # Add some other predictors that can be used by the network. We
  # preprocess them first
  step_YeoJohnson(Lot_Area, Full_Bath, Gr_Liv_Area)  %>%
  step_range(Lot_Area, Full_Bath, Gr_Liv_Area)  %>%
  step_embed(
    Neighborhood, 
    outcome = vars(Sale_Price),
    predictors = vars(Lot_Area, Full_Bath, Gr_Liv_Area),
    num_terms = 5, 
    hidden_units = 10, 
    options = embed_control(epochs = 75, validation_split = 0.2)
  ) %>% 
  prep(training = ames)
## Set session seed to 8119 (disabled GPU, CPU parallelism)
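
Once trained, the recipe can be baked to inspect the encodings; the new embedding columns (named like Neighborhood_embed_1) replace the original factor (a quick check, not shown in the original analysis):

bake(tf_embed, new_data = ames) %>% 
  dplyr::select(dplyr::contains("embed"))

The network's fit history, stored inside the embedding step, can be plotted by epoch:
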
theme_set(theme_bw() + theme(legend.position = "top"))

# step_embed() is the fourth step in the recipe
tf_embed$steps[[4]]$history %>%
  filter(epochs > 1) %>%
  ggplot(aes(x = epochs, y = loss, col = type)) + 
  geom_line() + 
  scale_y_log10() 

The embeddings are obtained using the tidy() method. Note the extra ..new row in the results; step_embed() reserves an embedding for factor levels not seen in the training data:

hood_coef <- 
  tidy(tf_embed, number = 4) %>%
  dplyr::select(-terms, -id)  %>%
  dplyr::rename(Neighborhood = level) %>%
  # Shorten the column names
  rename_with(~ gsub("Neighborhood_", "", .x, fixed = TRUE), contains("emb"))
hood_coef
## # A tibble: 30 x 6
##     embed_1  embed_2  embed_3  embed_4  embed_5 Neighborhood      
##       <dbl>    <dbl>    <dbl>    <dbl>    <dbl> <chr>             
##  1 -0.0484   0.0478   0.0108   0.0321   0.0406  ..new             
##  2 -0.0200   0.0300   0.00855  0.0154   0.0687  North_Ames        
##  3  0.0243   0.0273   0.0647  -0.0558   0.0286  College_Creek     
##  4 -0.00260 -0.0261   0.0155   0.0272  -0.00515 Old_Town          
##  5  0.0514  -0.00421 -0.0146  -0.0141   0.0225  Edwards           
##  6  0.0165  -0.0346   0.0883  -0.0607   0.0606  Somerset          
##  7  0.0432  -0.0231   0.117   -0.0236   0.0693  Northridge_Heights
##  8 -0.0268   0.0137   0.0447  -0.00947  0.0493  Gilbert           
##  9 -0.0131  -0.0264   0.0327  -0.0447   0.00940 Sawyer            
## 10 -0.0113  -0.00297  0.0555  -0.00558  0.0156  Northwest_Ames    
## # … with 20 more rows
hood_coef <- 
  hood_coef %>% 
  inner_join(means, by = "Neighborhood")
hood_coef
## # A tibble: 28 x 10
##     embed_1  embed_2  embed_3  embed_4  embed_5 Neighborhood  mean     n   lon
##       <dbl>    <dbl>    <dbl>    <dbl>    <dbl> <chr>        <dbl> <int> <dbl>
##  1 -0.0200   0.0300   0.00855  0.0154   0.0687  North_Ames    5.15   443 -93.6
##  2  0.0243   0.0273   0.0647  -0.0558   0.0286  College_Cre…  5.29   267 -93.7
##  3 -0.00260 -0.0261   0.0155   0.0272  -0.00515 Old_Town      5.07   239 -93.6
##  4  0.0514  -0.00421 -0.0146  -0.0141   0.0225  Edwards       5.09   194 -93.7
##  5  0.0165  -0.0346   0.0883  -0.0607   0.0606  Somerset      5.35   182 -93.6
##  6  0.0432  -0.0231   0.117   -0.0236   0.0693  Northridge_…  5.49   166 -93.7
##  7 -0.0268   0.0137   0.0447  -0.00947  0.0493  Gilbert       5.27   165 -93.6
##  8 -0.0131  -0.0264   0.0327  -0.0447   0.00940 Sawyer        5.13   151 -93.7
##  9 -0.0113  -0.00297  0.0555  -0.00558  0.0156  Northwest_A…  5.27   131 -93.6
## 10 -0.0301   0.0239   0.0742  -0.0493  -0.00919 Sawyer_West   5.25   125 -93.7
## # … with 18 more rows, and 1 more variable: lat <dbl>
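
Note that the inner join reduced the 30 embedding rows to 28: the ..new placeholder has no matching neighborhood mean, and a factor level with no houses in the data also drops out. A quick check (not part of the original analysis) lists the missing levels:

# Embedding rows with no matching row in the per-neighborhood means
setdiff(c("..new", levels(ames$Neighborhood)), hood_coef$Neighborhood)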

We can make a simple, interactive plot of the new features versus the outcome:

library(ggiraph)

tf_plot <- 
  hood_coef %>%
  dplyr::select(-lon, -lat) %>%
  pivot_longer(starts_with("embed"), names_to = "variable", values_to = "value") %>%
  # Clean up the embedding names and add a new variable as a hover-over/tooltip
  # aesthetic for the plot
  mutate(
    label = paste0(gsub("_", " ", Neighborhood), " (n=", n, ")"),
    variable = gsub("_", " ", variable)
  ) %>%
  ggplot(aes(x = value, y = mean)) + 
  geom_point_interactive(aes(size = sqrt(n), tooltip = label), alpha = .5) + 
  facet_wrap(~variable, scales = "free_x") + 
  theme_bw() + 
  theme(legend.position = "top") + 
  ylab("Mean (log scale)") + 
  xlab("Embedding")

# Convert the plot to a format that the html file can handle
girafe(ggobj = tf_plot)

However, the fitting process has induced some between-predictor correlations among the embedding features:

hood_coef %>% 
  dplyr::select(contains("emb")) %>% 
  cor() %>%
  round(2)
##         embed_1 embed_2 embed_3 embed_4 embed_5
## embed_1    1.00    0.05    0.41   -0.13    0.27
## embed_2    0.05    1.00    0.04    0.10   -0.07
## embed_3    0.41    0.04    1.00   -0.25    0.32
## embed_4   -0.13    0.10   -0.25    1.00   -0.13
## embed_5    0.27   -0.07    0.32   -0.13    1.00
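
If these correlations are a concern for a downstream model, one option (a sketch, not part of the original analysis) is to decorrelate the embedding features by appending step_pca() to the recipe:

# Hypothetical extension: rotate the Neighborhood_embed_* columns into
# uncorrelated principal components after the embedding step
tf_embed_pca <- 
  recipe(Sale_Price ~ ., data = ames) %>%
  step_log(Sale_Price, base = 10) %>%
  step_YeoJohnson(Lot_Area, Full_Bath, Gr_Liv_Area) %>%
  step_range(Lot_Area, Full_Bath, Gr_Liv_Area) %>%
  step_embed(
    Neighborhood, 
    outcome = vars(Sale_Price),
    predictors = vars(Lot_Area, Full_Bath, Gr_Liv_Area),
    num_terms = 5, 
    hidden_units = 10, 
    options = embed_control(epochs = 75, validation_split = 0.2)
  ) %>% 
  step_pca(contains("embed"), num_comp = 5) %>%
  prep(training = ames)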