Path: blob/master/deep_learning/tabular/deep_learning_tabular.ipynb
Deep Learning for Tabular Data
While deep learning's achievements are often highlighted in areas like computer vision and natural language processing, a lesser-discussed yet potent application is tabular data.
A key technique to maximize deep learning's potential with tabular data involves using embeddings for categorical variables [4]. This means representing categories in a lower-dimensional numeric space, capturing intricate relationships between them. For instance, this could reveal geographic connections between high-cardinality categorical features like zip codes, without explicit guidance. Even for features with a natural numeric ordering, such as day of the week, it's still worth exploring the potential advantages of treating them as categorical features and utilizing embeddings.
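To make the idea concrete, here is a minimal PyTorch sketch (not the notebook's actual code): a hypothetical zip code feature with 10,000 label-encoded categories is mapped into a 16-dimensional embedding space. Both the cardinality and the embedding dimension here are made-up values for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical example: 10,000 distinct zip codes, each mapped to a 16-dimensional vector.
# Each zip code is assumed to have been label-encoded into an integer id in [0, 10000).
zip_code_embedding = nn.Embedding(num_embeddings=10_000, embedding_dim=16)

# A mini-batch of 4 label-encoded zip codes.
zip_code_ids = torch.tensor([42, 7, 9981, 42])
embedded = zip_code_embedding(zip_code_ids)
print(embedded.shape)  # torch.Size([4, 16])
```

These embedding weights are learned jointly with the rest of the network during training, which is what allows related categories to end up close to each other in the embedding space.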
Furthermore, embeddings offer benefits beyond their initial use. Once trained, these embeddings can be employed in other contexts. For example, they can serve as features for tree-based models, granting them the enriched knowledge gleaned from deep learning. This cross-application of embeddings underscores their versatility and their ability to enhance various modeling techniques.
In this article, we'll walk through the bare-minimum steps for defining our own deep learning model for tabular data and training it with the Hugging Face Trainer.
Data Preprocessing
We'll be using a downsampled criteo dataset, which originated from a Kaggle competition [2]. After the competition ended, the original data files became unavailable on the platform, so we turned to an alternative source for downloading a similar dataset [1]. Each row corresponds to a display ad served by Criteo. Positive (clicked) and negative (non-clicked) examples have both been subsampled at different rates in order to reduce the dataset size. Fields in this dataset include:
Label: Target variable that indicates if an ad was clicked (1) or not (0).
I1-I13: A total of 13 columns of integer features (mostly count features).
C1-C26: A total of 26 columns of categorical features. The values of these features have been hashed onto 32 bits for anonymization purposes.
Unfortunately, the meanings of these features aren't disclosed.
Note that there are many ways to implement the data preprocessing step; the baseline approach we'll perform here (sketched in code after this list) is to:
Encode categorical columns as distinct numerical ids.
Standardize/Scale numerical columns.
Given the imbalanced dataset, we perform random downsampling on the negative class for our training set, while keeping the test set at its original, imbalanced distribution.
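A minimal sketch of these three steps is shown below, assuming the criteo columns described earlier have already been loaded into pandas DataFrames `df_train` and `df_test`; the exact implementation in the notebook may differ.

```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder, StandardScaler

# Column names mirroring the criteo schema described above.
label_col = "Label"
num_cols = [f"I{i}" for i in range(1, 14)]
cat_cols = [f"C{i}" for i in range(1, 27)]

def preprocess(df_train: pd.DataFrame, df_test: pd.DataFrame):
    # Encode categorical columns as distinct numerical ids.
    # unknown_value=-1 reserves an id for categories unseen during fitting.
    encoder = OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1)
    df_train[cat_cols] = encoder.fit_transform(df_train[cat_cols].astype(str))
    df_test[cat_cols] = encoder.transform(df_test[cat_cols].astype(str))

    # Standardize/scale numerical columns, fitting statistics on the training set only.
    scaler = StandardScaler()
    df_train[num_cols] = scaler.fit_transform(df_train[num_cols])
    df_test[num_cols] = scaler.transform(df_test[num_cols])

    # Randomly downsample the negative class in the training set only.
    pos = df_train[df_train[label_col] == 1]
    neg = df_train[df_train[label_col] == 0].sample(n=len(pos), random_state=1234)
    df_train = pd.concat([pos, neg]).sample(frac=1.0, random_state=1234)
    return df_train, df_test
```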
We'll specify a config mapping for the tabular features that we'll be using across both our batch collate function and our model. This config mapping has the features we wish to leverage as keys, with values/enums specifying whether each field is of numerical or categorical type. This informs the model of the embedding table required for each categorical field, as well as how many numerical fields there are when initializing the dense/feed forward layers. A sketch of such a mapping is shown below.
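For illustration, the mapping might look like the following, using a hypothetical `FieldType` enum and the criteo column names; the notebook's actual enum and variable names may differ.

```python
from enum import Enum

class FieldType(str, Enum):
    numerical = "numerical"
    categorical = "categorical"

# Feature name -> field type. The categorical entries (together with each
# feature's cardinality) determine the embedding tables, and the count of
# numerical entries determines the input size of the feed forward layers.
tabular_features = {
    **{f"I{i}": FieldType.numerical for i in range(1, 14)},
    **{f"C{i}": FieldType.categorical for i in range(1, 27)},
}
```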
Model
Our model architecture mainly involves converting categorical features into low dimensional embeddings; these embedding outputs are then concatenated with the rest of the dense/numerical features before being fed into subsequent feed forward layers.
The next code block defines a config and model class following Hugging Face transformers' class structure [3]. This allows us to leverage its Trainer class for training and evaluating our model instead of writing custom training loops. A condensed sketch of what this can look like follows.
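In the sketch below, the `TabularConfig`/`TabularModel` names, the hyperparameters, and the `num_features`/`cat_features` argument names (which would need to match the keys produced by the batch collate function) are illustrative assumptions rather than the notebook's exact implementation.

```python
import torch
import torch.nn as nn
from transformers import PretrainedConfig, PreTrainedModel

class TabularConfig(PretrainedConfig):
    model_type = "tabular"

    def __init__(self, cat_cardinalities=None, num_numerical=13,
                 embedding_dim=16, hidden_dim=128, **kwargs):
        super().__init__(**kwargs)
        # number of distinct ids per categorical feature
        self.cat_cardinalities = cat_cardinalities or []
        self.num_numerical = num_numerical
        self.embedding_dim = embedding_dim
        self.hidden_dim = hidden_dim

class TabularModel(PreTrainedModel):
    config_class = TabularConfig

    def __init__(self, config):
        super().__init__(config)
        # one embedding table per categorical feature
        self.embeddings = nn.ModuleList(
            nn.Embedding(cardinality, config.embedding_dim)
            for cardinality in config.cat_cardinalities
        )
        input_dim = config.num_numerical + config.embedding_dim * len(self.embeddings)
        self.mlp = nn.Sequential(
            nn.Linear(input_dim, config.hidden_dim),
            nn.ReLU(),
            nn.Linear(config.hidden_dim, 1),
        )

    def forward(self, num_features, cat_features, labels=None):
        # cat_features: [batch, num_categorical] of label-encoded ids
        embedded = [emb(cat_features[:, i]) for i, emb in enumerate(self.embeddings)]
        hidden = torch.cat([num_features] + embedded, dim=-1)
        logits = self.mlp(hidden).squeeze(-1)
        if labels is not None:
            loss = nn.functional.binary_cross_entropy_with_logits(logits, labels.float())
            return {"loss": loss, "logits": logits}
        return {"logits": logits}
```

Returning a dictionary containing `loss` when labels are provided is what lets the Hugging Face Trainer drive the optimization without a custom training loop.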
The rest of the code defines boilerplate for leveraging the Hugging Face transformers Trainer, as well as a compute_metrics function for calculating standard binary classification metrics.
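As an example, a `compute_metrics` function for this binary classification setup might look like the following; the specific metrics reported here are assumptions rather than the notebook's exact choices.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, log_loss, precision_score, recall_score

def compute_metrics(eval_pred):
    # eval_pred is an EvalPrediction with .predictions (logits) and .label_ids
    logits = np.squeeze(eval_pred.predictions)
    labels = eval_pred.label_ids
    probs = 1.0 / (1.0 + np.exp(-logits))
    preds = (probs >= 0.5).astype(int)
    return {
        "auc": roc_auc_score(labels, probs),
        "log_loss": log_loss(labels, probs),
        "precision": precision_score(labels, preds),
        "recall": recall_score(labels, preds),
    }
```

This function is then passed to `Trainer(..., compute_metrics=compute_metrics)` so that it is invoked during evaluation.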
End Notes
In this post, we walked through a baseline workflow for training deep neural networks on tabular datasets using PyTorch. Many industrial works have reported success applying deep neural networks as part of their core recommendation stack, e.g. Youtube Recommendation [5] or Airbnb Search [6] [7]. Apart from making the model bigger/deeper to improve performance, we'll briefly touch upon some of their key learnings to conclude this article.
Heterogeneous Signals
Compared to matrix factorization based algorithms in collaborative filtering, it's easier to add a diverse set of signals into the model.
For instance, in the context of Youtube recommendation:
Recommendation systems particularly benefit from specialized features that capture historical behavior, such as the user's previous interactions with the item: how many videos has the user watched from a specific channel? How long since the user last watched a video on a particular topic? Apart from hand-crafted numerical features, we can also include the user's watch or search history as a variable-length sequence and have it mapped into a dense embedding representation.
In a retrieval + ranking staged system, candidate generation information can be propagated into the ranking phase as features, e.g. which sources nominated a candidate and the scores they assigned.
Embeddings for categorical variables can be shared, e.g. a single video id embedding can be leveraged across various features (impression video id, last video id watched by the user, seed video id for the recommendation).
While popular tree based models are invariant to the scaling of individual features, neural networks are quite sensitive to it, so normalizing continuous features is a must. Normalization can be done via min/max scaling, log transformation, or standard normalization (a small sketch follows this list).
Recommendation systems often exhibit some form of bias towards the past, as they are trained using prior data. For Youtube, adding a video's age on the platform as a feature allows the model to represent its time dependent behavior.
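For illustration, here are the three normalization options mentioned above applied to a made-up skewed count feature:

```python
import numpy as np

# A hypothetical skewed count feature (e.g. number of videos watched).
x = np.array([0.0, 1.0, 3.0, 10.0, 250.0])

# Min/max scaling to [0, 1].
min_max = (x - x.min()) / (x.max() - x.min())

# Log transformation; log1p handles zero counts gracefully.
log_scaled = np.log1p(x)

# Standard normalization (zero mean, unit variance).
standardized = (x - x.mean()) / x.std()
```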
Similarly, in the context of Airbnb search:
Domain knowledge proves to be valuable in feature normalization. e.g. when dealing with a geo location represented by latitude and longitude, instead of using the raw coordinates, we can calculate the offset from the center of the map displayed to the user. This allows the model to learn distance based global properties rather than the specifics of individual geographies. For learning local geography, a new categorical feature is created by taking the city specified in the query and the level 12 S2 cell of a listing. A hashing function then maps these two values (city and S2 cell) into an integer. For example, given the query "San Francisco" and a listing near the Embarcadero (S2 cell 539058204), hashing {"San Francisco", 539058204} -> 71829521 creates this categorical feature (a toy version of this hashing step is sketched after this list).
Position bias is also a notable topic in the literature. This bias emerges when historical logs are used for training subsequent models. Introducing position as a feature while regularizing it with dropout was proposed as a strategy for mitigating this bias.
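To make the hashing idea concrete, here is a toy sketch of mapping a (city, S2 cell) pair to a single categorical id; the hash function, bucket count, and resulting id are illustrative and do not reproduce Airbnb's actual mapping.

```python
import hashlib

def hash_city_s2(city: str, s2_cell_id: int, num_buckets: int = 1_000_000) -> int:
    """Toy illustration: map a (city, S2 cell) pair to a single categorical id."""
    key = f"{city}|{s2_cell_id}".encode("utf-8")
    # md5 is used here only as a stable, platform-independent hash.
    return int(hashlib.md5(key).hexdigest(), 16) % num_buckets

# e.g. the query "San Francisco" with a listing in S2 cell 539058204
feature_id = hash_city_s2("San Francisco", 539058204)
```

The resulting integer id can then be treated like any other categorical feature and fed through its own embedding table.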
Reference
[1] Criteo 1TB Click Logs dataset
[2] Kaggle Competition - Display Advertising Challenge
[3] Transformers Doc - Sharing custom models
[4] Blog: An Introduction to Deep Learning for Tabular Data
[5] Paul Covington, Jay Adams, Emre Sargin - Deep Neural Networks for YouTube Recommendations (2016)
[6] Malay Haldar, Mustafa Abdool, Prashant Ramanathan, Tao Xu, Shulin Yang, Huizhong Duan, Qing Zhang, Nick Barrow-Williams, Bradley C. Turnbull, Brendan M. Collins, Thomas Legrand - Applying Deep Learning To Airbnb Search (2018)
[7] Malay Haldar, Mustafa Abdool, Prashant Ramanathan, Tyler Sax, Lanbo Zhang, Aamir Mansawala, Shulin Yang, Bradley Turnbull, Junshuo Liao - Improving Deep Learning For Airbnb Search (2020)