• Stars: 112
• Rank: 310,530 (Top 7%)
• Language: Python
• Created: over 4 years ago
• Updated: over 3 years ago

Repository Details

Accelerating Inference for Recommendation Systems (WSDM'21)

DeepLight: Deep Lightweight Feature Interactions

Deploying end-to-end deep factorization machines in production raises a critical issue: prediction latency. To address it, we accelerate prediction by applying structural pruning to DeepFwFM, which yields 46x speed-ups on the Criteo dataset without sacrificing state-of-the-art performance.

Please refer to the arXiv paper if you are interested in the details.

@inproceedings{deeplight,
  title={DeepLight: Deep Lightweight Feature Interactions for Accelerating CTR Predictions in Ad Serving},
  author={Wei Deng and Junwei Pan and Tian Zhou and Deguang Kong and Aaron Flores and Guang Lin},
  booktitle={International Conference on Web Search and Data Mining (WSDM'21)},
  year={2021}
}

Environment

  1. Python 2.7

  2. PyTorch

  3. Pandas

  4. Sklearn

How to run the dense models

The folder already contains a tiny dataset for testing. You can run the following models:

LR: logistic regression

$ python main_all.py -use_fm 0 -use_fwfm 0 -use_deep 0 -use_lw 0 -use_logit 1 > ./logs/all_logistic_regression

FM: factorization machine

$ python main_all.py -use_fm 1 -use_fwfm 0 -use_deep 0 -use_lw 0 > ./logs/all_fm_vanilla

FwFM: field weighted factorization machine (see the interaction sketch after this list)

$ python main_all.py -use_fm 0 -use_fwfm 1 -use_deep 0 -use_lw 0 > ./logs/all_fwfm_vanilla

DeepFM: deep factorization machine

$ python main_all.py -use_fm 1 -use_fwfm 0 -use_deep 1 -use_lw 0 > ./logs/all_deepfm_vanilla

NFM: neural factorization machine

$ python NFM.py > ./logs/all_nfm

xDeepFM: extreme deep factorization machine

An implementation is available at https://github.com/Leavingseason/xDeepFM
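For intuition, what distinguishes FwFM from a vanilla FM is that each pairwise dot product of field embeddings is scaled by a learned field-pair weight from the matrix R. Below is a minimal PyTorch sketch of that interaction term; the names and shapes are illustrative and do not mirror the repo's actual classes.

import torch

def fwfm_interaction(emb, field_r):
    # emb:     (batch, n_fields, k) embedding vectors, one per field
    # field_r: (n_fields, n_fields) learned field-pair weight matrix R
    # returns: (batch,) sum over i < j of R[i, j] * <emb_i, emb_j>
    dots = torch.bmm(emb, emb.transpose(1, 2))               # all pairwise dot products
    mask = torch.triu(torch.ones_like(field_r), diagonal=1)  # keep each unordered pair once
    return (dots * (field_r * mask)).sum(dim=(1, 2))

batch, n_fields, k = 4, 39, 10
emb = torch.randn(batch, n_fields, k)
R = torch.randn(n_fields, n_fields)
print(fwfm_interaction(emb, R).shape)  # torch.Size([4])

The same matrix R is what the -prune_r flag in the pruning section below sparsifies.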

How to conduct structural pruning

The default setting gives 0.8123 AUC when applying 90% sparsity to the DNN component and the field matrix R, and 40% (90% x 0.444) sparsity to the embeddings.

$ python main_all.py -l2 6e-7 -n_epochs 10 -warm 2 -prune 1 -sparse 0.90 -prune_deep 1 -prune_fm 1 -prune_r 1 -use_fwlw 1 -emb_r 0.444 -emb_corr 1. > ./logs/deepfwfm_l2_6e_7_prune_all_and_r_warm_2_sparse_0.90_emb_r_0.444_emb_corr_1
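The flags above control which components get pruned and how aggressively; conceptually, pruning keeps only the largest-magnitude weights (the -warm flag suggests pruning only starts after a few warm-up epochs). The following is a rough magnitude-pruning sketch as an illustration of the idea, not the repo's exact schedule.

import torch

def magnitude_prune(weight, sparsity):
    # Zero out the smallest-magnitude entries of a weight tensor
    # (e.g., a DNN layer or the field matrix R) at the given sparsity.
    k = int(weight.numel() * sparsity)
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values  # k-th smallest |w|
    return (weight.abs() > threshold).float()              # 0/1 keep-mask

w = torch.randn(400, 400)
mask = magnitude_prune(w, 0.90)
w = w * mask                      # re-apply the mask after every optimizer step
print(1.0 - mask.mean().item())   # fraction pruned, about 0.90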

Preprocess full dataset

The Criteo dataset has 2-class labels, with 26 categorical features and 13 numerical features.

To download the full dataset, use this link: http://labs.criteo.com/2014/02/kaggle-display-advertising-challenge-dataset/

Unzip the raw data and save it in the ./data/large folder:

$ tar xvzf dac.tar.gz

Move to the data folder and process the raw data.

$ python preprocess.py
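preprocess.py handles the full pipeline; for intuition, Criteo-style preprocessing typically log-transforms the heavy-tailed numerical features and maps each categorical value to an integer index, sending rare values to a shared bucket. A toy sketch of that idea (illustrative only, not the repo's actual script):

import math
from collections import Counter

def log_transform(x):
    # common Criteo trick: compress heavy-tailed numerical counts
    x = int(x)
    return int(math.log(x) ** 2) if x > 2 else x

def build_feature_map(column_values, min_count=8):
    # index categorical values seen at least min_count times;
    # everything rarer shares the single <rare> index 0
    counts = Counter(column_values)
    fmap = {"<rare>": 0}
    for val, c in counts.items():
        if c >= min_count:
            fmap[val] = len(fmap)
    return fmap

fmap = build_feature_map(["a"] * 5 + ["b"] * 3 + ["c"], min_count=3)
print(fmap)                             # {'<rare>': 0, 'a': 1, 'b': 2}
print(fmap.get("zzz", fmap["<rare>"]))  # unseen value falls into the rare bucket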

When the dataset is ready, update the data paths in main_all.py as follows:

#result_dict = data_preprocess.read_data('./data/tiny_train_input.csv', './data/category_emb', criteo_num_feat_dim, feature_dim_start=0, dim=39)
#test_dict = data_preprocess.read_data('./data/tiny_test_input.csv', './data/category_emb', criteo_num_feat_dim, feature_dim_start=0, dim=39)
result_dict = data_preprocess.read_data('./data/large/train.csv', './data/large/criteo_feature_map', criteo_num_feat_dim, feature_dim_start=1, dim=39)
test_dict = data_preprocess.read_data('./data/large/valid.csv', './data/large/criteo_feature_map', criteo_num_feat_dim, feature_dim_start=1, dim=39)

How to analyze the prediction latency

Before you start, you need to download this repo: https://github.com/uestla/Sparse-Matrix

After the setup, change the include directory on line 23 of the cpp file to your local path.

$ cd latency
$ g++ criteo_latency.cpp -o criteo.out

To avoid setting up the environment, you can also test the compiled binary directly.

$ ./criteo.out
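The benchmark contrasts dense inference with inference over the pruned, sparse weights. If you only want a rough feel for the effect without compiling anything, a dense-vs-sparse matrix-vector comparison in Python (using scipy, which is not a repo dependency) shows the same trend; the measured ratio depends heavily on hardware and the sparsity level.

import time
import numpy as np
from scipy import sparse

n = 2000
dense = np.random.randn(n, n).astype(np.float32)
mask = np.random.rand(n, n) < 0.10          # keep ~10% of weights, cf. -sparse 0.90
sp = sparse.csr_matrix(dense * mask)
x = np.random.randn(n).astype(np.float32)

def bench(f, reps=200):
    t0 = time.perf_counter()
    for _ in range(reps):
        f()
    return (time.perf_counter() - t0) / reps

t_dense = bench(lambda: dense @ x)
t_sparse = bench(lambda: sp @ x)
print("dense %.2e s, sparse %.2e s, speed-up %.1fx"
      % (t_dense, t_sparse, t_dense / t_sparse))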

Acknowledgement

https://github.com/nzc/dnn_ctr

More Repositories

  1. Sentiment-Analysis-in-Event-Driven-Stock-Price-Movement-Prediction: Use NLP to predict stock price movement associated with news (Python, 781 stars)
  2. Contour-Stochastic-Gradient-Langevin-Dynamics: An elegant adaptive importance sampling algorithm for simulations of multi-modal distributions (NeurIPS'20) (Jupyter Notebook, 38 stars)
  3. Automated-Equity-Asset-Selection-and-Allocation: Optimal portfolio selection (Python, 33 stars)
  4. Bayesian-Sparse-Deep-Learning: Code for "An Adaptive Empirical Bayesian Method for Sparse Deep Learning" (NeurIPS'19) (Python, 18 stars)
  5. Parallel-Solvers-for-Linear-System: Parallel solver for large-scale sparse matrix computations (C++, 15 stars)
  6. More-than-Algorithms: Refresh algorithms using C++ and Python (C++, 13 stars)
  7. Bayesian-Data-Analysis: An alternative to frequentist inference (R, 10 stars)
  8. two_sigma_financial_modeling: Kaggle competition (Python, 9 stars)
  9. Variance_Reduced_Replica_Exchange_SGMCMC: Variance reduction in energy estimators accelerates the exponential convergence in deep learning (ICLR'21) (C, 8 stars)
  10. Interacting-Contour-Stochastic-Gradient-Langevin-Dynamics: A pleasantly parallel adaptive importance sampling algorithm for simulations of multi-modal distributions (ICLR'22) (Jupyter Notebook, 7 stars)
  11. Variational_Schrodinger_Diffusion_Model: A multi-variate transport-optimized diffusion model accelerated by simulation-free properties (ICML'24) (Jupyter Notebook, 4 stars)
  12. Non-reversible-Parallel-Tempering-for-Deep-Posterior-Approximation: Code for "Non-reversible Parallel Tempering for Deep Posterior Approximation" (AAAI 2023) (Roff, 3 stars)
  13. Global-Optimization-and-Monte-Carlo-Simulation-via-An-Adaptively-Weighted-Stochastic-Gradient-MCMC: Code for "An adaptively weighted stochastic gradient MCMC algorithm for Monte Carlo simulation and global optimization" (Statistics and Computing 2022) (Python, 2 stars)
  14. Jax_implementation_of_CSGLD: Test purposes (Python, 2 stars)
  15. Learn_Schrodinger_Bridge: Learn others' code for practice (Python, 1 star)
  16. Notes (1 star)