Science-advisor
 Article overview



LPFS: Learnable Polarizing Feature Selection for Click-Through Rate Prediction
Authors: Yi Guo; Zhaocheng Liu; Jianchao Tan; Chao Liao; Daqing Chang; Qiang Liu; Sen Yang; Ji Liu; Dongying Kong; Zhi Chen; Chengru Song
Date: 1 Jun 2022
Abstract: In industry, feature selection is a standard but necessary step in searching for an optimal set of informative feature fields for efficient and effective training of deep Click-Through Rate (CTR) models. Most previous works measure the importance of feature fields by their corresponding continuous weights in the model, then remove the feature fields with small weight values. However, removing many features whose weights are small but not exactly zero inevitably hurts model performance and is unfriendly to hot-start model training. There is also no theoretical guarantee that the magnitude of a weight represents its importance, so these methods can lead to sub-optimal results.
To tackle this problem, we propose a novel Learnable Polarizing Feature Selection (LPFS) method using a smoothed-$\ell^0$ function from the literature. Furthermore, we extend LPFS to LPFS++ with our newly designed smoothed-$\ell^0$-like function to select a more informative subset of features. LPFS and LPFS++ can be used as gates inserted at the input of the deep network to control the active or inactive state of each feature. When training finishes, some gates are exactly zero while others are around one, which is particularly favored by practical hot-start training in industry, since removing the features corresponding to exact-zero gates does no damage to model performance. Experiments show that our methods outperform others by a clear margin and have achieved strong A/B test results at KuaiShou Technology.
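The polarizing-gate idea in the abstract can be illustrated with a minimal sketch. Note the assumptions: the surrogate `g(x) = x^2 / (x^2 + epsilon)` is one common smooth approximation of the ℓ0 indicator, and the function names and the value of `epsilon` here are illustrative — the exact functional forms used by LPFS and LPFS++ are defined in the paper itself.

```python
import numpy as np

def smoothed_l0_gate(x, epsilon=1e-2):
    """One smooth surrogate of the l0 indicator: ~0 for x near 0,
    approaching 1 as |x| grows well beyond sqrt(epsilon).

    Hypothetical form for illustration; LPFS's actual gate function
    is specified in the paper.
    """
    x = np.asarray(x, dtype=float)
    return x * x / (x * x + epsilon)

# One learnable scalar per feature field. After training, some parameters
# collapse toward zero (gate ~ 0, field removable with no performance
# change) while others grow (gate ~ 1, field kept).
field_params = np.array([0.0, 0.01, 0.5, 2.0])
gates = smoothed_l0_gate(field_params)

# Each field's embedding is scaled by its gate before entering the deep
# network, e.g.:  gated_embedding_i = gates[i] * embedding_i
```

Because the gate is exactly zero for a zero parameter, dropping those feature fields leaves the network's outputs unchanged, which is what makes hot-start retraining painless.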
Source: arXiv, 2206.00267