Stats: Members: 3,665 | Articles: 2,599,751 | Articles rated: 2,609
17 January 2025
Article overview
Learning to Maximize Mutual Information for Dynamic Feature Selection
Authors: Ian Covert; Wei Qiu; Mingyu Lu; Nayoon Kim; Nathan White; Su-In Lee
Date: 2 Jan 2023
Abstract: Feature selection helps reduce data acquisition costs in ML, but the standard approach is to train models with static feature subsets. Here, we consider the dynamic feature selection (DFS) problem where a model sequentially queries features based on the presently available information. DFS is often addressed with reinforcement learning (RL), but we explore a simpler approach of greedily selecting features based on their conditional mutual information. This method is theoretically appealing but requires oracle access to the data distribution, so we develop a learning approach based on amortized optimization. The proposed method is shown to recover the greedy policy when trained to optimality and outperforms numerous existing feature selection methods in our experiments, thus validating it as a simple but powerful approach for this problem.
Source: arXiv, 2301.00557
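The greedy policy the abstract describes can be sketched in a few lines. The following is an illustrative brute-force version over discrete data, not the paper's amortized network: conditional mutual information is estimated from empirical counts on the rows of the training set consistent with the features observed so far, and the feature with the highest estimate is queried next. The function names and the empirical-count estimator are assumptions for the sketch, not the authors' API.

```python
import numpy as np


def mutual_information(a, b):
    """Empirical mutual information I(a; b) in nats over discrete arrays."""
    mi = 0.0
    for va in np.unique(a):
        for vb in np.unique(b):
            p_ab = np.mean((a == va) & (b == vb))
            p_a = np.mean(a == va)
            p_b = np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi


def greedy_cmi_selection(X, y, x_obs, budget):
    """Greedily query features by conditional mutual information with y.

    X : (n, d) discrete training data used as the empirical distribution.
    y : (n,) discrete labels.
    x_obs : the test instance's feature values, revealed one query at a time.
    budget : number of features to acquire.

    Conditioning on the observed features is done by restricting to training
    rows that match x_obs on every selected feature (brute force, so this
    only scales to small discrete problems).
    """
    n, d = X.shape
    selected = []
    for _ in range(budget):
        # Rows consistent with the features observed so far.
        mask = np.ones(n, dtype=bool)
        for j in selected:
            mask &= (X[:, j] == x_obs[j])
        best_j, best_cmi = None, -1.0
        for j in range(d):
            if j in selected:
                continue
            # I(x_j; y | x_selected = observed values), estimated empirically.
            cmi = mutual_information(X[mask, j], y[mask])
            if cmi > best_cmi:
                best_j, best_cmi = j, cmi
        selected.append(best_j)
    return selected
```

On a toy dataset where one feature is a copy of the label and the rest are noise, the copy is selected first, which is the behavior the greedy CMI criterion is designed to produce. The paper's contribution is replacing the oracle CMI computation above with a learned network trained by amortized optimization, which this sketch does not attempt.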