Stats | Members: 3645 | Articles: 2'503'724 | Articles rated: 2609
23 April 2024
Article overview
Lifting Interpretability-Performance Trade-off via Automated Feature Engineering
Authors: Alicja Gosiewska; Przemyslaw Biecek
Date: 11 Feb 2020
Abstract: Complex black-box predictive models may achieve high performance, but their lack of interpretability causes problems such as lack of trust, lack of stability, and sensitivity to concept drift. On the other hand, achieving satisfactory accuracy with interpretable models requires more time-consuming work on feature engineering. Can we train interpretable and accurate models without time-consuming feature engineering? We propose a method that uses elastic black-boxes as surrogate models to create simpler, less opaque, yet still accurate and interpretable glass-box models. The new models are built on features engineered with the help of the surrogate model. We support the analysis with a large-scale benchmark on several tabular data sets from the OpenML database. There are two results: 1) extracting information from complex models may improve the performance of linear models, and 2) the benchmark questions the common myth that complex machine learning models outperform linear models.
Source: arXiv, 2002.04267
Services: Forum | Review | PDF | Favorites
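The surrogate-based idea in the abstract can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the authors' exact implementation): a gradient-boosting model serves as the elastic black-box, each feature is discretized at cut points where the surrogate's response changes (found here with a shallow per-feature regression tree, an assumed design choice), and a logistic regression is then fit on the one-hot-encoded bins as the glass-box model.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1. Fit an elastic black-box surrogate and record its responses.
surrogate = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
scores = surrogate.predict_proba(X_tr)[:, 1]

def engineer(X_fit, scores, X_apply):
    """For each feature, derive cut points from the surrogate's response
    (via a shallow regression tree) and one-hot encode the resulting bins."""
    cols = []
    for j in range(X_fit.shape[1]):
        tree = DecisionTreeRegressor(max_depth=2, random_state=0)
        tree.fit(X_fit[:, [j]], scores)
        # Internal-node thresholds are the cut points; -2 marks leaves.
        cuts = np.sort(tree.tree_.threshold[tree.tree_.threshold != -2])
        binned = np.digitize(X_apply[:, j], cuts)
        cols.append(np.eye(len(cuts) + 1)[binned])  # one-hot bins
    return np.hstack(cols)

# 2. Build engineered features and fit an interpretable glass-box model.
Z_tr = engineer(X_tr, scores, X_tr)
Z_te = engineer(X_tr, scores, X_te)
glass_box = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
print("glass-box accuracy:", round(glass_box.score(Z_te, y_te), 3))
```

Because the glass-box model is linear in a small set of interpretable bins, its coefficients can be read off directly, while the binning itself carries information distilled from the black-box.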
No review found.
Note: answers to reviews or questions about the article must be posted in the forum section.
Authors are not allowed to review their own article. They can use the forum section.