Article overview
Title: Improving unsupervised neural aspect extraction for online discussions using out-of-domain classification
Authors: Anton Alekseev; Elena Tutubalina; Valentin Malykh; Sergey Nikolenko
Date: 17 Jun 2020
Abstract: Deep learning architectures based on self-attention have recently achieved
and surpassed state-of-the-art results in the task of unsupervised aspect
extraction and topic modeling. While models such as neural attention-based
aspect extraction (ABAE) have been successfully applied to user-generated
texts, they are less coherent when applied to traditional data sources such as
news articles and newsgroup documents. In this work, we introduce a simple
approach based on sentence filtering in order to improve topical aspects
learned from newsgroups-based content without modifying the basic mechanism of
ABAE. We train a probabilistic classifier to distinguish between out-of-domain
texts (outer dataset) and in-domain texts (target dataset). Then, during data
preparation we filter out sentences that have a low probability of being
in-domain and train the neural model on the remaining sentences. The positive
effect of sentence filtering on topic coherence is demonstrated in comparison
to aspect extraction models trained on unfiltered texts.
Source: arXiv, 2006.09766
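The filtering step described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: a probabilistic classifier (here a tf-idf + logistic regression pipeline, an assumed choice) is trained to separate in-domain (target) from out-of-domain (outer) sentences, and only sentences with a sufficiently high in-domain probability are kept for ABAE training. The toy corpora and the 0.5 threshold are assumptions for the example.

```python
# Sketch of out-of-domain sentence filtering before aspect-extraction training.
# Classifier choice, toy data, and the 0.5 threshold are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the target (in-domain) and outer (out-of-domain) corpora.
in_domain = ["the gpu driver crashes on boot", "which kernel module handles usb"]
out_of_domain = ["the recipe calls for two eggs", "the match ended in a draw"]

# Train a probabilistic classifier: label 1 = in-domain, label 0 = out-of-domain.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(in_domain + out_of_domain,
        [1] * len(in_domain) + [0] * len(out_of_domain))

def filter_sentences(sentences, threshold=0.5):
    """Keep only sentences whose predicted in-domain probability
    exceeds the threshold; the rest are dropped before training ABAE."""
    probs = clf.predict_proba(sentences)[:, 1]  # P(in-domain) per sentence
    return [s for s, p in zip(sentences, probs) if p > threshold]

# During data preparation, candidate sentences are filtered and the
# neural aspect model is then trained on the surviving subset.
candidates = ["install the new gpu driver", "preheat the oven to 180 degrees"]
kept = filter_sentences(candidates)
```

The design point from the paper is that ABAE itself is left unmodified; only the training data changes, which is why the approach is described as simple.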