Article overview
Learning Ambiguity from Crowd Sequential Annotations | Xiaolei Lu
Date: 4 Jan 2023
Abstract: Most crowdsourcing learning methods treat disagreement between annotators as noisy labeling, whereas disagreement among experts is often a good indicator of the ambiguity and uncertainty inherent in natural language. In this paper, we propose a framework called Learning Ambiguity from Crowd Sequential Annotations (LA-SCA) to exploit the inter-disagreement between reliable annotators and effectively preserve confusing label information. First, a hierarchical Bayesian model is developed to infer the ground truth from crowds and to group annotators with similar reliability together. By modeling the relationship between the size of the group an annotator belongs to, the annotator's reliability, and each element's unambiguity in a sequence, the inter-disagreement between reliable annotators on ambiguous elements is computed to obtain label-confusion information, which is then incorporated into cost-sensitive sequence labeling. Experimental results on POS tagging and NER tasks show that our proposed framework achieves competitive performance in inferring ground truth from crowds and in predicting unknown sequences, and that interpreting the hierarchical clustering results helps discover the labeling patterns of annotators with similar reliability.
Source: arXiv, 2301.01579
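The abstract describes turning disagreement among reliable annotators into label-confusion information for ambiguous sequence elements. Below is a minimal, hypothetical Python sketch of that general idea only: it converts the votes of a given set of "reliable" annotators into a per-token label distribution and uses its entropy as a simple ambiguity proxy. The function name, data layout, and the assumption that reliable annotators are already known are illustrative; in the paper itself, reliability and ground truth are inferred with a hierarchical Bayesian model.

```python
# Illustrative sketch (not the LA-SCA model): given sequential annotations
# from several annotators, build a per-token label distribution over the
# annotators treated as reliable, and use its entropy to flag ambiguous tokens.
from collections import Counter
import math


def token_label_confusion(annotations, reliable_ids):
    """annotations: dict annotator_id -> list of labels (one per token).
    reliable_ids: annotators assumed reliable (given here for illustration).
    Returns a list of (label_distribution, entropy), one entry per token."""
    seq_len = len(next(iter(annotations.values())))
    results = []
    for t in range(seq_len):
        votes = Counter(annotations[a][t] for a in reliable_ids)
        total = sum(votes.values())
        dist = {label: count / total for label, count in votes.items()}
        entropy = -sum(p * math.log(p) for p in dist.values())
        results.append((dist, entropy))
    return results


# Example: three annotators disagree on the second token (POS tagging),
# so it receives a spread-out label distribution and higher entropy.
ann = {
    "a1": ["NOUN", "VERB", "DET"],
    "a2": ["NOUN", "NOUN", "DET"],
    "a3": ["NOUN", "VERB", "DET"],
}
for dist, h in token_label_confusion(ann, ["a1", "a2", "a3"]):
    print(dist, round(h, 3))
```

A soft distribution of this kind is one natural input to cost-sensitive sequence labeling, where confusable labels are penalized less than clearly wrong ones; the paper's actual formulation should be consulted for how the confusion information is weighted.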