Article overview
In Quest of Ground Truth: Learning Confident Models and Estimating Uncertainty in the Presence of Annotator Noise
Authors: Asma Ahmed Hashmi; Artem Agafonov; Aigerim Zhumabayeva; Mohammad Yaqub; Martin Takáč
Date: 2 Jan 2023
Abstract: The performance of Deep Learning (DL) models depends on the quality of labels. In some areas, the involvement of human annotators may introduce noise into the data. When these corrupted labels are blindly regarded as the ground truth (GT), DL models suffer a loss in performance. This paper presents a method that aims to learn a confident model in the presence of noisy labels, in conjunction with estimating the uncertainty of multiple annotators. We robustly estimate the predictions given only the noisy labels by adding an entropy- or information-based regularizer to the classifier network. We conduct our experiments on noisy versions of the MNIST, CIFAR-10, and FMNIST datasets. Our empirical results demonstrate the robustness of our method, as it outperforms or performs comparably to other state-of-the-art (SOTA) methods. In addition, we evaluate the proposed method on a curated dataset in which the noise type and level of each annotator depend on the input image style. We show that our approach performs well and is adept at learning annotators' confusion. Moreover, we demonstrate that our model is more confident in predicting the GT than other baselines. Finally, we assess our approach on a segmentation problem and showcase its effectiveness with experiments.
Source: arXiv, 2301.00524
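
Note: the abstract mentions adding an entropy- or information-based regularizer to the classifier network, but the exact formulation is not given in this overview. The sketch below is a minimal, hypothetical illustration of what such a regularized training loss could look like in PyTorch, assuming a simple minimum-entropy penalty on the classifier's predicted distribution with an assumed weight reg_weight; it is not the paper's actual objective.

# Illustrative sketch only -- not the paper's exact objective. It combines the
# usual cross-entropy on the (possibly noisy) annotator labels with a penalty
# on the entropy of the predicted distribution, nudging the classifier toward
# confident predictions despite label noise.
import torch
import torch.nn.functional as F

def entropy_regularized_loss(logits: torch.Tensor,
                             noisy_labels: torch.Tensor,
                             reg_weight: float = 0.1) -> torch.Tensor:
    """Cross-entropy on noisy labels plus a weighted prediction-entropy penalty.

    logits:       (batch, num_classes) raw classifier outputs
    noisy_labels: (batch,) integer labels provided by annotators
    reg_weight:   regularization strength (hypothetical hyperparameter)
    """
    ce = F.cross_entropy(logits, noisy_labels)
    log_probs = F.log_softmax(logits, dim=1)
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum(dim=1).mean()
    # Penalizing prediction entropy encourages low-entropy (confident) outputs
    # even when the supervising labels are corrupted.
    return ce + reg_weight * entropy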