Science-advisor
 Article overview


LID 2020: The Learning from Imperfect Data Challenge Results
Authors: Yunchao Wei; Shuai Zheng; Ming-Ming Cheng; Hang Zhao; Liwei Wang; Errui Ding; Yi Yang; Antonio Torralba; Ting Liu; Guolei Sun; Wenguan Wang; Luc Van Gool; Wonho Bae; Junhyug Noh; Jinhwan Seo; Gunhee Kim; Hao Zhao; Ming Lu; Anbang Yao; Yiwen Guo; Yurong Chen; Li Zhang; Chuangchuang Tan; Tao Ruan; Guanghua Gu; Shikui Wei; Yao Zhao; Mariia Dobko; Ostap Viniavskyi; Oles Dobosevych; Zhendong Wang; Zhenyuan Chen; Chen Gong; Huanqing Yan; Jun He
Date: 17 Oct 2020
Abstract: Learning from imperfect data has become a pressing issue in many industrial applications, now that the research community has made profound progress in supervised learning from perfectly annotated datasets. The purpose of the Learning from Imperfect Data (LID) workshop is to inspire and facilitate research on novel approaches that harness imperfect data and improve data efficiency during training. A massive amount of user-generated data is nowadays available on multiple internet services; how to leverage it to improve machine learning models is a high-impact problem. We organize the challenges in conjunction with the workshop. The goal of these challenges is to identify state-of-the-art approaches in the weakly supervised learning setting for object detection, semantic segmentation, and scene parsing. The challenge has three tracks: weakly supervised semantic segmentation (Track 1), weakly supervised scene parsing (Track 2), and weakly supervised object localization (Track 3). In Track 1, based on ILSVRC DET, we provide pixel-level annotations of 15K images from 200 categories for evaluation. In Track 2, we provide point-based annotations for the training set of ADE20K. In Track 3, based on ILSVRC CLS-LOC, we provide pixel-level annotations of 44,271 images for evaluation. In addition, we introduce a new evaluation metric proposed by Zhang et al. [zhang2020rethinking], the IoU curve, to measure the quality of the generated object localization maps. This technical report summarizes the highlights of the challenge. The challenge submission server and the leaderboard will remain open to interested researchers. More details regarding the challenge and the benchmarks are available at this https URL
Source: arXiv, 2010.11724
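
The IoU curve mentioned in the abstract is defined in the cited work [zhang2020rethinking]; as a rough illustration of the idea only, the Python sketch below sweeps a binarization threshold over a continuous localization map and records the IoU against a ground-truth mask at each step. The function name, threshold grid, and synthetic example are assumptions for illustration, not the challenge's official evaluation code.

```python
# Minimal sketch of an IoU curve for object localization maps.
# Assumption: in the spirit of the metric cited as [zhang2020rethinking],
# we binarize a continuous localization map at a sweep of thresholds and
# record the IoU against the ground-truth mask at each threshold.

import numpy as np

def iou_curve(loc_map: np.ndarray, gt_mask: np.ndarray, num_thresholds: int = 100):
    """Return (thresholds, ious) for a localization map with values in
    [0, 1] against a binary ground-truth mask of the same shape."""
    thresholds = np.linspace(0.0, 1.0, num_thresholds)
    gt = gt_mask.astype(bool)
    ious = []
    for t in thresholds:
        pred = loc_map >= t                      # binarize at this threshold
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        ious.append(inter / union if union > 0 else 0.0)
    return thresholds, np.asarray(ious)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = np.zeros((64, 64), dtype=bool)
    gt[16:48, 16:48] = True                                   # synthetic ground-truth region
    loc = np.clip(gt + 0.3 * rng.random((64, 64)), 0.0, 1.0)  # noisy localization map
    ts, ious = iou_curve(loc, gt)
    print(f"peak IoU {ious.max():.3f} at threshold {ts[ious.argmax()]:.2f}")
```

The peak of the curve is one natural summary of localization quality; the exact way the official benchmark aggregates the curve is specified in [zhang2020rethinking].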