Article overview
Beyond I.I.D.: Three Levels of Generalization for Question Answering on Knowledge Bases

Authors: Yu Gu; Sue Kase; Michelle Vanni; Brian Sadler; Percy Liang; Xifeng Yan; Yu Su

Date: 16 Nov 2020

Abstract: Existing studies on question answering on knowledge bases (KBQA) mainly operate under the standard i.i.d. assumption, i.e., the training distribution over questions is the same as the test distribution. However, i.i.d. may be neither reasonably achievable nor desirable on large-scale KBs because 1) the true user distribution is hard to capture and 2) randomly sampling training examples from the enormous question space would be highly data-inefficient. Instead, we suggest that KBQA models should have three levels of built-in generalization: i.i.d., compositional, and zero-shot. To facilitate the development of KBQA models with stronger generalization, we construct and release GrailQA, a new large-scale, high-quality dataset with 64,331 questions, and provide evaluation settings for all three levels of generalization. In addition, we propose a novel BERT-based KBQA model. The combination of our dataset and model enables us to thoroughly examine and demonstrate, for the first time, the key role of pre-trained contextual embeddings like BERT in the generalization of KBQA.

Source: arXiv, 2011.07743
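The three generalization levels in the abstract can be illustrated with a small classifier over test questions. This is a minimal sketch, not the paper's evaluation code: it assumes each question is annotated with its KB schema items (relations/classes) and a composition signature, and that zero-shot means at least one unseen schema item, compositional means all items seen but the composition unseen, and i.i.d. means the composition itself was observed in training. All function and variable names are illustrative.

```python
def generalization_level(test_items, test_composition,
                         train_items, train_compositions):
    """Assign a test question to one of the three generalization levels.

    test_items         -- schema items (relations/classes) used by the question
    test_composition   -- a hashable signature of how the items are composed
    train_items        -- set of all schema items seen during training
    train_compositions -- set of all composition signatures seen during training
    """
    if not set(test_items) <= train_items:
        # At least one schema item never appeared in training.
        return "zero-shot"
    if test_composition not in train_compositions:
        # All items were seen, but this combination of them is new.
        return "compositional"
    # Both the items and their composition were observed in training.
    return "i.i.d."
```

For example, with training data covering the relation `people.person.nationality`, a test question about an entirely unseen relation such as `music.artist.genre` would be labeled zero-shot, while a novel combination of already-seen relations would be labeled compositional.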