Article overview
Latent Attention Networks
Authors: Christopher Grimm; Dilip Arumugam; Siddharth Karamcheti; David Abel; Lawson L.S. Wong; Michael L. Littman
Date: 2 Jun 2017
Abstract: Deep neural networks are able to solve tasks across a variety of domains and modalities of data. Despite many empirical successes, we lack the ability to clearly understand and interpret the learned internal mechanisms that contribute to such effective behaviors or, more critically, failure modes. In this work, we present a general method for visualizing an arbitrary neural network's inner mechanisms, along with their power and limitations. Our dataset-centric method produces visualizations of how a trained network attends to components of its inputs. The computed "attention masks" support improved interpretability by highlighting which input attributes are critical in determining output. We demonstrate the effectiveness of our framework on a variety of deep neural network architectures in domains spanning computer vision, natural language processing, and reinforcement learning. The primary contribution of our approach is an interpretable visualization of attention that provides unique insights into the network's underlying decision-making process, irrespective of the data modality.
Source: arXiv, 1706.0536
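The abstract describes computing "attention masks" that highlight which input attributes drive a network's output. As a loose illustration of that general idea only (not the paper's own latent-attention method; the toy model, weights, and function names below are all hypothetical), a normalized gradient-saliency mask over the inputs of a small linear model can be sketched as:

```python
import numpy as np

# Toy linear "network": logits = x @ W, with 4 input attributes, 3 classes.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))

def attention_mask(class_idx):
    # For a linear model, d(logit_c)/dx_i = W[i, c].
    # Take gradient magnitudes per input attribute and normalize to sum to 1,
    # yielding a mask over input components.
    grad = W[:, class_idx]
    mask = np.abs(grad)
    return mask / mask.sum()

x = rng.normal(size=4)          # an arbitrary input
mask = attention_mask(class_idx=0)
print(mask)                     # one weight per input attribute
```

Larger entries in `mask` mark input attributes whose perturbation most changes the chosen logit, which is the kind of per-input relevance signal the abstract's attention masks convey for arbitrary architectures and modalities.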