Stats | Members: 3,645 · Articles: 2,500,096 · Articles rated: 2,609 | 19 April 2024
Article overview
Bayesian Layers: A Module for Neural Network Uncertainty
Authors: Dustin Tran; Mike Dusenberry; Mark van der Wilk; Danijar Hafner
Date: 10 Dec 2018
Abstract: We describe Bayesian Layers, a module designed for fast experimentation with neural network uncertainty. It extends neural network libraries with layers capturing uncertainty over weights (Bayesian neural nets), pre-activation units (dropout), activations ("stochastic output layers"), and the function itself (Gaussian processes). With reversible layers, one can also propagate uncertainty from input to output, e.g. for flow-based distributions and constant-memory backpropagation. Bayesian Layers are a drop-in replacement for other layers, maintaining core features that one typically desires for experimentation. As a demonstration, we fit a 10-billion parameter "Bayesian Transformer" on 512 TPUv2 cores, which replaces attention layers with their Bayesian counterpart.
Source: arXiv, 1812.03973
Services: Forum | Review | PDF | Favorites
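To make the abstract's "drop-in" idea concrete, here is a minimal, illustrative NumPy sketch (not the paper's actual API; the class name `BayesianDense` and its parameters are our own): a dense layer that holds a Gaussian posterior over its weights and samples fresh weights on every forward pass, so repeated calls on the same input produce different outputs, reflecting weight uncertainty.

```python
import numpy as np

class BayesianDense:
    """Toy 'Bayesian layer': instead of point-estimate weights, the layer
    keeps a Gaussian posterior over weights and samples a new weight
    matrix on each forward pass (reparameterization trick)."""

    def __init__(self, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Variational parameters: mean and a softplus-parameterized stddev.
        self.w_mean = rng.normal(0.0, 0.1, size=(in_dim, out_dim))
        self.w_rho = np.full((in_dim, out_dim), -3.0)  # softplus(-3) -> small std
        self.rng = rng

    def __call__(self, x):
        # w = mean + std * eps, with eps ~ N(0, I); std = softplus(rho)
        std = np.log1p(np.exp(self.w_rho))
        eps = self.rng.normal(size=self.w_mean.shape)
        w = self.w_mean + std * eps
        return x @ w

layer = BayesianDense(4, 2)
x = np.ones((1, 4))
# Two forward passes on the same input give different outputs:
y1 = layer(x)
y2 = layer(x)
```

Averaging many such stochastic forward passes gives a Monte Carlo estimate of the predictive distribution, which is the experimentation workflow the module is meant to make cheap.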
No review found.
Did you like this article?
Note: replies to reviews, and questions about the article, must be posted in the forum section.
Authors may not review their own article; they can use the forum section instead.