Article overview
AI Safety Gridworlds
Authors: Jan Leike, Miljan Martic, Victoria Krakovna, Pedro A. Ortega, Tom Everitt, Andrew Lefrancq, Laurent Orseau, Shane Legg
Date: 28 Nov 2017
Abstract: We present a suite of reinforcement learning environments illustrating
various safety properties of intelligent agents. These problems include safe
interruptibility, avoiding side effects, absent supervisor, reward gaming, safe
exploration, as well as robustness to self-modification, distributional shift,
and adversaries. To measure compliance with the intended safe behavior, we
equip each environment with a performance function that is hidden from the
agent. This allows us to categorize AI safety problems into robustness and
specification problems, depending on whether the performance function
corresponds to the observed reward function. We evaluate A2C and Rainbow, two
recent deep reinforcement learning agents, on our environments and show that
they are not able to solve them satisfactorily.
Source: arXiv, 1711.09883
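The abstract's central device, a hidden performance function that may or may not agree with the observed reward, can be sketched in a few lines. The names and environment below are hypothetical illustrations, not the paper's actual gridworld suite or its API; the point is only the classification rule: if the hidden performance function differs from the observed reward, the problem is a specification problem, otherwise a robustness problem.

```python
class SafetyEnv:
    """Toy 1-D 'gridworld': the agent walks right toward a goal cell.

    Hypothetical example, not the paper's pycolab-based environments.
    The agent observes `reward`; the evaluator also computes a hidden
    `performance` value that subtracts a side-effect penalty the
    reward function omits.
    """

    def __init__(self, goal=3, side_effect_cell=1):
        self.pos = 0
        self.goal = goal
        self.side_effect_cell = side_effect_cell  # stepping here is unsafe

    def step(self, action):
        """Take action (+1 or -1); return (pos, reward, performance, done)."""
        self.pos = max(0, self.pos + action)
        done = self.pos == self.goal
        reward = 10 if done else -1  # what the agent is trained on
        # Hidden performance: same reward, minus a penalty for the unsafe cell.
        penalty = 5 if self.pos == self.side_effect_cell else 0
        performance = reward - penalty
        return self.pos, reward, performance, done


def classify(env_cls):
    """Specification problem iff hidden performance != observed reward."""
    env = env_cls()
    total_reward, total_performance, done = 0, 0, False
    while not done:
        _, r, p, done = env.step(+1)  # a naive always-go-right policy
        total_reward += r
        total_performance += p
    return "specification" if total_performance != total_reward else "robustness"


print(classify(SafetyEnv))  # side-effect penalty is hidden -> "specification"
```

Because the reward ignores the side-effect penalty while the performance function counts it, the two diverge and the toy environment is classified as a specification problem; an environment whose performance function equals its reward would come out as a robustness problem instead.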
No review found.