Article overview
Emergent collective intelligence from massive-agent cooperation and competition | Hanmo Chen; Stone Tao; Jiaxin Chen; Weihan Shen; Xihui Li; Chenghui Yu; Sikai Cheng; Xiaolong Zhu; Xiu Li | Date: |
4 Jan 2023 | Abstract: | Inspired by organisms evolving through cooperation and competition between
different populations on Earth, we study the emergence of artificial collective
intelligence through massive-agent reinforcement learning. To this end, we
propose a new massive-agent reinforcement learning environment, Lux, where
dynamic and massive agents in two teams scramble for limited resources and
fight off the darkness. In Lux, we build our agents through the standard
reinforcement learning algorithm in curriculum learning phases and leverage
centralized control via a pixel-to-pixel policy network. As agents co-evolve
through self-play, we observe several stages of intelligence, from the
acquisition of atomic skills to the development of group strategies. Since
these learned group strategies arise from individual decisions without an
explicit coordination mechanism, we claim that artificial collective
intelligence emerges from massive-agent cooperation and competition. We further
analyze the emergence of various learned strategies through metrics and
ablation studies, aiming to provide insights for reinforcement learning
implementations in massive-agent environments. | Source: | arXiv, 2301.01609 | Services: | Forum | Review | PDF | Favorites |
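The abstract's "centralized control via a pixel-to-pixel policy network" can be illustrated with a minimal sketch: a single policy emits one action distribution per map cell, and every unit simply reads off the action at its own cell, so coordination is implicit rather than per-agent. All names here (`fake_policy`, `decode_actions`, the five-action set) are illustrative assumptions, not taken from the paper's code.

```python
import random

# Hypothetical action set for a grid-world unit (assumption, not from the paper).
N_ACTIONS = 5  # e.g. stay / move north / east / south / west

def fake_policy(grid_size):
    """Stand-in for the policy network: random per-cell action logits,
    shaped (grid_size, grid_size, N_ACTIONS)."""
    return [[[random.random() for _ in range(N_ACTIONS)]
             for _ in range(grid_size)]
            for _ in range(grid_size)]

def decode_actions(logits, units):
    """Centralized pixel-to-pixel decoding: each unit takes the argmax
    action found at its own (x, y) cell of the shared output map."""
    actions = {}
    for uid, (x, y) in units.items():
        cell = logits[y][x]
        actions[uid] = max(range(N_ACTIONS), key=lambda a: cell[a])
    return actions

# Usage: two units on an 8x8 map share one forward pass of the policy.
units = {"u1": (0, 0), "u2": (3, 7)}
logits = fake_policy(grid_size=8)
print(decode_actions(logits, units))
```

The point of the design is that adding more units costs nothing extra at inference time: one forward pass covers the whole map, and group behaviour emerges from many units reading the same output.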