Article overview
Title: Distributed Control by Lagrangian Steepest Descent
Authors: David H. Wolpert; Stefan Bieniawski
Date: 10 Mar 2004
Subjects: Multiagent Systems; Computer Science and Game Theory; Adaptation and Self-Organizing Systems
ACM classes: J.6; J.7; G.m
arXiv categories: cs.MA cs.GT nlin.AO
Abstract: Often adaptive, distributed control can be viewed as an iterated game between independent players. The coupling between the players' mixed strategies, arising as the system evolves from one instant to the next, is determined by the system designer. Information theory tells us that the most likely joint strategy of the players, given a value of the expectation of the overall control objective function, is the minimizer of a Lagrangian function of the joint strategy. So the goal of the system designer is to speed evolution of the joint strategy to that Lagrangian-minimizing point, lower the expected value of the control objective function, and repeat. Here we elaborate the theory of algorithms that do this using local descent procedures, and that thereby achieve efficient, adaptive, distributed control.
Source: arXiv, cs.MA/0403012
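The descent procedure the abstract describes can be illustrated with a minimal sketch, under assumptions not spelled out in the abstract itself: two agents with binary moves, a hypothetical joint cost matrix `G`, and a maxent Lagrangian L(q) = E_q[G] - T·S(q) over the product of the agents' mixed strategies. Each agent in turn replaces its strategy with the block minimizer of L (a Boltzmann distribution over its conditional expected cost), so L never increases.

```python
import math

# Hypothetical 2x2 joint cost G[x1][x2] to be minimized (an assumption
# for illustration; the paper treats general objective functions).
G = [[0.0, 2.0],
     [2.0, 1.0]]

T = 0.5  # temperature weighting the entropy term of the Lagrangian


def lagrangian(q1, q2):
    """Maxent Lagrangian L(q) = E_q[G] - T * S(q) for the product q1 x q2."""
    exp_g = sum(q1[a] * q2[b] * G[a][b] for a in range(2) for b in range(2))
    entropy = -sum(p * math.log(p) for p in q1 + q2 if p > 0)
    return exp_g - T * entropy


def boltzmann_step(q_other, own_index):
    """Minimize L over one agent's mixed strategy with the other held fixed.

    With the other agent fixed, L is linear in this agent's strategy plus
    its own entropy term, so the exact minimizer is the Boltzmann
    distribution over the agent's conditional expected cost.
    """
    cond = []
    for a in range(2):
        if own_index == 0:
            cond.append(sum(q_other[b] * G[a][b] for b in range(2)))
        else:
            cond.append(sum(q_other[b] * G[b][a] for b in range(2)))
    weights = [math.exp(-c / T) for c in cond]
    z = sum(weights)
    return [w / z for w in weights]


# Start from uniform mixed strategies and do block-coordinate descent on L.
q1, q2 = [0.5, 0.5], [0.5, 0.5]
L0 = lagrangian(q1, q2)
for _ in range(50):
    q1 = boltzmann_step(q2, 0)
    q2 = boltzmann_step(q1, 1)
print(lagrangian(q1, q2) <= L0)  # -> True: each block step is exact, so L is non-increasing
```

This is only one local descent scheme consistent with the abstract's framing; the paper elaborates a family of such algorithms, and the closed-form Boltzmann block update here stands in for whatever descent procedure a given system designer chooses.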