Article overview
Title: Neural Network Architecture Optimization through Submodularity and Supermodularity
Authors: Junqi Jin; Ziang Yan; Kun Fu; Nan Jiang; Changshui Zhang
Date: 1 Sep 2016
Abstract: Deep learning models' architectures, including depth and width, are key factors influencing models' performance, such as test accuracy and computation time. This paper solves two problems: given a computation time budget, choose an architecture that maximizes accuracy; and given an accuracy requirement, choose an architecture that minimizes computation time. We convert this architecture optimization into a subset selection problem. Exploiting the submodularity of accuracy and the supermodularity of computation time, we propose efficient greedy optimization algorithms. The experiments demonstrate our algorithms' ability to find more accurate or faster models. By analyzing how the selected architecture evolves as the time budget grows, we discuss the relationships among accuracy, time, and architecture, and give suggestions for neural network architecture design.
Source: arXiv, 1609.0074
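The abstract describes a greedy algorithm that exploits the submodularity of accuracy and the supermodularity of computation time under a time budget. The sketch below illustrates one plausible cost-aware greedy selection of that kind; the surrogate functions accuracy_gain and time_cost, and all names and numbers, are illustrative assumptions, not the paper's actual formulation.

# Minimal sketch of budget-constrained greedy subset selection, assuming a
# monotone submodular accuracy surrogate and a supermodular time cost.
from itertools import combinations

def accuracy_gain(selected, candidate, unit_value):
    """Marginal accuracy gain of adding `candidate` (diminishing returns)."""
    # Submodular surrogate: each extra unit's contribution shrinks as more
    # units are already selected.
    return unit_value[candidate] / (1 + len(selected))

def time_cost(selected, unit_time):
    """Supermodular time cost: per-unit time plus a pairwise interaction term."""
    base = sum(unit_time[u] for u in selected)
    interaction = 0.01 * sum(unit_time[a] * unit_time[b]
                             for a, b in combinations(selected, 2))
    return base + interaction

def greedy_under_budget(candidates, unit_value, unit_time, time_budget):
    """Greedily add the unit with the best gain-per-extra-cost ratio
    until no remaining candidate fits within the time budget."""
    selected = set()
    while True:
        best, best_ratio = None, 0.0
        current_cost = time_cost(selected, unit_time)
        for c in candidates - selected:
            extra = time_cost(selected | {c}, unit_time) - current_cost
            if current_cost + extra > time_budget:
                continue
            ratio = accuracy_gain(selected, c, unit_value) / max(extra, 1e-9)
            if ratio > best_ratio:
                best, best_ratio = c, ratio
        if best is None:
            return selected
        selected.add(best)

# Hypothetical usage: five candidate depth/width "units" with made-up values.
units = {f"unit{i}" for i in range(5)}
value = {u: 1.0 + 0.2 * i for i, u in enumerate(sorted(units))}
time_ = {u: 2.0 + i for i, u in enumerate(sorted(units))}
print(greedy_under_budget(units, value, time_, time_budget=10.0))

The gain-per-cost ratio rule is a standard heuristic for submodular maximization under a knapsack-style constraint; the paper's second problem (minimize time subject to an accuracy requirement) can be sketched analogously by swapping the roles of objective and constraint.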