Stats: Members: 3645 | Articles: 2,504,928 | Articles rated: 2609 | 25 April 2024
Article overview
Deep High-Resolution Representation Learning for Visual Recognition
Authors: Jingdong Wang; Ke Sun; Tianheng Cheng; Borui Jiang; Chaorui Deng; Yang Zhao; Dong Liu; Yadong Mu; Mingkui Tan; Xinggang Wang; Wenyu Liu; Bin Xiao
Date: 20 Aug 2019
Abstract: High-resolution representations are essential for position-sensitive vision
problems, such as human pose estimation, semantic segmentation, and object
detection. Existing state-of-the-art frameworks first encode the input image as
a low-resolution representation through a subnetwork that is formed by
connecting high-to-low resolution convolutions in series (e.g., ResNet,
VGGNet), and then recover the high-resolution representation from the encoded
low-resolution representation. Instead, our proposed network, named the
High-Resolution Network (HRNet), maintains high-resolution representations
through the whole process. There are two key characteristics: (i) Connect the
high-to-low resolution convolution streams in parallel; (ii) Repeatedly
exchange the information across resolutions. The benefit is that the resulting
representation is semantically richer and spatially more precise. We show the
superiority of the proposed HRNet in a wide range of applications, including
human pose estimation, semantic segmentation, and object detection, suggesting
that the HRNet is a stronger backbone for computer vision problems. All the
code is available at this https URL.
Source: arXiv, 1908.07919
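The two key characteristics above — parallel multi-resolution streams with repeated cross-resolution exchange — can be illustrated with a toy NumPy sketch. This is an assumption-laden illustration, not the authors' HRNet implementation: real HRNet uses strided convolutions for downsampling and learned upsampling, whereas here average pooling and nearest-neighbour repetition stand in for them.

```python
import numpy as np

def downsample(x, factor):
    # Toy stand-in for a strided convolution: average-pool by an integer factor.
    h, w = x.shape
    return x.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(x, factor):
    # Toy stand-in for upsampling + 1x1 conv: nearest-neighbour repetition.
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

def exchange(streams):
    """Fuse every resolution stream into every other by summing resampled copies,
    so each stream keeps its own resolution while receiving information from all
    the others (the 'repeated exchange' step of the abstract)."""
    fused = []
    for i in range(len(streams)):
        acc = np.zeros_like(streams[i])
        for j, s in enumerate(streams):
            if j == i:
                acc += s
            elif j < i:  # higher-resolution stream: bring it down
                acc += downsample(s, 2 ** (i - j))
            else:        # lower-resolution stream: bring it up
                acc += upsample(s, 2 ** (j - i))
        fused.append(acc)
    return fused

# Two parallel streams: full resolution (8x8) and half resolution (4x4).
streams = exchange([np.ones((8, 8)), np.ones((4, 4))])
print([s.shape for s in streams])  # each stream keeps its resolution: [(8, 8), (4, 4)]
```

Note how, unlike a series encoder-decoder, the high-resolution stream is never discarded: after the exchange both streams still exist at their original resolutions, each enriched by the other.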