Site stats: Members: 3643 · Articles: 2'487'895 · Articles rated: 2609 (28 March 2024)
Article overview
Dynamic Future Net: Diversified Human Motion Generation
Authors: Wenheng Chen; He Wang; Yi Yuan; Tianjia Shao; Kun Zhou
Date: 25 Aug 2020

Abstract: Human motion modelling is crucial in many areas such as computer graphics,
vision and virtual reality. Acquiring high-quality skeletal motions is
difficult due to the need for specialized equipment and laborious manual
post-processing, which necessitates maximizing the use of existing data to
synthesize new data. However, this is challenging due to the intrinsic
stochasticity of human motion dynamics, manifested in both the short and long term.
In the short term, there is strong randomness within a couple of frames, e.g. one
frame may be followed by multiple possible frames leading to different motion styles;
while in the long term, there are non-deterministic action transitions. In this
paper, we present Dynamic Future Net, a new deep learning model that
explicitly focuses on the aforementioned motion stochasticity by constructing a
generative model with non-trivial modelling capacity for temporal stochasticity.
Given limited amounts of data, our model can generate a large number of
high-quality motions of arbitrary duration, with visually convincing
variations in both space and time. We evaluate our model on a wide range of
motions and compare it with state-of-the-art methods. Both qualitative and
quantitative results show the superiority of our method in robustness,
versatility and quality.

Source: arXiv, 2009.05109
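The abstract describes a generator that handles short-term randomness (one pose followed by many plausible next poses) alongside long-term context. As a rough illustration only, the sketch below shows the general shape of such a model: an autoregressive rollout that samples a fresh latent at every frame, with a recurrent state carrying history. All names, dimensions, and weights here are hypothetical toy choices and untrained; this is not the paper's actual Dynamic Future Net architecture.

```python
# Hypothetical sketch (NOT the paper's architecture): an autoregressive
# generator that draws a new latent z at every frame, so one pose can be
# followed by many different next poses (short-term stochasticity), while
# the recurrent state h carries longer-term context across the sequence.
import numpy as np

rng = np.random.default_rng(0)

POSE_DIM = 6     # toy skeletal pose size (real skeletons have far more DOFs)
LATENT_DIM = 2   # per-frame stochastic latent
HIDDEN_DIM = 8   # recurrent state size

# Random, untrained weights -- these only illustrate the data flow.
W_h = rng.standard_normal((HIDDEN_DIM, HIDDEN_DIM)) * 0.1
W_x = rng.standard_normal((HIDDEN_DIM, POSE_DIM)) * 0.1
W_z = rng.standard_normal((HIDDEN_DIM, LATENT_DIM)) * 0.1
W_out = rng.standard_normal((POSE_DIM, HIDDEN_DIM)) * 0.1

def generate(seed_pose, num_frames):
    """Roll out num_frames poses; each step samples its own latent."""
    h = np.zeros(HIDDEN_DIM)
    pose = seed_pose
    frames = []
    for _ in range(num_frames):
        z = rng.standard_normal(LATENT_DIM)          # per-frame randomness
        h = np.tanh(W_h @ h + W_x @ pose + W_z @ z)  # recurrent update
        pose = pose + W_out @ h                      # residual pose update
        frames.append(pose)
    return np.stack(frames)

seed = np.zeros(POSE_DIM)
a = generate(seed, 30)  # arbitrary duration: any num_frames works
b = generate(seed, 30)  # same seed pose, but different latents -> new motion
print(a.shape)          # (30, 6)
```

Because each step injects its own latent sample, two rollouts from the identical seed pose diverge into distinct motions, which is the diversity property the abstract emphasizes.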