Stat | Members: 3643 | Articles: 2'487'895 | Articles rated: 2609 | 28 March 2024
Article overview
Found in Translation: Learning Robust Joint Representations by Cyclic Translations Between Modalities
Authors: Hai Pham; Paul Pu Liang; Thomas Manzini; Louis-Philippe Morency; Barnabas Poczos
Date: 19 Dec 2018
Abstract: Multimodal sentiment analysis is a core research area that studies speaker
sentiment expressed from the language, visual, and acoustic modalities. The
central challenge in multimodal learning involves inferring joint
representations that can process and relate information from these modalities.
However, existing work learns joint representations by requiring all modalities
as input and as a result, the learned representations may be sensitive to noisy
or missing modalities at test time. With the recent success of sequence to
sequence (Seq2Seq) models in machine translation, there is an opportunity to
explore new ways of learning joint representations that may not require all
input modalities at test time. In this paper, we propose a method to learn
robust joint representations by translating between modalities. Our method is
based on the key insight that translation from a source to a target modality
provides a method of learning joint representations using only the source
modality as input. We augment modality translations with a cycle consistency
loss to ensure that our joint representations retain maximal information from
all modalities. Once our translation model is trained with paired multimodal
data, we only need data from the source modality at test time for final
sentiment prediction. This ensures that our model remains robust from
perturbations or missing information in the other modalities. We train our
model with a coupled translation-prediction objective and it achieves new
state-of-the-art results on multimodal sentiment analysis datasets: CMU-MOSI,
ICT-MMMO, and YouTube. Additional experiments show that our model learns
increasingly discriminative joint representations with more input modalities
while maintaining robustness to missing or perturbed modalities.
Source: arXiv, 1812.07809
Services: Forum | Review | PDF | Favorites
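As a rough illustration of the coupled translation-prediction objective the abstract describes, the toy NumPy sketch below combines a forward translation loss, a cycle consistency loss (translating back to the source modality), and a sentiment prediction loss. The linear "translators", loss weighting, and all variable names are assumptions for illustration only; the paper itself uses Seq2Seq models, not linear maps.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(a, b):
    """Mean squared error between two arrays, as a plain float."""
    return float(np.mean((a - b) ** 2))

# Toy paired "modalities": language is the source, visual is the target.
language = rng.normal(size=(4, 8))   # batch of 4 examples, feature dim 8
visual   = rng.normal(size=(4, 8))

# Stand-in linear "translators" (the paper uses Seq2Seq encoder-decoders).
W_fwd  = 0.1 * rng.normal(size=(8, 8))   # language -> visual
W_back = 0.1 * rng.normal(size=(8, 8))   # visual   -> language
w_pred = 0.1 * rng.normal(size=(8,))     # sentiment head on the joint representation

# The joint representation is computed from the source modality alone,
# which is why only `language` is needed at test time.
joint = language @ W_fwd

translation_loss = mse(joint, visual)         # forward translation error
cycle_loss = mse(joint @ W_back, language)    # cyclic translation back to the source
labels = rng.normal(size=(4,))                # fake sentiment targets
prediction_loss = mse(joint @ w_pred, labels)

# Coupled objective: all three terms are trained jointly.
total_loss = translation_loss + cycle_loss + prediction_loss
print(round(total_loss, 4))
```

At test time, only the first line computing `joint` from `language` and the prediction head are needed; the visual (and acoustic) inputs can be missing or perturbed without affecting the forward pass.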
No review found.
Note: answers to reviews or questions about the article must be posted in the forum section.
Authors are not allowed to review their own articles; they can use the forum section instead.