Article overview
Audio ALBERT: A Lite BERT for Self-supervised Learning of Audio Representation

Authors: Po-Han Chi; Pei-Hung Chung; Tsung-Han Wu; Chun-Cheng Hsieh; Shang-Wen Li; Hung-yi Lee

Date: 18 May 2020

Abstract: For self-supervised speech processing, it is crucial to use pretrained models as speech representation extractors. Recent work has increased model size in acoustic model training in order to achieve better performance. In this paper, we propose Audio ALBERT, a lite version of the self-supervised speech representation model. We use the representations for two downstream tasks: speaker identification and phoneme classification. We show that Audio ALBERT achieves performance competitive with the large models on the downstream tasks while using 91% fewer parameters. Moreover, we use simple probing models to measure how much speaker and phoneme information is encoded in the latent representations. In the probing experiments, we find that the latent representations encode richer information about both phoneme and speaker than the last layer does.

Source: arXiv, 2005.08575
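The 91% parameter reduction quoted in the abstract is consistent with ALBERT-style cross-layer parameter sharing, in which a single transformer encoder layer's weights are reused across all layers. The sketch below is a back-of-the-envelope illustration only, under assumed dimensions (12 layers, hidden size 768, feed-forward size 3072 — standard BERT-base values, not stated in the abstract), counting only the main weight matrices and ignoring biases, embeddings, and layer norms:

```python
# Sketch: parameter savings from ALBERT-style cross-layer weight sharing.
# Assumption: savings come from reusing one encoder layer's weights across
# all layers; dimensions below are illustrative BERT-base values.

def layer_params(d_model, d_ff):
    """Weight-matrix parameters in one transformer encoder layer."""
    attn = 4 * d_model * d_model   # Q, K, V, and output projections
    ffn = 2 * d_model * d_ff       # two feed-forward matrices
    return attn + ffn

def encoder_params(n_layers, d_model, d_ff, shared=False):
    """Total encoder parameters, with or without cross-layer sharing."""
    per_layer = layer_params(d_model, d_ff)
    return per_layer if shared else n_layers * per_layer

baseline = encoder_params(12, 768, 3072)               # 12 independent layers
shared = encoder_params(12, 768, 3072, shared=True)    # one layer, reused 12x
print(f"reduction: {1 - shared / baseline:.1%}")       # prints "reduction: 91.7%"
```

With 12 layers, sharing removes 11/12 ≈ 91.7% of the encoder's weight parameters, which lines up with the abstract's "91% fewer parameters" figure; the exact number for Audio ALBERT would depend on its actual configuration.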
No review found.