Article overview
Flooding Regularization for Stable Training of Generative Adversarial Networks
Authors: Iu Yahiro; Takashi Ishida; Naoto Yokoya
Date: 1 Nov 2023
Abstract: Generative Adversarial Networks (GANs) have shown remarkable performance in image generation. However, GAN training suffers from instability. One of the main approaches to address this problem is to modify the loss function, often by adding regularization terms in addition to changing the type of adversarial loss. This paper focuses on directly regularizing the adversarial loss function. We propose a method that applies flooding, an overfitting-suppression method from supervised learning, to GANs in order to directly prevent the discriminator's loss from becoming excessively low. Flooding requires tuning the flood level, but we propose that, when flooding is applied to GANs, the appropriate range of flood levels is determined by the adversarial loss function, and we support this with a theoretical analysis of GANs using the binary cross-entropy loss. We experimentally verify that flooding stabilizes GAN training and can be combined with other stabilization techniques. We also reveal that by restricting the discriminator's loss to be no greater than the flood level, training proceeds stably even when the flood level is somewhat high.
Source: arXiv, 2311.00318
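The flooding operation the abstract refers to can be sketched as below. This is a minimal illustration, not the paper's implementation: the flood level, the discriminator outputs, and the helper names (`flooded_loss`, `discriminator_bce`) are all assumptions made for the example. Flooding replaces a loss L with |L - b| + b for a flood level b, so gradients are unchanged while L > b but reversed once L drops below b, keeping the loss from collapsing toward zero.

```python
import numpy as np

def flooded_loss(loss, flood_level):
    # Flooding: tilde_L = |L - b| + b.
    # For L > b this equals L (same gradient); for L < b it equals
    # 2b - L, so gradient descent pushes the loss back up toward b.
    return abs(loss - flood_level) + flood_level

def discriminator_bce(d_real, d_fake, eps=1e-12):
    # Standard binary cross-entropy discriminator loss:
    # L_D = -mean(log D(x)) - mean(log(1 - D(G(z))))
    return -(np.log(d_real + eps).mean() + np.log(1.0 - d_fake + eps).mean())

# Hypothetical discriminator outputs for a batch (not from the paper).
d_real = np.array([0.90, 0.95, 0.99])  # scores on real images
d_fake = np.array([0.05, 0.10, 0.02])  # scores on generated images
raw = discriminator_bce(d_real, d_fake)

# Illustrative flood level; the paper derives the appropriate range
# from the adversarial loss function itself.
b = 0.5
print(raw, flooded_loss(raw, b))
```

One would then backpropagate through `flooded_loss(raw, b)` instead of `raw` in the discriminator update; the generator update is left unchanged.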