$R_1$ Regularization is a gradient penalty for training generative adversarial networks. It discourages the discriminator from deviating from the Nash equilibrium by penalizing its gradient on real data alone: when the generator distribution matches the true data distribution and the discriminator equals 0 on the data manifold, the penalty ensures that the discriminator cannot create a non-zero gradient orthogonal to the data manifold without suffering a loss in the GAN game.
This leads to the following regularization term:
$$ R_{1}\left(\psi\right) = \frac{\gamma}{2}\,\mathbb{E}_{p_{D}\left(x\right)}\left[\left\lVert\nabla D_{\psi}\left(x\right)\right\rVert^{2}\right] $$
Source: *Which Training Methods for GANs do actually Converge?*
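The regularization term above can be sketched in PyTorch. This is a minimal illustration, not the reference implementation from the paper: `discriminator`, `real_x`, and the default `gamma` are assumed stand-ins for a standard `torch.nn.Module` discriminator, a batch of real samples, and the penalty weight $\gamma$.

```python
import torch


def r1_penalty(discriminator, real_x, gamma=10.0):
    """R1 penalty: (gamma / 2) * E_{p_D(x)}[ ||grad_x D(x)||^2 ] on real data.

    `discriminator` is assumed to map a batch of inputs to per-sample scores;
    `gamma=10.0` is an illustrative default, not a value from the paper.
    """
    # Enable gradients with respect to the real inputs only.
    real_x = real_x.detach().requires_grad_(True)
    d_out = discriminator(real_x)
    # Gradient of the summed discriminator scores w.r.t. the inputs;
    # create_graph=True so the penalty itself can be backpropagated.
    (grad,) = torch.autograd.grad(
        outputs=d_out.sum(), inputs=real_x, create_graph=True
    )
    # Squared L2 norm per sample, averaged over the batch.
    grad_norm2 = grad.pow(2).reshape(grad.shape[0], -1).sum(dim=1)
    return (gamma / 2) * grad_norm2.mean()
```

In training, this term would typically be added to the discriminator loss on real batches only, which is what distinguishes $R_1$ from two-sided penalties such as the WGAN-GP gradient penalty.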
| Task | Papers | Share |
|---|---|---|
| Image Generation | 117 | 16.74% |
| Disentanglement | 44 | 6.29% |
| Image Manipulation | 32 | 4.58% |
| Face Generation | 30 | 4.29% |
| Face Recognition | 23 | 3.29% |
| Decoder | 18 | 2.58% |
| Image-to-Image Translation | 18 | 2.58% |
| Face Swapping | 17 | 2.43% |
| Super-Resolution | 15 | 2.15% |