Spectral Normalization is a normalization technique for generative adversarial networks that stabilizes training of the discriminator. It has the convenient property that the Lipschitz constant is the only hyper-parameter to be tuned.
It controls the Lipschitz constant of the discriminator $f$ by constraining the spectral norm of each layer $g : \textbf{h}_{in} \rightarrow \textbf{h}_{out}$. The Lipschitz norm $\Vert{g}\Vert_{\text{Lip}}$ is equal to $\sup_{\textbf{h}}\sigma\left(\nabla{g}\left(\textbf{h}\right)\right)$, where $\sigma\left(A\right)$ is the spectral norm of the matrix $A$ (the $L_{2}$ matrix norm of $A$):

$$ \sigma\left(A\right) = \max_{\textbf{h}:\textbf{h}\neq{0}}\frac{\Vert{A\textbf{h}}\Vert_{2}}{\Vert\textbf{h}\Vert_{2}} = \max_{\Vert\textbf{h}\Vert_{2}\leq{1}}{\Vert{A\textbf{h}}\Vert_{2}} $$
which is equivalent to the largest singular value of $A$. Therefore, for a linear layer $g\left(\textbf{h}\right) = W\textbf{h}$ the norm is given by $\Vert{g}\Vert_{\text{Lip}} = \sup_{\textbf{h}}\sigma\left(\nabla{g}\left(\textbf{h}\right)\right) = \sup_{\textbf{h}}\sigma\left(W\right) = \sigma\left(W\right)$. Spectral normalization normalizes the spectral norm of the weight matrix $W$ so that it satisfies the Lipschitz constraint $\sigma\left(W\right) = 1$:
$$ \bar{W}_{\text{SN}}\left(W\right) = W / \sigma\left(W\right) $$
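The normalization above only requires the largest singular value $\sigma(W)$, which can be estimated cheaply by power iteration rather than a full SVD (the paper amortizes a single power-iteration step per training update). A minimal NumPy sketch, with hypothetical function names:

```python
import numpy as np

def spectral_norm(W, n_iters=500, eps=1e-12):
    """Estimate sigma(W), the largest singular value of W, by power iteration."""
    rng = np.random.default_rng(0)
    # u approximates the dominant left singular vector of W
    u = rng.standard_normal(W.shape[0])
    u /= np.linalg.norm(u) + eps
    for _ in range(n_iters):
        # alternate W^T u -> v and W v -> u, renormalizing each time
        v = W.T @ u
        v /= np.linalg.norm(v) + eps
        u = W @ v
        u /= np.linalg.norm(u) + eps
    # Rayleigh-quotient-style estimate: u^T W v -> sigma(W)
    return float(u @ W @ v)

def spectrally_normalize(W, n_iters=500):
    """Return W_SN = W / sigma(W), so that sigma(W_SN) is approximately 1."""
    return W / spectral_norm(W, n_iters=n_iters)
```

In practice, deep learning frameworks ship this as a layer wrapper, e.g. `torch.nn.utils.spectral_norm` in PyTorch, which maintains the power-iteration vectors across training steps instead of re-running the iteration from scratch.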
Source: Spectral Normalization for Generative Adversarial Networks

| Task | Papers | Share |
|---|---|---|
| Image Generation | 56 | 17.45% |
| Conditional Image Generation | 20 | 6.23% |
| Translation | 12 | 3.74% |
| Reinforcement Learning (RL) | 11 | 3.43% |
| Image-to-Image Translation | 10 | 3.12% |
| Super-Resolution | 9 | 2.80% |
| Multi-agent Reinforcement Learning | 9 | 2.80% |
| Image Classification | 7 | 2.18% |
| Decision Making | 5 | 1.56% |