An Autoencoder is a bottleneck architecture that maps a high-dimensional input to a low-dimensional latent code (the encoder), and then reconstructs the input from this latent code (the decoder).
Image: Michael Massi
Source: Reducing the Dimensionality of Data with Neural Networks
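As an illustrative sketch (not the architecture from the paper above), the encode–decode–reconstruct loop can be written as a minimal linear autoencoder trained with gradient descent on the mean squared reconstruction error. The dimensions, learning rate, and toy data here are arbitrary assumptions for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed): 8-D inputs that actually live on a 2-D subspace,
# so a 2-D bottleneck can reconstruct them well.
n, d, k = 200, 8, 2
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d))

W_enc = rng.normal(scale=0.1, size=(d, k))  # encoder weights: input -> latent code
W_dec = rng.normal(scale=0.1, size=(k, d))  # decoder weights: latent code -> input

def reconstruct(X):
    z = X @ W_enc        # encoder: compress to the low-dimensional latent code
    return z @ W_dec     # decoder: reconstruct the input from the code

lr = 0.01
loss_before = np.mean((reconstruct(X) - X) ** 2)
for _ in range(500):
    Z = X @ W_enc
    err = Z @ W_dec - X                 # reconstruction error
    # Gradients of the mean squared error w.r.t. decoder and encoder weights
    gW_dec = Z.T @ err / n
    gW_enc = X.T @ (err @ W_dec.T) / n
    W_dec -= lr * gW_dec
    W_enc -= lr * gW_enc
loss_after = np.mean((reconstruct(X) - X) ** 2)
print(loss_before, loss_after)  # reconstruction loss drops during training
```

A practical autoencoder replaces these linear maps with deep nonlinear networks, but the objective, minimizing reconstruction error through a bottleneck, is the same.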
| Task | Papers | Share |
|---|---|---|
| Decoder | 43 | 6.22% |
| Anomaly Detection | 36 | 5.21% |
| Denoising | 28 | 4.05% |
| Self-Supervised Learning | 24 | 3.47% |
| Image Generation | 20 | 2.89% |
| Semantic Segmentation | 18 | 2.60% |
| Dimensionality Reduction | 18 | 2.60% |
| Disentanglement | 14 | 2.03% |
| Quantization | 14 | 2.03% |