A Sparse Autoencoder is an autoencoder that uses a sparsity penalty, rather than a reduced layer width, to create an information bottleneck. Specifically, the loss function includes a term that penalizes the activations within a hidden layer. The sparsity constraint can be imposed with L1 regularization on the activations, or with a KL divergence that pushes each hidden unit's average activation toward a target sparsity level $p$.
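The two penalties above can be sketched in a few lines. Below is a minimal NumPy illustration (the network shapes, weights, and function names are made up for the example): hidden activations are computed with a sigmoid so they lie in $(0, 1)$, then the L1 penalty sums their magnitudes, while the KL penalty compares each unit's batch-average activation $\hat{p}_j$ to the target $p$ via the KL divergence between the two Bernoulli distributions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def kl_sparsity_penalty(activations, p=0.05, eps=1e-8):
    """KL divergence between Bernoulli(p) and Bernoulli(p_hat),
    summed over hidden units. p_hat is each unit's average
    activation over the batch."""
    p_hat = np.clip(activations.mean(axis=0), eps, 1 - eps)
    kl = p * np.log(p / p_hat) + (1 - p) * np.log((1 - p) / (1 - p_hat))
    return kl.sum()

def l1_sparsity_penalty(activations):
    """L1 norm of the activations, averaged over the batch."""
    return np.abs(activations).sum(axis=1).mean()

# Illustrative forward pass through a single encoder layer.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 64))                  # batch of 32 inputs
W_enc = rng.normal(scale=0.1, size=(64, 16))   # encoder weights
b_enc = np.zeros(16)
H = sigmoid(X @ W_enc + b_enc)                 # hidden activations in (0, 1)

loss_kl = kl_sparsity_penalty(H, p=0.05)
loss_l1 = l1_sparsity_penalty(H)
```

Either penalty is added to the reconstruction loss with a weighting coefficient; the KL form drives average activations toward $p$, while the L1 form shrinks individual activations toward zero.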
Image: Jeff Jordan. Read his blog post for a detailed summary of autoencoders.
| Task | Papers | Share |
|---|---|---|
| General Classification | 4 | 7.41% |
| Classification | 3 | 5.56% |
| Denoising | 3 | 5.56% |
| Dictionary Learning | 2 | 3.70% |
| EEG | 2 | 3.70% |
| Electroencephalogram (EEG) | 2 | 3.70% |
| Dimensionality Reduction | 2 | 3.70% |
| Clustering | 2 | 3.70% |
| Small Data Image Classification | 2 | 3.70% |