ESACL (Enhanced Seq2Seq Autoencoder via Contrastive Learning) is a denoising sequence-to-sequence (seq2seq) autoencoder for abstractive text summarization. The model adopts a standard Transformer architecture with a multi-layer bidirectional encoder and an autoregressive decoder. To enhance its denoising ability, self-supervised contrastive learning is incorporated along with several sentence-level document augmentations.
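The contrastive objective pairs each document with an augmented view of itself and pulls their encoder representations together while pushing apart representations of other documents in the batch. As a sketch only (the NT-Xent formulation and the sentence-shuffle augmentation below are common choices, not necessarily ESACL's exact loss or augmentation set):

```python
import numpy as np

def shuffle_sentences(sentences, rng):
    """Sentence-level document augmentation: randomly permute sentence order.
    (One plausible augmentation; the paper's exact set may differ.)"""
    return [sentences[i] for i in rng.permutation(len(sentences))]

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over a batch of representation pairs.

    z1, z2: (batch, dim) encoder representations of each document and its
    augmented view. Positive pairs are (z1[i], z2[i]); every other row in
    the concatenated 2*batch matrix serves as a negative.
    """
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize -> cosine sim
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                     # mask self-similarity
    n, batch = sim.shape[0], z1.shape[0]
    # index of each row's positive partner: i <-> i + batch
    pos = np.concatenate([np.arange(batch, n), np.arange(0, batch)])
    logits = sim - sim.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), pos].mean()

# Usage: identical views (perfectly aligned encoder) should score a much
# lower loss than unrelated random representations.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(4, 8))
print(nt_xent_loss(z1, z1.copy()))            # aligned pairs -> low loss
print(nt_xent_loss(z1, rng.normal(size=(4, 8))))  # random pairs -> higher loss
```

In training, `z1` and `z2` would come from encoding the original and augmented documents with the shared Transformer encoder, and this loss would be added to the usual seq2seq reconstruction objective.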
Source: Enhanced Seq2Seq Autoencoder via Contrastive Learning for Abstractive Text Summarization
| Task | Papers | Share |
|---|---|---|
| Continual Learning | 1 | 16.67% |
| Abstractive Text Summarization | 1 | 16.67% |
| Decoder | 1 | 16.67% |
| Denoising | 1 | 16.67% |
| Sentence | 1 | 16.67% |
| Text Summarization | 1 | 16.67% |