COLA is a self-supervised pre-training approach for learning a general-purpose representation of audio. It is based on contrastive learning: it learns a representation which assigns high similarity to audio segments extracted from the same recording while assigning lower similarity to segments from different recordings.
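The segment-level contrastive objective described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name, the NumPy formulation, and the use of a bilinear similarity scored with batch-wise cross-entropy (each anchor's positive is a segment from the same recording; segments from the other recordings in the batch serve as negatives) are assumptions for illustration.

```python
import numpy as np

def cola_contrastive_loss(anchors, positives, W):
    """Batch contrastive loss sketch: anchors[i] and positives[i] are
    embeddings of two segments from the same recording; positives[j], j != i,
    act as negatives. Similarity is bilinear: sim[i, j] = a_i^T W p_j."""
    sim = anchors @ W @ positives.T                      # (batch, batch)
    sim = sim - sim.max(axis=1, keepdims=True)           # numerical stability
    # Row-wise log-softmax; the correct "class" for row i is column i.
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy usage with random "embeddings" (stand-ins for encoder outputs):
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))   # 4 anchor segments, 8-dim embeddings
p = rng.normal(size=(4, 8))   # matching positives, same recordings
loss = cola_contrastive_loss(a, p, np.eye(8))
```

Minimizing this loss pushes same-recording pairs (the diagonal of the similarity matrix) to score higher than cross-recording pairs, which is exactly the high/low similarity behavior described above.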
Source: Contrastive Learning of General-Purpose Audio Representations
Task | Papers | Share |
---|---|---|
Linguistic Acceptability | 3 | 6.38% |
Domain Generalization | 2 | 4.26% |
Language Modelling | 2 | 4.26% |
Sentence | 2 | 4.26% |
Autonomous Driving | 2 | 4.26% |
Semantic Segmentation | 2 | 4.26% |
Self-Supervised Learning | 2 | 4.26% |
Topological Data Analysis | 2 | 4.26% |
In-Context Learning | 1 | 2.13% |