Mixup is a data augmentation technique that generates a weighted combination of random image pairs from the training data. Given two images and their ground truth labels: $\left(x_{i}, y_{i}\right), \left(x_{j}, y_{j}\right)$, a synthetic training example $\left(\hat{x}, \hat{y}\right)$ is generated as:
$$ \hat{x} = \lambda x_{i} + \left(1 - \lambda\right) x_{j} $$ $$ \hat{y} = \lambda y_{i} + \left(1 - \lambda\right) y_{j} $$
where $\lambda \sim \text{Beta}\left(\alpha, \alpha\right)$, with $\alpha = 0.2$, is sampled independently for each augmented example.
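A minimal PyTorch sketch of this augmentation is shown below. The function name `mixup_batch`, the pairing of examples via a random permutation of the batch, and the NCHW input layout are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np
import torch

def mixup_batch(x, y, alpha=0.2):
    """Mix random pairs of examples within a batch.

    x: float tensor of shape (batch, C, H, W); y: one-hot labels of shape
    (batch, num_classes). Returns the mixed inputs and mixed targets.
    """
    batch_size = x.size(0)
    # lambda ~ Beta(alpha, alpha), sampled independently per augmented example
    lam = torch.from_numpy(np.random.beta(alpha, alpha, size=batch_size)).float()
    # pair each example with a randomly chosen partner from the same batch
    index = torch.randperm(batch_size)
    # broadcast lambda over the image dimensions and over the class dimension
    lam_x = lam.view(-1, 1, 1, 1)
    lam_y = lam.view(-1, 1)
    mixed_x = lam_x * x + (1.0 - lam_x) * x[index]
    mixed_y = lam_y * y + (1.0 - lam_y) * y[index]
    return mixed_x, mixed_y
```

The mixed targets can be used with any loss that accepts soft labels; equivalently, the loss can be computed as a $\lambda$-weighted sum of the losses against $y_i$ and $y_j$.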
Source: mixup: Beyond Empirical Risk Minimization
The tasks in which mixup is most commonly used, by number of papers:

| Task | Papers | Share |
|---|---|---|
| Image Classification | 62 | 7.28% |
| Domain Adaptation | 50 | 5.87% |
| Classification | 34 | 3.99% |
| Unsupervised Domain Adaptation | 27 | 3.17% |
| Semantic Segmentation | 24 | 2.82% |
| Object Detection | 20 | 2.35% |
| Text Classification | 16 | 1.88% |
| Domain Generalization | 15 | 1.76% |
| Graph Classification | 14 | 1.64% |