Linear Warmup With Linear Decay is a learning rate schedule in which the learning rate is increased linearly for $n$ updates and then decayed linearly afterwards.
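The schedule can be sketched as a multiplicative factor applied to a base learning rate. This is a minimal illustration (the names `warmup_steps` and `total_steps` are assumptions, not from the source); a function of this shape can be passed to, e.g., PyTorch's `LambdaLR`:

```python
def linear_warmup_linear_decay(step, warmup_steps, total_steps):
    """Return a learning-rate multiplier in [0, 1] for the given step.

    Rises linearly from 0 to 1 over `warmup_steps`, then decays
    linearly back to 0 by `total_steps`.
    """
    if step < warmup_steps:
        # Warmup phase: scale up proportionally to the step count.
        return step / max(1, warmup_steps)
    # Decay phase: scale down linearly, clamped at 0 after total_steps.
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```

For example, with `warmup_steps=10` and `total_steps=100`, the multiplier is 0.5 at step 5, peaks at 1.0 at step 10, and falls back to 0.5 at step 55.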
Task | Papers | Share
---|---|---
Retrieval | 152 | 15.82%
Language Modelling | 88 | 9.16%
Question Answering | 69 | 7.18%
Large Language Model | 42 | 4.37%
Sentence | 29 | 3.02%
Information Retrieval | 26 | 2.71%
Text Classification | 25 | 2.60%
Text Generation | 24 | 2.50%
Sentiment Analysis | 20 | 2.08%