GPT-3

Introduced by Brown et al. in Language Models are Few-Shot Learners

GPT-3 is an autoregressive transformer language model with 175 billion parameters. It uses the same model and architecture as GPT-2, including the modified initialization, pre-normalization, and reversible tokenization, with the exception that GPT-3 uses alternating dense and locally banded sparse attention patterns in the layers of the transformer, similar to the Sparse Transformer.

Source: Language Models are Few-Shot Learners
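The alternating attention pattern can be pictured as a per-layer choice of attention mask. Below is a minimal sketch, assuming a simple even/odd layer alternation and an illustrative window size; the exact band widths and layer assignment follow the Sparse Transformer and are not specified in this excerpt.

```python
import numpy as np

def dense_mask(seq_len: int) -> np.ndarray:
    # Standard causal mask: each position attends to itself and all earlier positions.
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

def banded_mask(seq_len: int, window: int) -> np.ndarray:
    # Locally banded causal mask: each position attends only to the
    # previous `window` positions (including itself).
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (i - j < window)

def layer_mask(layer_idx: int, seq_len: int, window: int = 4) -> np.ndarray:
    # Hypothetical alternation scheme: dense attention on even layers,
    # locally banded sparse attention on odd layers.
    if layer_idx % 2 == 0:
        return dense_mask(seq_len)
    return banded_mask(seq_len, window)

# Example: for a 6-token sequence with window 3, the banded mask on layer 1
# lets position 5 attend only to positions 3, 4, and 5.
print(layer_mask(1, 6, window=3).astype(int))
```

Banded masks reduce the per-layer attention cost from quadratic toward linear in sequence length, while the interleaved dense layers preserve global information flow.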

Tasks


Task                    Papers   Share
Language Modelling          76   9.99%
Question Answering          50   6.57%
Large Language Model        47   6.18%
In-Context Learning         33   4.34%
Retrieval                   32   4.20%
Code Generation             28   3.68%
Prompt Engineering          27   3.55%
Sentence                    23   3.02%
Text Generation             19   2.50%