Vietnamese Language Models
3 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
PhoBERT: Pre-trained language models for Vietnamese
We present PhoBERT with two versions, PhoBERT-base and PhoBERT-large, the first public large-scale monolingual language models pre-trained for Vietnamese.
VLUE: A Multi-Task Benchmark for Evaluating Vision-Language Models
We release the VLUE benchmark to promote research on building vision-language models that generalize well to more diverse images and concepts unseen during pre-training, and are practical in terms of efficiency-performance trade-off.
ViSoBERT: A Pre-Trained Language Model for Vietnamese Social Media Text Processing
As resource-rich languages, English and Chinese have seen strong development of transformer-based language models for natural language processing tasks.