Word Sense Disambiguation
144 papers with code • 15 benchmarks • 15 datasets
The task of Word Sense Disambiguation (WSD) is to associate a word in context with its most suitable entry in a pre-defined sense inventory. The de facto sense inventory for English WSD is WordNet. For example, given the word “mouse” and the following sentence:
“A mouse consists of an object held in one's hand, with one or more buttons.”
we would assign “mouse” its electronic-device sense (the 4th sense in the WordNet sense inventory).
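The classic baseline for this kind of gloss matching is the Lesk algorithm: choose the sense whose dictionary gloss shares the most words with the context. A minimal sketch, using a hypothetical two-sense toy inventory rather than the full WordNet database:

```python
import re

# Hypothetical mini sense inventory; real WSD systems query WordNet,
# where the device sense of "mouse" is the 4th noun entry.
TOY_INVENTORY = {
    "mouse": {
        "mouse.n.01": "any of numerous small rodents with pointed snout and long tail",
        "mouse.n.04": "a hand-operated electronic device with buttons that controls a cursor",
    }
}

def simplified_lesk(word, context):
    """Return the sense whose gloss has the largest word overlap with the context."""
    context_words = set(re.findall(r"\w+", context.lower()))
    best_sense, best_overlap = None, -1
    for sense, gloss in TOY_INVENTORY[word].items():
        overlap = len(context_words & set(re.findall(r"\w+", gloss.lower())))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

sentence = "A mouse consists of an object held in one's hand, with one or more buttons."
print(simplified_lesk("mouse", sentence))  # mouse.n.04 (the device sense wins)
```

Here the device gloss overlaps the context on “a”, “hand”, “with”, and “buttons”, while the rodent gloss overlaps only on function words, so the electronic-device sense is selected.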
Libraries
Use these libraries to find Word Sense Disambiguation models and implementations.
Most implemented papers
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
Language Models are Few-Shot Learners
By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do.
DeBERTa: Decoding-enhanced BERT with Disentangled Attention
Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks.
FlauBERT: Unsupervised Language Model Pre-training for French
Language models have become a key step to achieve state-of-the art results in many different Natural Language Processing (NLP) tasks.
Enhancing Interpretable Clauses Semantically using Pretrained Word Representation
The approach significantly enhances the performance and interpretability of the Tsetlin Machine (TM).
An Incremental Parser for Abstract Meaning Representation
We describe a transition-based parser for AMR that parses sentences left-to-right, in linear time.
GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge
Word Sense Disambiguation (WSD) aims to find the exact sense of an ambiguous word in a particular context.
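GlossBERT recasts WSD as binary classification over context–gloss pairs: each candidate sense contributes one pair, a classifier scores every pair, and the top-scoring sense wins. A hedged sketch of the pair construction, with illustrative stand-in glosses rather than real WordNet entries:

```python
# Hypothetical glosses standing in for WordNet entries.
GLOSSES = {
    "mouse.n.01": "any of numerous small rodents",
    "mouse.n.04": "a hand-operated electronic device that controls a cursor",
}

def build_context_gloss_pairs(context, target, glosses):
    # Prepend the target word to each gloss (GlossBERT's weak-supervision
    # signal) and pair it with the full context sentence. Each pair would
    # then be fed to a BERT sentence-pair classifier.
    return [(context, f"{target}: {gloss}", sense)
            for sense, gloss in glosses.items()]

pairs = build_context_gloss_pairs(
    "A mouse consists of an object held in one's hand, with one or more buttons.",
    "mouse", GLOSSES)
for context, gloss_input, sense in pairs:
    print(sense, "->", gloss_input)
```

In the full model, the pair whose classifier score is highest determines the predicted sense; the construction above only shows the input format.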
Hungry Hungry Hippos: Towards Language Modeling with State Space Models
First, we use synthetic language modeling tasks to understand the gap between SSMs and attention.
Using Distributed Representations to Disambiguate Biomedical and Clinical Concepts
In this paper, we report a knowledge-based method for Word Sense Disambiguation in the domains of biomedical and clinical text.
Sense Vocabulary Compression through the Semantic Knowledge of WordNet for Neural Word Sense Disambiguation
In this article, we tackle the limited quantity of manually sense-annotated corpora for word sense disambiguation. By exploiting semantic relationships between senses such as synonymy, hypernymy and hyponymy, we compress the sense vocabulary of Princeton WordNet, reducing the number of distinct sense tags that must be observed to disambiguate all words of the lexical database.