16k
56 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
SMYRF: Efficient Attention using Asymmetric Clustering
We also show that SMYRF can be used interchangeably with dense attention before and after training.
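The core idea: assign queries and keys to balanced clusters and let each query attend only within its own cluster, replacing dense attention's quadratic cost with small per-cluster blocks. Below is a minimal single-head sketch in PyTorch, using a random-projection sort as a stand-in for the paper's asymmetric LSH clustering; the function name and bucketing scheme here are ours, not the authors'.

import torch
import torch.nn.functional as F

def clustered_attention(q, k, v, n_clusters=4):
    # q, k, v: (seq_len, dim); seq_len must be divisible by n_clusters.
    seq_len, dim = q.shape
    # Stand-in for SMYRF's asymmetric clustering: project onto a random
    # direction and sort, so similar queries/keys tend to share a bucket.
    r = torch.randn(dim)
    q_order = torch.argsort(q @ r)
    k_order = torch.argsort(k @ r)
    out = torch.zeros_like(q)
    bucket = seq_len // n_clusters
    for c in range(n_clusters):
        qi = q_order[c * bucket:(c + 1) * bucket]
        ki = k_order[c * bucket:(c + 1) * bucket]
        # Ordinary softmax attention, but only inside the small cluster.
        attn = F.softmax(q[qi] @ k[ki].T / dim ** 0.5, dim=-1)
        out[qi] = attn @ v[ki]
    return out

out = clustered_attention(torch.randn(64, 32), torch.randn(64, 32), torch.randn(64, 32))

Because each cluster runs standard softmax attention internally, a layer like this keeps the same interface as dense attention, which is what makes the paper's before/after-training swap possible.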
COUGH: A Challenge Dataset and Models for COVID-19 FAQ Retrieval
For evaluation, we introduce Query Bank and Relevance Set, where the former contains 1,236 human-paraphrased queries while the latter contains ~32 human-annotated FAQ items for each query.
BNLP: Natural language processing toolkit for Bengali language
BNLP is an open-source language processing toolkit for Bengali, providing tokenization, word embedding, POS tagging, and NER tagging facilities.
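A brief usage sketch of the tokenizer; the install name, import path, and class name (BasicTokenizer) follow the project's README but are assumptions that may vary across toolkit versions.

# pip install bnlp_toolkit   (assumed PyPI package name; verify before use)
from bnlp import BasicTokenizer

tokenizer = BasicTokenizer()
text = "আমি বাংলায় গান গাই।"  # "I sing in Bengali."
print(tokenizer.tokenize(text))  # rule-based token list for the sentence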
DeepDarts: Modeling Keypoints as Objects for Automatic Scorekeeping in Darts using a Single Camera
On the primary dataset, containing 15k images captured from a face-on view of the dartboard using a smartphone, DeepDarts predicted the total score correctly in 94.7% of the test images.
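Once dart tips and board calibration points are localized as keypoints, scoring reduces to standard dartboard geometry. The sketch below covers only that final step (our own, not the DeepDarts pipeline), assuming tip coordinates in millimetres relative to the board centre with y pointing up.

import math

# Sector base values, clockwise from the top of a regulation board.
SECTORS = [20, 1, 18, 4, 13, 6, 10, 15, 2, 17, 3, 19, 7, 16, 8, 11, 14, 9, 12, 5]

def dart_score(x, y):
    r = math.hypot(x, y)
    if r <= 6.35:
        return 50                     # inner bull
    if r <= 15.9:
        return 25                     # outer bull
    if r > 170.0:
        return 0                      # off the board
    # Angle measured clockwise from the top (the centre of the 20 sector).
    theta = math.degrees(math.atan2(x, y)) % 360.0
    base = SECTORS[int(((theta + 9.0) % 360.0) // 18.0)]
    if 99.0 <= r <= 107.0:
        return 3 * base               # triple ring
    if 162.0 <= r <= 170.0:
        return 2 * base               # double ring
    return base

print(dart_score(0.0, 103.0))  # 60: a dart in the triple-20 band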
A Multi-Task Network for Joint Specular Highlight Detection and Removal
Specular highlight detection and removal are fundamental and challenging tasks.
Complex Temporal Question Answering on Knowledge Graphs
This work presents EXAQT, the first end-to-end system for answering complex temporal questions that have multiple entities and predicates, and associated temporal conditions.
MapReader: A Computer Vision Pipeline for the Semantic Exploration of Maps at Scale
We present MapReader, a free, open-source software library written in Python for analyzing large map collections (scanned or born-digital).
There is a Time and Place for Reasoning Beyond the Image
For example, in Figure 1, we can identify the news articles related to the picture through segment-wise understanding of the signs, the buildings, the crowds, and more.
Hierarchical Nearest Neighbor Graph Embedding for Efficient Dimensionality Reduction
Dimensionality reduction is crucial both for visualizing and for preprocessing high-dimensional data for machine learning.
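For context, the nearest-neighbor-graph family of reducers is easy to try. The sketch below uses scikit-learn's SpectralEmbedding, which likewise builds and embeds a kNN graph, as a generic stand-in; it is not the paper's h-NNE algorithm.

from sklearn.datasets import load_digits
from sklearn.manifold import SpectralEmbedding

X, _ = load_digits(return_X_y=True)              # 1797 samples, 64 dimensions
reducer = SpectralEmbedding(n_components=2, n_neighbors=10)
X_2d = reducer.fit_transform(X)                  # (1797, 2), ready to scatter-plot
print(X_2d.shape)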
Investigating Efficiently Extending Transformers for Long Input Summarization
While large pretrained Transformer models have proven highly capable at tackling natural language tasks, handling long sequence inputs continues to be a significant challenge.