Large language models (LLMs), both proprietary and open-source, have demonstrated remarkable capabilities across a wide range of downstream tasks.
Dense retrievers are widely used in information retrieval and have also been successfully extended to other knowledge-intensive areas involving language models, e.g., Retrieval-Augmented Generation (RAG) systems.
Modeling real-world problems with partial differential equations (PDEs) is a prominent topic in scientific machine learning.
Retrieval-Augmented Generation (RAG) enhances the abilities of Large Language Models (LLMs) by retrieving relevant documents into the LLM context, enabling more accurate and relevant responses.
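The RAG pattern described above can be sketched as: score documents against the query, take the top matches, and prepend them to the prompt. The toy corpus, bag-of-words scoring, and prompt format below are illustrative assumptions standing in for a real retriever and LLM, not any specific system's implementation.

```python
# Minimal RAG sketch: retrieve documents similar to the query and place
# them in the LLM's context window before the question.
from collections import Counter
import math

# Hypothetical toy corpus; a real system would use a large document store.
CORPUS = [
    "RAG retrieves documents to ground LLM responses.",
    "Dense retrievers embed queries and documents into one vector space.",
    "PDEs model physical systems in scientific machine learning.",
]

def bow(text):
    # Bag-of-words term frequencies: a toy stand-in for a dense embedding.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    # Rank corpus documents by similarity to the query; keep the top k.
    q = bow(query)
    ranked = sorted(CORPUS, key=lambda d: cosine(q, bow(d)), reverse=True)
    return ranked[:k]

def build_prompt(query):
    # Retrieved passages go into the context ahead of the user question;
    # the resulting string would be sent to the LLM.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

In practice the bag-of-words scorer would be replaced by a dense retriever over learned embeddings, but the control flow (retrieve, then generate with retrieved context) is the same.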
We explore generalization across Meta MMO by learning to play several minigames with a single set of weights.
We adapt three types of existing CL methodologies (replay-based, regularization-based, and parameter-isolation-based methods) to generative tasks, and introduce comprehensive benchmarks for CLoG that feature great diversity and broad task coverage.
In this paper, we study how open-source large language models (LLMs) can be effectively deployed for improving query rewriting in conversational search, especially for ambiguous queries.
In this paper, we focus on the notion of integrally private DNNs for detecting concept drift.
In this paper, we propose a Masked Video Modeling (MVM)-powered compression framework that specifically preserves video semantics by jointly mining and compressing them in a self-supervised manner.
However, here we find that this strategy does not work for DSR, where even low-magnitude parameters can contribute considerably to the system dynamics.