2 code implementations • 16 Jan 2024 • Alisa Liu, Xiaochuang Han, Yizhong Wang, Yulia Tsvetkov, Yejin Choi, Noah A. Smith
Despite the general capabilities of large pretrained language models, they consistently benefit from further adaptation to better achieve desired behaviors.
no code implementations • 16 Nov 2023 • YuHan Liu, Shangbin Feng, Xiaochuang Han, Vidhisha Balachandran, Chan Young Park, Sachin Kumar, Yulia Tsvetkov
In this work, we take a first step towards designing summarization systems that are faithful to the author's intent, not only to the semantic content of the article.
no code implementations • 8 Oct 2023 • Xiao Pu, Jingyu Zhang, Xiaochuang Han, Yulia Tsvetkov, Tianxing He
The rampant proliferation of large language models, fluent enough to generate text indistinguishable from human-written language, gives unprecedented importance to the detection of machine-generated text.
1 code implementation • 8 Aug 2023 • Xiaochuang Han
In this note, we explore inference-time alignment through in-context learning.
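The idea of inference-time alignment via in-context learning can be sketched minimally: instead of fine-tuning, a small set of curated (instruction, response) demonstrations is prepended to the base model's prompt at inference time. The demonstration pairs and chat-style template below are illustrative placeholders, not the ones used in the paper.

```python
# A minimal sketch of inference-time alignment through in-context learning:
# curated demonstrations are prepended to the query so a base (unaligned)
# LM imitates the demonstrated response style. The specific demonstrations
# and "User:/Assistant:" template are assumptions for illustration.

DEMONSTRATIONS = [
    ("What is the capital of France?",
     "The capital of France is Paris."),
    ("Give one tip for writing clear code.",
     "Use descriptive names so the code documents itself."),
]

def build_icl_prompt(query: str) -> str:
    """Prepend alignment demonstrations to the user query."""
    parts = []
    for instruction, response in DEMONSTRATIONS:
        parts.append(f"User: {instruction}\nAssistant: {response}")
    # The base LM continues from the trailing "Assistant:" turn.
    parts.append(f"User: {query}\nAssistant:")
    return "\n\n".join(parts)

prompt = build_icl_prompt("How do I stay focused while studying?")
```

The resulting string would be fed to a base model's standard decoding loop; no parameters are updated.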
no code implementations • 26 Jun 2023 • Xiaochuang Han, Daniel Simig, Todor Mihaylov, Yulia Tsvetkov, Asli Celikyilmaz, Tianlu Wang
We observe that continued pretraining on this small subset significantly improves the model's in-context learning (ICL) ability, by up to 18%.
no code implementations • 24 May 2023 • Weijia Shi, Xiaochuang Han, Mike Lewis, Yulia Tsvetkov, Luke Zettlemoyer, Scott Wen-tau Yih
Language models (LMs) often struggle to pay enough attention to the input context, and generate text that is unfaithful or contains hallucinations.
no code implementations • 24 May 2023 • Xiaochuang Han, Sachin Kumar, Yulia Tsvetkov, Marjan Ghazvininejad
Diffusion-based language models are emerging as a promising alternative to autoregressive LMs: they approach the competence of autoregressive LMs while offering nuanced controllability at inference time.
2 code implementations • NeurIPS 2023 • Heng Wang, Shangbin Feng, Tianxing He, Zhaoxuan Tan, Xiaochuang Han, Yulia Tsvetkov
We then propose Build-a-Graph Prompting and Algorithmic Prompting, two instruction-based approaches to enhance LLMs in solving natural language graph problems.
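The shape of a Build-a-Graph-style prompt can be sketched as follows: before posing the question, the instruction tells the model to explicitly construct the graph from the edge list. The exact instruction wording below is an illustrative approximation, not the paper's template.

```python
# Hedged sketch of instruction-based graph prompting in the spirit of
# Build-a-Graph Prompting: the model is asked to first build the graph
# from the textual edge list, then answer the reasoning question.
# The instruction phrasing here is assumed for illustration.

def build_a_graph_prompt(edges, question):
    """Format an edge list and question into a graph-reasoning prompt."""
    edge_text = ", ".join(f"({u}, {v})" for u, v in edges)
    return (
        f"In an undirected graph, the edges are: {edge_text}.\n"
        "Let's construct the graph with the nodes and edges first.\n"
        f"Question: {question}"
    )

prompt = build_a_graph_prompt(
    [(0, 1), (1, 2)],
    "Is there a path from node 0 to node 2?",
)
```

The intermediate "construct the graph" step plays the same role as a chain-of-thought scaffold: it encourages the model to make the graph structure explicit before reasoning over it.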
no code implementations • 20 Dec 2022 • Weijia Shi, Xiaochuang Han, Hila Gonen, Ari Holtzman, Yulia Tsvetkov, Luke Zettlemoyer
Large language models can perform new tasks in a zero-shot fashion, given natural language prompts that specify the desired behavior.
1 code implementation • 31 Oct 2022 • Xiaochuang Han, Sachin Kumar, Yulia Tsvetkov
Despite the growing success of diffusion models in continuous-valued domains (e.g., images), similar efforts for discrete domains such as text have yet to match the performance of autoregressive language models.
no code implementations • 25 May 2022 • Xiaochuang Han, Yulia Tsvetkov
Large pretrained language models have been performing increasingly well in a variety of downstream tasks via prompting.
1 code implementation • Findings (EMNLP) 2021 • Xiaochuang Han, Yulia Tsvetkov
Among the most critical limitations of deep learning NLP models are their lack of interpretability, and their reliance on spurious correlations.
1 code implementation • EMNLP 2020 • Xiaochuang Han, Yulia Tsvetkov
Modern toxic speech detectors often fail to recognize disguised offensive language, such as adversarial attacks that deliberately avoid known toxic lexicons, or manifestations of implicit bias.
1 code implementation • ACL 2020 • Xiaochuang Han, Byron C. Wallace, Yulia Tsvetkov
In this work, we investigate the use of influence functions for NLP, providing an alternative approach to interpreting neural text classifiers.
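The classic influence-function estimate underlying this line of work (in the Koh & Liang 2017 style) scores how much a training example affects the loss on a test example: roughly, influence ≈ −∇L(z_test)ᵀ H⁻¹ ∇L(z_train), with H the Hessian of the training loss. The sketch below computes this exactly for a tiny linear least-squares model on synthetic data; it is a toy stand-in for the neural text classifiers studied in the paper.

```python
import numpy as np

# Toy influence-function computation on a linear least-squares model.
# All quantities (gradients, Hessian) are exact here; for deep models
# the Hessian-inverse-vector product must be approximated. The data is
# synthetic and illustrative only.

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + 0.1 * rng.normal(size=20)  # noisy targets

# Fit theta by ordinary least squares on loss (1/n) * sum (x.theta - y)^2.
theta, *_ = np.linalg.lstsq(X, y, rcond=None)

n = len(X)
H = (2.0 / n) * X.T @ X  # Hessian of the mean squared loss

def grad_loss(x, target):
    """Gradient of the squared loss at one example."""
    return 2.0 * (x @ theta - target) * x

# Influence of each training example on the loss at one test point.
x_test, y_test = X[0], y[0]
influences = np.array([
    -grad_loss(x_test, y_test) @ np.linalg.solve(H, grad_loss(X[i], y[i]))
    for i in range(n)
])
```

A large positive influence marks a training example whose removal would increase the test loss, i.e., a "helpful" example for that prediction; large negative values mark harmful ones.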
1 code implementation • NAACL 2019 • Xiaochuang Han, Eunsol Choi, Chenhao Tan
Understanding the dynamics of international politics is important yet challenging for civilians.
1 code implementation • IJCNLP 2019 • Xiaochuang Han, Jacob Eisenstein
To address this scenario, we propose domain-adaptive fine-tuning, in which the contextualized embeddings are adapted by masked language modeling on text from the target domain.
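The core of the masked-language-modeling objective used here can be sketched in a few lines: tokens drawn from target-domain text are randomly masked, and the pretrained encoder is further trained to recover them. The 15% rate and `[MASK]` token follow BERT-style convention; the whitespace tokenizer and example sentence are placeholders for illustration.

```python
import random

# Minimal sketch of MLM-style masking for domain-adaptive fine-tuning:
# mask a random subset of target-domain tokens and keep the originals as
# labels. Positions with a None label are ignored by the training loss.
# Whitespace tokenization is a stand-in for a real subword tokenizer.

def mask_tokens(tokens, mask_prob=0.15, seed=1):
    """Return (masked_tokens, labels); labels is None where not masked."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append("[MASK]")
            labels.append(tok)     # the model is trained to predict this
        else:
            masked.append(tok)
            labels.append(None)    # ignored by the loss
    return masked, labels

# Example target-domain (here, legal-register) sentence.
tokens = "the defendant filed a motion to dismiss the complaint".split()
masked, labels = mask_tokens(tokens)
```

In practice the masked batches would be fed through the pretrained encoder with a cross-entropy loss on the masked positions only, continuing training on target-domain text before (or instead of) task-specific fine-tuning.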
no code implementations • CL 2018 • Scott F. Kiesling, Umashanthi Pavalanathan, Jim Fitzpatrick, Xiaochuang Han, Jacob Eisenstein
Theories of interactional stancetaking have been put forward as holistic accounts, but until now, these theories have been applied only through detailed qualitative analysis of (portions of) a few individual conversations.
no code implementations • 18 Sep 2018 • Umashanthi Pavalanathan, Xiaochuang Han, Jacob Eisenstein
Do neutral point of view (NPOV) corrections encourage editors to adopt this style?