2 code implementations • 20 Feb 2024 • Qianqian Xie, Weiguang Han, Zhengyu Chen, Ruoyu Xiang, Xiao Zhang, Yueru He, Mengxi Xiao, Dong Li, Yongfu Dai, Duanyu Feng, Yijing Xu, Haoqiang Kang, Ziyan Kuang, Chenhan Yuan, Kailai Yang, Zheheng Luo, Tianlin Zhang, Zhiwei Liu, Guojun Xiong, Zhiyang Deng, Yuechen Jiang, Zhiyuan Yao, Haohang Li, Yangyang Yu, Gang Hu, Jiajia Huang, Xiao-Yang Liu, Alejandro Lopez-Lira, Benyou Wang, Yanzhao Lai, Hao Wang, Min Peng, Sophia Ananiadou, Jimin Huang
This, along with the rapid development of LLMs, highlights the urgent need for a systematic financial evaluation benchmark for LLMs.
no code implementations • 16 Feb 2024 • Haoqiang Kang, Terra Blevins, Luke Zettlemoyer
While many automatic hallucination detection techniques have been proposed for English texts, their effectiveness in multilingual contexts remains unexplored.
no code implementations • 27 Nov 2023 • Haoqiang Kang, Xiao-Yang Liu
In this paper, we provide an empirical examination of LLMs' hallucination behaviors in financial tasks.
1 code implementation • 15 Nov 2023 • Haoqiang Kang, Juntong Ni, Huaxiu Yao
Large Language Models (LLMs) have demonstrated remarkable proficiency in generating fluent text.
no code implementations • 26 Apr 2023 • Haoqiang Kang, Terra Blevins, Luke Zettlemoyer
To better understand this contrast, we present a new study investigating how well PLMs capture cross-lingual word sense with Contextual Word-Level Translation (C-WLT), an extension of word-level translation that prompts the model to translate a given word in context.
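The exact C-WLT prompt template is not given in this snippet; as a minimal illustrative sketch (the function name and template wording are assumptions, not the paper's actual format), a contextual word-level translation prompt could be built like this:

```python
def c_wlt_prompt(word: str, context: str, target_lang: str) -> str:
    """Build a hypothetical C-WLT-style prompt: ask a model to translate
    `word` as it is used in `context` into `target_lang`.
    (Template wording is illustrative, not the paper's actual prompt.)"""
    return (
        f'In the sentence "{context}", '
        f'the word "{word}" translates into {target_lang} as:'
    )

# Example: the context disambiguates the sense of "bank".
prompt = c_wlt_prompt("bank", "She sat on the bank of the river.", "Spanish")
print(prompt)
```

The key idea is that supplying the full sentence lets the model pick the translation matching the in-context word sense, rather than the most frequent sense of the isolated word.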