no code implementations • 27 May 2024 • Dixuan Wang, Yanda Li, Junyuan Jiang, Zepeng Ding, Guochao Jiang, Jiaqing Liang, Deqing Yang
Our empirical results reveal that our ADT is highly effective at challenging the tokenization of leading LLMs, including GPT-4o, Llama-3, Qwen2.5-max and so on, thus degrading these LLMs' capabilities.
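For intuition about the kind of weakness this targets, the sketch below probes how a BPE tokenizer segments surface strings, showing where semantically atomic inputs get split into unintuitive pieces. It uses the open `tiktoken` library and illustrative probe strings; it is not the paper's ADT construction.

```python
# Sketch: inspecting how a BPE tokenizer segments tricky strings.
# The probe strings are illustrative, not drawn from the ADT dataset.
import tiktoken

enc = tiktoken.get_encoding("o200k_base")  # encoding family used by GPT-4o

for text in ["indivisible", "in divisible", "不可分割"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{text!r} -> {pieces}")
```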
no code implementations • 26 May 2024 • Ziqin Luo, Haixia Han, Haokun Zhao, Guochao Jiang, Chengyu Du, Tingyun Li, Jiaqing Liang, Deqing Yang, Yanghua Xiao
Existing Large Language Models (LLMs) generate text through unidirectional autoregressive decoding to respond to various user queries.
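As a reference point, here is a minimal sketch of the unidirectional autoregressive decoding this entry refers to: tokens are produced left to right, each conditioned only on the prefix generated so far. The GPT-2 checkpoint and greedy token selection are assumptions made for the demo, not the paper's setup.

```python
# Minimal greedy autoregressive decoding loop (left-to-right generation).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(5):                      # emit 5 tokens, one at a time
        logits = model(ids).logits          # scores for every prefix position
        next_id = logits[:, -1].argmax(-1)  # greedy pick from the last position only
        ids = torch.cat([ids, next_id[:, None]], dim=-1)
print(tok.decode(ids[0]))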
no code implementations • 8 May 2024 • Guochao Jiang, Zepeng Ding, Yuchen Shi, Deqing Yang
To obtain optimal point entities for prompting LLMs, we also propose a point entity selection method based on K-Means clustering.
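A hedged sketch of the clustering step described here: embed candidate entities, run K-Means, and keep the entity nearest each centroid as a representative "point entity". The random embeddings, toy entity names, and `select_point_entities` helper are hypothetical stand-ins, not the paper's pipeline.

```python
# Sketch: pick one representative entity per K-Means cluster.
import numpy as np
from sklearn.cluster import KMeans

def select_point_entities(entities, embeddings, k=3):
    """Cluster entity embeddings; return the entity nearest each centroid."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)
    picks = []
    for center in km.cluster_centers_:
        idx = int(np.linalg.norm(embeddings - center, axis=1).argmin())
        picks.append(entities[idx])
    return picks

rng = np.random.default_rng(0)
names = [f"entity_{i}" for i in range(50)]          # toy entity inventory
print(select_point_entities(names, rng.normal(size=(50, 16)), k=3))
```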
no code implementations • 14 Apr 2024 • Guochao Jiang, Ziqin Luo, Yuchen Shi, Dixuan Wang, Jiaqing Liang, Deqing Yang
In recent years, fine-tuned generative models have proven more powerful than previous tagging-based or span-based models on the named entity recognition (NER) task.
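To make the tagging-versus-generative contrast concrete, this sketch linearizes BIO-tagged supervision into a text target that a generative model could be fine-tuned to emit; the `text -> LABEL` output format is an illustrative choice, not the paper's exact scheme.

```python
# Sketch: convert tagging-style (BIO) NER supervision into a generative target.
tokens = ["Barack", "Obama", "visited", "Paris"]
bio    = ["B-PER",  "I-PER",  "O",      "B-LOC"]  # tagging-based target

def to_generative_target(tokens, bio):
    """Linearize gold entity spans into a text string for a generative model."""
    spans, cur = [], None
    for tok, tag in zip(tokens, bio):
        if tag.startswith("B-"):
            cur = [tok, tag[2:]]
            spans.append(cur)
        elif tag.startswith("I-") and cur:
            cur[0] += " " + tok
        else:
            cur = None
    return "; ".join(f"{text} -> {label}" for text, label in spans)

print(to_generative_target(tokens, bio))  # Barack Obama -> PER; Paris -> LOC
```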
no code implementations • 4 Apr 2024 • Yanda Li, Dixuan Wang, Jiaqing Liang, Guochao Jiang, Qianyu He, Yanghua Xiao, Deqing Yang
Large Language Models (LLMs) have demonstrated good performance on many reasoning tasks, but they still struggle with some complicated reasoning tasks, including logical reasoning.