no code implementations • 30 May 2024 • Jia Li, Ge Li, YunFei Zhao, Yongmin Li, Huanyu Liu, Hao Zhu, Lecheng Wang, Kaibo Liu, Zheng Fang, Lanshen Wang, Jiazheng Ding, Xuanming Zhang, Yuqi Zhu, Yihong Dong, Zhi Jin, Binhua Li, Fei Huang, Yongbin Li
Our experiments reveal these LLMs' coding abilities in real-world code repositories.
no code implementations • 9 May 2024 • Yikun Ma, Dandan Zhan, Zhi Jin
Notably, guided only by a text prompt, FastScene can generate a 3D scene within a mere 15 minutes, which is at least one hour faster than state-of-the-art methods, making it a paradigm for user-friendly scene generation.
1 code implementation • 26 Apr 2024 • Zhengwei Tao, Zhi Jin, Yifan Zhang, Xiancai Chen, Xiaoying Bai, Yue Fang, Haiyan Zhao, Jia Li, Chongyang Tao
Based on these findings, we introduce two methods to guide the LLMs to utilize the event schema knowledge.
1 code implementation • 25 Apr 2024 • Zongyao He, Zhi Jin
To tackle this problem, we propose a novel Latent Modulated Function (LMF), which decouples the HR-HD decoding process into shared latent decoding in LR-HD space and independent rendering in HR Low-Dimensional (LD) space, thereby realizing the first computationally optimal paradigm of continuous image representation.
1 code implementation • 23 Apr 2024 • Zhen Yang, Fang Liu, Zhongxing Yu, Jacky Wai Keung, Jia Li, Shuo Liu, Yifan Hong, Xiaoxue Ma, Zhi Jin, Ge Li
This paper investigates diverse LLMs and learning-based transpilers for automated code translation tasks, finding that although certain LLMs have outperformed current transpilers, they still have accuracy issues: most failures are induced by a lack of comprehension of the source programs, by missing clear instructions on I/O types in translation, and by ignoring discrepancies between source and target programs.
3 code implementations • 22 Apr 2024 • Xiaoning Liu, Zongwei Wu, Ao Li, Florin-Alexandru Vasluianu, Yulun Zhang, Shuhang Gu, Le Zhang, Ce Zhu, Radu Timofte, Zhi Jin, Hongjun Wu, Chenxi Wang, Haitao Ling, Yuanhao Cai, Hao Bian, Yuxin Zheng, Jing Lin, Alan Yuille, Ben Shao, Jin Guo, Tianli Liu, Mohao Wu, Yixu Feng, Shuo Hou, Haotian Lin, Yu Zhu, Peng Wu, Wei Dong, Jinqiu Sun, Yanning Zhang, Qingsen Yan, Wenbin Zou, Weipeng Yang, Yunxiang Li, Qiaomu Wei, Tian Ye, Sixiang Chen, Zhao Zhang, Suiyi Zhao, Bo wang, Yan Luo, Zhichao Zuo, Mingshen Wang, Junhu Wang, Yanyan Wei, Xiaopeng Sun, Yu Gao, Jiancheng Huang, Hongming Chen, Xiang Chen, Hui Tang, Yuanbin Chen, Yuanbo Zhou, Xinwei Dai, Xintao Qiu, Wei Deng, Qinquan Gao, Tong Tong, Mingjia Li, Jin Hu, Xinyu He, Xiaojie Guo, sabarinathan, K Uma, A Sasithradevi, B Sathya Bama, S. Mohamed Mansoor Roomi, V. Srivatsav, Jinjuan Wang, Long Sun, Qiuying Chen, Jiahong Shao, Yizhi Zhang, Marcos V. Conde, Daniel Feijoo, Juan C. Benito, Alvaro García, Jaeho Lee, Seongwan Kim, Sharif S M A, Nodirkhuja Khujaev, Roman Tsoy, Ali Murtaza, Uswah Khairuddin, Ahmad 'Athif Mohd Faudzi, Sampada Malagi, Amogh Joshi, Nikhil Akalwadi, Chaitra Desai, Ramesh Ashok Tabib, Uma Mudenagudi, Wenyi Lian, Wenjing Lian, Jagadeesh Kalyanshetti, Vijayalaxmi Ashok Aralikatti, Palani Yashaswini, Nitish Upasi, Dikshit Hegde, Ujwala Patil, Sujata C, Xingzhuo Yan, Wei Hao, Minghan Fu, Pooja Choksy, Anjali Sarvaiya, Kishor Upla, Kiran Raja, Hailong Yan, Yunkai Zhang, Baiang Li, Jingyi Zhang, Huan Zheng
This paper reviews the NTIRE 2024 low light image enhancement challenge, highlighting the proposed solutions and results.
1 code implementation • 22 Apr 2024 • Zhengwei Tao, Ting-En Lin, Xiancai Chen, Hangyu Li, Yuchuan Wu, Yongbin Li, Zhi Jin, Fei Huang, DaCheng Tao, Jingren Zhou
To address this issue, self-evolution approaches that enable LLMs to autonomously acquire, refine, and learn from experiences generated by the model itself are rapidly growing.
no code implementations • 18 Apr 2024 • Zhengwei Tao, Xiancai Chen, Zhi Jin, Xiaoying Bai, Haiyan Zhao, Yiwei Lou
We conduct extensive experiments on event reasoning tasks on several datasets.
1 code implementation • 16 Apr 2024 • Zhengwei Tao, Zhi Jin, Junqiang Huang, Xiancai Chen, Xiaoying Bai, Haiyan Zhao, Yifan Zhang, Chongyang Tao
Finally, we observe that models trained in this way are still struggling to fully comprehend event evolution.
1 code implementation • 31 Mar 2024 • Jia Li, Ge Li, Xuanming Zhang, Yihong Dong, Zhi Jin
Existing benchmarks demonstrate poor alignment with real-world code repositories and are insufficient to evaluate the coding abilities of LLMs.
no code implementations • 29 Feb 2024 • Xue Jiang, Yihong Dong, Zhi Jin, Ge Li
Specifically, SEED involves identifying error code generated by LLMs, employing Self-revise for code revision, optimizing the model with revised code, and iteratively adapting the process for continuous improvement.
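As a rough illustration of that identify-revise-optimize loop (a hedged sketch, not the authors' implementation; `generate`, `run_tests`, `self_revise`, and `fine_tune` are hypothetical helpers):

```python
def seed_adaptation(model, tasks, rounds=3):
    """Sketch of a SEED-style adaptation cycle; all helpers are
    hypothetical stand-ins for the paper's actual components."""
    for _ in range(rounds):
        revised_pairs = []
        for task in tasks:
            code = model.generate(task)
            if not run_tests(code, task):             # identify error code
                fix = self_revise(model, task, code)  # revise with the model itself
                if run_tests(fix, task):
                    revised_pairs.append((task, fix))
        model = fine_tune(model, revised_pairs)       # optimize on revised code
    return model                                      # iterate for continuous improvement
```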
1 code implementation • 24 Feb 2024 • Yihong Dong, Xue Jiang, Huanyu Liu, Zhi Jin, Bin Gu, Mengfei Yang, Ge Li
In this paper, we propose CDD, which stands for Contamination Detection via output Distribution for LLMs.
no code implementations • 12 Jan 2024 • Jia Li, Ge Li, YunFei Zhao, Yongmin Li, Zhi Jin, Hao Zhu, Huanyu Liu, Kaibo Liu, Lecheng Wang, Zheng Fang, Lanshen Wang, Jiazheng Ding, Xuanming Zhang, Yihong Dong, Yuqi Zhu, Bin Gu, Mengfei Yang
Compared to previous benchmarks, DevEval aligns with practical projects in multiple dimensions, e.g., real program distributions, sufficient dependencies, and adequately scaled project contexts.
no code implementations • 11 Jan 2024 • Chengfeng Dou, Zhi Jin, Wenpin Jiao, Haiyan Zhao, Yongqiang Zhao, Zhenwei Tao
The use of large language models in medical dialogue generation has garnered significant attention, with a focus on improving response quality and fluency.
1 code implementation • 22 Dec 2023 • Rongao Li, Jie Fu, Bo-Wen Zhang, Tao Huang, Zhihong Sun, Chen Lyu, Guang Liu, Zhi Jin, Ge Li
Moreover, each TACO problem includes several fine-grained labels such as task topics, algorithms, programming skills, and difficulty levels, providing a more precise reference for the training and evaluation of code generation models.
Ranked #1 on Code Generation on TACO-Code
1 code implementation • 18 Dec 2023 • Zhi Jin, Sheng Xu, Xiang Zhang, Tianze Ling, Nanqing Dong, Wanli Ouyang, Zhiqiang Gao, Cheng Chang, Siqi Sun
De novo peptide sequencing from mass spectrometry (MS) data is a critical task in proteomics research.
no code implementations • 1 Nov 2023 • Zejun Wang, Jia Li, Ge Li, Zhi Jin
To help human users refine their requirements and improve large language models' code generation performance, we propose ChatCoder: a method to refine the requirements via chatting with large language models.
no code implementations • 31 Oct 2023 • Yongqiang Zhao, Zhenyu Li, Zhi Jin, Feng Zhang, Haiyan Zhao, Chengfeng Dou, Zhengwei Tao, Xinhai Xu, Donghong Liu
The Multi-Modal Large Language Model (MLLM) refers to an extension of the Large Language Model (LLM) equipped with the capability to receive and infer multi-modal data.
no code implementations • 15 Oct 2023 • Ge Li, Chongyang Tao, Jia Li, Huangzhao Zhang, Fang Liu, Zhi Jin
Large language models (LLMs) have shown impressive in-context learning (ICL) ability in code generation.
1 code implementation • 6 Sep 2023 • Yuqi Zhu, Ge Li, YunFei Zhao, Jia Li, Zhi Jin, Hong Mei
With an analysis of loss distributions of code tokens, we find that code tokens can be divided into two categories: challenging tokens that are difficult to predict and confident tokens that can be easily inferred.
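A minimal sketch of that split (the median threshold here is an illustrative choice, not necessarily the paper's criterion):

```python
import numpy as np

def split_tokens_by_loss(token_losses):
    """Partition code tokens into 'confident' (low loss) and
    'challenging' (high loss) groups; the median cut is illustrative."""
    losses = np.asarray(token_losses, dtype=float)
    threshold = np.median(losses)
    confident = losses <= threshold
    return confident, ~confident   # boolean masks over the token sequence

# Example: per-token cross-entropy losses from a language model
conf_mask, chal_mask = split_tokens_by_loss([0.1, 2.3, 0.05, 1.8])
```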
1 code implementation • ICCV 2023 • Yuwei Qiu, Kaihao Zhang, Chenxi Wang, Wenhan Luo, Hongdong Li, Zhi Jin
To address this issue, we propose a new Transformer variant, which applies the Taylor expansion to approximate the softmax-attention and achieves linear computational complexity.
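To see why a first-order Taylor expansion linearizes attention, note that with exp(q·k) ≈ 1 + q·k the (QKᵀ)V product can be reassociated as Q(KᵀV), dropping the cost from O(N²d) to O(Nd²). A minimal numpy sketch of that general idea (not the paper's exact variant):

```python
import numpy as np

def taylor_linear_attention(Q, K, V):
    """First-order Taylor approximation of softmax attention.
    Q and K are unit-normalized so 1 + q.k stays non-negative."""
    N = Q.shape[0]
    Q = Q / np.linalg.norm(Q, axis=-1, keepdims=True)
    K = K / np.linalg.norm(K, axis=-1, keepdims=True)
    kv = K.T @ V                      # d x d, computed once: O(N d^2)
    num = V.sum(axis=0) + Q @ kv      # numerator of (1 + QK^T) V
    den = N + Q @ K.sum(axis=0)       # row sums of (1 + QK^T)
    return num / den[:, None]
```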
1 code implementation • 26 Aug 2023 • Chongyang Tao, Zhi Jin, Fang Liu, Jia Li, Ge Li
In this paper, we propose a novel method named ZC3 for Zero-shot Cross-language Code Clone detection.
no code implementations • 26 Aug 2023 • Jia Li, Yongmin Li, Ge Li, Xing Hu, Xin Xia, Zhi Jin
Besides the patternized words, a code summary also contains important keywords, which are the key to reflecting the functionality of the code.
no code implementations • 19 Aug 2023 • Yihong Dong, Kangcheng Luo, Xue Jiang, Zhi Jin, Ge Li
Large language models (LLMs) have showcased remarkable potential across various tasks by conditioning on prompts.
1 code implementation • 6 Aug 2023 • Chenxi Wang, Hongjun Wu, Zhi Jin
In the first stage, we improve the lightness of low-light images by estimating the amplitude transform map in the Fourier space.
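The underlying observation is that, in the Fourier domain, the amplitude spectrum largely carries lightness while the phase carries structure. A hedged numpy sketch for a single-channel image, where `gain_map` stands in for the network-estimated amplitude transform map:

```python
import numpy as np

def amplitude_brighten(img, gain_map):
    """Scale the Fourier amplitude while keeping the phase intact.
    `gain_map` (same HxW shape as the spectrum) is a hypothetical
    per-frequency gain; the paper learns this map with a network."""
    spec = np.fft.fft2(img)
    amp, phase = np.abs(spec), np.angle(spec)
    return np.real(np.fft.ifft2(amp * gain_map * np.exp(1j * phase)))
```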
no code implementations • 6 Aug 2023 • Chenxi Wang, Zhi Jin
The colorization sub-task is accomplished by regarding the chrominance of the low-light image as color guidance, as in user-guided image colorization.
no code implementations • 29 Jun 2023 • Hao Chen, Zhi Jin
Hence, in this work, we propose a novel residual recurrent multi-wavelet convolutional neural network, R2-MWCNN, learned in the frequency domain, which can simultaneously increase image contrast and suppress noise.
1 code implementation • 21 Jun 2023 • Zongyao He, Zhi Jin
We further propose a Coarse-to-Fine Multilayer Perceptron (C2F-MLP) to perform decoding with dynamic coordinate slicing, where the number of coordinates in each slice varies as the scale factor varies.
no code implementations • 24 May 2023 • Zhengwei Tao, Zhi Jin, Xiaoying Bai, Haiyan Zhao, Yanlin Feng, Jia Li, Wenpeng Hu
In this paper, we propose an overarching framework for event semantic processing, encompassing understanding, reasoning, and prediction, along with their fine-grained aspects.
no code implementations • 19 May 2023 • Chengfeng Dou, Zhi Jin, Wenping Jiao, Haiyan Zhao, Zhenwei Tao, Yongqiang Zhao
PlugMed is equipped with two modules, the prompt generation (PG) module and the response ranking (RR) module, to enhance LLMs' dialogue strategies and improve the specificity of the dialogue.
no code implementations • 11 May 2023 • Jia Li, Ge Li, Yongmin Li, Zhi Jin
In this paper, we propose Structured CoTs (SCoTs) and present a novel prompting technique for code generation, named SCoT prompting.
1 code implementation • 6 May 2023 • Kechi Zhang, Zhuo Li, Jia Li, Ge Li, Zhi Jin
Inspired by the process of human programming, we propose a generate-and-edit approach named Self-Edit that utilizes execution results of the generated code from LLMs to improve the code quality on the competitive programming task.
no code implementations • 31 Mar 2023 • Jia Li, YunFei Zhao, Yongmin Li, Ge Li, Zhi Jin
A key question is how to design prompts (i.e., Prompting Techniques).
1 code implementation • 14 Mar 2023 • Kechi Zhang, Zhuo Li, Zhi Jin, Ge Li
Furthermore, we propose the Hierarchy Transformer (HiT), a simple but effective sequence model to incorporate the complete hierarchical embeddings of source code into a Transformer model.
no code implementations • CVPR 2023 • Zihang Lin, Chaolei Tan, Jian-Fang Hu, Zhi Jin, Tiancai Ye, Wei-Shi Zheng
The static stream performs cross-modal understanding in a single frame and learns to attend to the target object spatially according to intra-frame visual cues like object appearances.
1 code implementation • 3 Nov 2022 • Haojie Zhang, Ge Li, Jia Li, Zhongjin Zhang, Yuqi Zhu, Zhi Jin
Large-scale pre-trained language models have achieved impressive results on a wide range of downstream tasks recently.
1 code implementation • 2 Nov 2022 • Yihong Dong, Xue Jiang, Yuchen Liu, Ge Li, Zhi Jin
CodePAD can leverage existing sequence-based models, and we show that it achieves a 100% grammatical correctness rate on these benchmark datasets.
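That guarantee comes from masking decoding with a grammar automaton. A hedged sketch of the principle (the `pda` interface with `legal_tokens`/`advance`/`accepted` is hypothetical, not CodePAD's actual API):

```python
def grammar_constrained_decode(model, pda, max_len=256):
    """Only tokens the pushdown automaton permits in its current state
    may be emitted, so any completed output parses by construction."""
    tokens = []
    for _ in range(max_len):
        legal = pda.legal_tokens()                    # grammar-permitted next tokens
        tok = model.pick_next(tokens, allowed=legal)  # restrict sampling to `legal`
        tokens.append(tok)
        pda.advance(tok)
        if pda.accepted():                            # reached an accepting state
            break
    return tokens
```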
no code implementations • 31 Oct 2022 • Jia Li, Zhuo Li, Huangzhao Zhang, Ge Li, Zhi Jin, Xing Hu, Xin Xia
The attackers aim to inject insidious backdoors into models by poisoning the training data with poison samples.
1 code implementation • 31 Oct 2022 • Jia Li, Ge Li, Zhuo Li, Zhi Jin, Xing Hu, Kechi Zhang, Zhiyi Fu
Pre-trained models are first pre-trained with pre-training tasks and then fine-tuned on the code editing task.
no code implementations • 22 Aug 2022 • Yihong Dong, Ge Li, Xue Jiang, Zhi Jin
To evaluate the effectiveness of our proposed loss, we implement and train an Antecedent Prioritized Tree-based code generation model called APT.
1 code implementation • 18 Aug 2022 • Wenhan Wang, Kechi Zhang, Ge Li, Shangqing Liu, Anran Li, Zhi Jin, Yang Liu
Learning vector representations for programs is a critical step in applying deep learning techniques for program understanding tasks.
no code implementations • 18 Jul 2022 • Kechi Zhang, Ge Li, Zhi Jin
In the field of source code processing, transformer-based representation models have shown great power and achieved state-of-the-art (SOTA) performance in many tasks.
no code implementations • 6 Jul 2022 • Zihang Lin, Chaolei Tan, Jian-Fang Hu, Zhi Jin, Tiancai Ye, Wei-Shi Zheng
The static branch performs cross-modal understanding in a single frame and learns to localize the target object spatially according to intra-frame visual cues like object appearances.
Ranked #2 on Spatio-Temporal Video Grounding on HC-STVG2
1 code implementation • IEEE Transactions on Multimedia 2022 • Zhi Jin, Junjia Huang, Wenjin Wang, Aolin Xiong, Xiaojun Tan
In this case, the widely used Body Mass Index (BMI), which is computed from body height and weight, can be employed as a measure of weight to indicate health conditions.
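For reference, BMI is simply weight over height squared:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

print(round(bmi(70, 1.75), 1))  # 22.9, within the commonly cited 18.5-25 range
```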
1 code implementation • NeurIPS 2021 • Han Peng, Ge Li, Wenhan Wang, YunFei Zhao, Zhi Jin
Learning distributed representation of source code requires modelling its syntax and semantics.
no code implementations • 20 Nov 2021 • Zhehao Zhao, Bo Yang, Ge Li, Huai Liu, Zhi Jin
Based on that, we also designed a neural network that relies on the graph attention mechanism. Specifically, we introduced the syntactic structure of the basic block, i.e., its corresponding AST, into the source code model to provide sufficient information and fill the gap.
no code implementations • 5 Feb 2021 • Wenjie Chu, Wei zhang, Haiyan Zhao, Zhi Jin, Hong Mei
Self-assembly plays an essential role in many natural processes, involving the formation and evolution of living or non-living structures, and shows potential applications in many emerging domains.
Multiagent Systems • Distributed, Parallel, and Cluster Computing • Robotics
no code implementations • 8 Dec 2020 • Kechi Zhang, Wenhan Wang, Huangzhao Zhang, Ge Li, Zhi Jin
To address the information of node and edge types, we bring the idea of heterogeneous graphs to learning on source code and present a new formula of building heterogeneous program graphs from ASTs with additional type information for nodes and edges.
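A hedged sketch of building such a typed graph from an AST (the `kind`/`children` node interface is hypothetical, and the paper's exact node and edge types may differ):

```python
def ast_to_typed_graph(root):
    """Collect typed nodes and typed edges ('child', 'next_sibling')
    from an AST: the raw material for a heterogeneous graph model."""
    nodes, edges = [], []

    def walk(node, parent=None):
        idx = len(nodes)
        nodes.append(node.kind)                   # node type label
        if parent is not None:
            edges.append((parent, idx, "child"))  # typed edge
        prev = None
        for child in node.children:
            cidx = walk(child, idx)
            if prev is not None:
                edges.append((prev, cidx, "next_sibling"))
            prev = cidx
        return idx

    walk(root)
    return nodes, edges
```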
1 code implementation • 9 Oct 2020 • Bolin Wei, Yongmin Li, Ge Li, Xin Xia, Zhi Jin
Inspired by the IR-based and template-based approaches, in this paper, we propose a neural comment generation approach where we use the existing comments of similar code snippets as exemplars to guide comment generation.
no code implementations • 18 Sep 2020 • Wenhan Wang, Sijie Shen, Ge Li, Zhi Jin
In this paper, we take a further step and discuss the possibility of directly completing a whole line of code instead of a single token.
no code implementations • 3 May 2020 • Kai Zhang, Shuhang Gu, Radu Timofte, Taizhang Shang, Qiuju Dai, Shengchen Zhu, Tong Yang, Yandong Guo, Younghyun Jo, Sejong Yang, Seon Joo Kim, Lin Zha, Jiande Jiang, Xinbo Gao, Wen Lu, Jing Liu, Kwangjin Yoon, Taegyun Jeon, Kazutoshi Akita, Takeru Ooba, Norimichi Ukita, Zhipeng Luo, Yuehan Yao, Zhenyu Xu, Dongliang He, Wenhao Wu, Yukang Ding, Chao Li, Fu Li, Shilei Wen, Jianwei Li, Fuzhi Yang, Huan Yang, Jianlong Fu, Byung-Hoon Kim, JaeHyun Baek, Jong Chul Ye, Yuchen Fan, Thomas S. Huang, Junyeop Lee, Bokyeung Lee, Jungki Min, Gwantae Kim, Kanghyu Lee, Jaihyun Park, Mykola Mykhailych, Haoyu Zhong, Yukai Shi, Xiaojun Yang, Zhijing Yang, Liang Lin, Tongtong Zhao, Jinjia Peng, Huibing Wang, Zhi Jin, Jiahao Wu, Yifu Chen, Chenming Shang, Huanrong Zhang, Jeongki Min, Hrishikesh P. S, Densen Puthussery, Jiji C. V
This paper reviews the NTIRE 2020 challenge on perceptual extreme super-resolution with focus on proposed solutions and results.
1 code implementation • 20 Feb 2020 • Wenhan Wang, Ge Li, Bo Ma, Xin Xia, Zhi Jin
To the best of our knowledge, we are the first to apply graph neural networks to the domain of code clone detection.
2 code implementations • NeurIPS 2019 • Bolin Wei, Ge Li, Xin Xia, Zhiyi Fu, Zhi Jin
Code summarization (CS) and code generation (CG) are two crucial tasks in the field of automatic software development.
no code implementations • 16 Sep 2019 • Fang Liu, Ge Li, Bolin Wei, Xin Xia, Zhiyi Fu, Zhi Jin
To enable the knowledge sharing between related tasks, we creatively propose a Multi-Task Learning (MTL) framework to learn two related tasks in code completion jointly.
no code implementations • 28 Nov 2018 • Bo Shen, Wei zhang, Haiyan Zhao, Zhi Jin, Yanhong Wu
Through feedback, each player is provided with personalized information based on the current COG and the player's exploration result, in order to accelerate their puzzle-solving process.
1 code implementation • 7 Oct 2018 • Kai Cui, Zhi Jin, Eckehard Steinbach
Color demosaicking (CDM) is a critical first step for the acquisition of high-quality RGB images with single chip cameras.
no code implementations • 11 Jun 2018 • Shuming Jiao, Zhi Jin, Chenliang Chang, Changyuan Zhou, Wenbin Zou, Xia Li
Reducing the enormous amount of data involved in the processing, storage, and transmission of digital holograms is a critical issue.
no code implementations • 6 Dec 2017 • Bolin Wei, Shuai Lu, Lili Mou, Hao Zhou, Pascal Poupart, Ge Li, Zhi Jin
This paper addresses the question: Why do neural dialog systems generate short and meaningless replies?
no code implementations • LREC 2018 • Zhao Meng, Lili Mou, Zhi Jin
Neural network-based dialog systems are attracting increasing attention in both academia and industry.
1 code implementation • 22 Mar 2017 • Zhao Meng, Lili Mou, Zhi Jin
Speaker change detection (SCD) is an important task in dialog modeling.
no code implementations • ICML 2017 • Lili Mou, Zhengdong Lu, Hang Li, Zhi Jin
Building neural networks to query a knowledge base (a table) with natural language is an emerging research topic in deep learning.
1 code implementation • ACL 2016 • Yunchuan Chen, Lili Mou, Yan Xu, Ge Li, Zhi Jin
Such approaches are time- and memory-intensive because of the large numbers of parameters for word embeddings and the output layer.
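The parameter arithmetic behind that claim, with illustrative numbers rather than the paper's setting:

```python
vocab_size, embed_dim = 50_000, 300            # illustrative, not the paper's config
embedding_params = vocab_size * embed_dim      # 15,000,000 for the input table alone
output_layer_params = embed_dim * vocab_size   # another 15,000,000 if untied
print(embedding_params + output_layer_params)  # 30,000,000
```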
no code implementations • 6 Oct 2016 • Wenhao Huang, Ge Li, Zhi Jin
Knowledge base completion aims to infer new relations from existing information.
no code implementations • COLING 2016 • Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, Zhi Jin
Using neural networks to generate replies in human-computer dialogue systems has attracted increasing attention over the past few years.
no code implementations • EMNLP 2016 • Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, Zhi Jin
Transfer learning aims to make use of valuable knowledge in a source domain to improve model performance in a target domain.
no code implementations • COLING 2016 • Yan Xu, Ran Jia, Lili Mou, Ge Li, Yunchuan Chen, Yangyang Lu, Zhi Jin
However, existing neural networks for relation classification are usually of shallow architectures (e.g., one-layer convolutional neural networks or recurrent networks).
Ranked #2 on Relation Classification on SemEval 2010 Task 8
no code implementations • ACL 2016 • Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, Zhi Jin
In this paper, we propose the TBCNN-pair model to recognize entailment and contradiction between two sentences.
Ranked #87 on Natural Language Inference on SNLI
no code implementations • 21 Dec 2015 • Lili Mou, Rui Yan, Ge Li, Lu Zhang, Zhi Jin
Provided a specific word, we use RNNs to generate previous words and future words, either simultaneously or asynchronously, resulting in two model variants.
no code implementations • 25 Oct 2015 • Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin
This paper envisions an end-to-end program generation scenario using recurrent neural networks (RNNs): Users can express their intention in natural language; an RNN then automatically generates corresponding code in a character-by-character fashion.
no code implementations • EMNLP 2015 • Hao Peng, Lili Mou, Ge Li, Yunchuan Chen, Yangyang Lu, Zhi Jin
This paper aims to compare different regularization strategies to address a common phenomenon, severe overfitting, in embedding-based neural networks for NLP.
no code implementations • 15 Aug 2015 • Xu Yan, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, Zhi Jin
Relation classification is an important research arena in the field of natural language processing (NLP).
Ranked #4 on Relation Classification on SemEval 2010 Task 8
no code implementations • 15 Jun 2015 • Lili Mou, Ran Jia, Yan Xu, Ge Li, Lu Zhang, Zhi Jin
Distilling knowledge from a well-trained cumbersome network to a small one has recently become a new research topic, as lightweight neural networks with high performance are particularly needed in various resource-restricted systems.
no code implementations • EMNLP 2015 • Lili Mou, Hao Peng, Ge Li, Yan Xu, Lu Zhang, Zhi Jin
This paper proposes a tree-based convolutional neural network (TBCNN) for discriminative sentence modeling.
Ranked #7 on Text Classification on TREC-6
8 code implementations • 18 Sep 2014 • Lili Mou, Ge Li, Lu Zhang, Tao Wang, Zhi Jin
Programming language processing (similar to natural language processing) is a hot research topic in the field of software engineering; it has also aroused growing interest in the artificial intelligence community.
1 code implementation • 11 Sep 2014 • Lili Mou, Ge Li, Yuxuan Liu, Hao Peng, Zhi Jin, Yan Xu, Lu Zhang
In this pioneering paper, we propose the "coding criterion" to build program vector representations, which are the premise of deep learning for program analysis.