no code implementations • 10 May 2024 • Mengjia Niu, Hao Li, Jie Shi, Hamed Haddadi, Fan Mo
Large language models (LLMs) have demonstrated remarkable capabilities across various domains, although their susceptibility to hallucination poses significant challenges for their deployment in critical areas such as healthcare.
no code implementations • 16 Apr 2024 • Haixia Han, Tingyun Li, Shisong Chen, Jie Shi, Chengyu Du, Yanghua Xiao, Jiaqing Liang, Xin Lin
Specifically, we first identify three key problems: (1) How to capture the inherent confidence of the LLM?
no code implementations • 11 Apr 2024 • Haokun Zhao, Haixia Han, Jie Shi, Chengyu Du, Jiaqing Liang, Yanghua Xiao
As world knowledge evolves and new task paradigms emerge, Large Language Models (LLMs) often fall short of meeting new demands due to knowledge deficiencies and outdated information.
no code implementations • 14 Jan 2024 • Haixia Han, Jiaqing Liang, Jie Shi, Qianyu He, Yanghua Xiao
In this paper, we introduce Intrinsic Self-Correction (ISC) in generative language models, aiming to correct the initial output of LMs in a self-triggered manner, even for small LMs with 6 billion parameters.
1 code implementation • 30 Nov 2023 • Jie Shi, Arno P. J. M. Siebes, Siamak Mehrkanoon
Thanks to the domain adaptation capability of the proposed model, the domain shift between the source and target domains is minimized.
1 code implementation • 16 Aug 2023 • Shengming Yin, Chenfei Wu, Jian Liang, Jie Shi, Houqiang Li, Gong Ming, Nan Duan
Our experiments validate the effectiveness of DragNUWA, demonstrating its superior performance in fine-grained control in video generation.
no code implementations • 1 Jun 2022 • Jie Shi, Chenfei Wu, Jian Liang, Xiang Liu, Nan Duan
Our work proposes a VQ-VAE architecture model with a diffusion decoder (DiVAE) to work as the reconstructing component in image synthesis.
no code implementations • 20 Apr 2022 • Bing Sun, Jun Sun, Hong Long Pham, Jie Shi
Results also show that, thanks to the causality-based fault localization, CARE's repair focuses on the misbehavior and preserves the accuracy of the neural networks.
no code implementations • 29 Dec 2021 • Guoliang Dong, Jingyi Wang, Jun Sun, Sudipta Chattopadhyay, Xinyu Wang, Ting Dai, Jie Shi, Jin Song Dong
Furthermore, such attacks are impossible to eliminate, i.e., the adversarial perturbation is still possible after applying mitigation methods such as adversarial training.
1 code implementation • NeurIPS 2021 • Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, Yiting Chen, Xu Cheng, Xin Wang, Meng Zhou, Jie Shi, Quanshi Zhang
This paper provides a unified view to explain different adversarial attacks and defense methods, i.e., the view of multi-order interactions between input variables of DNNs.
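As a loose illustration of the interaction view (not the paper's full multi-order formulation, which conditions on context size), the elementary two-variable interaction measures how much two input variables cooperate beyond their individual effects: I(i, j) = v({i, j}) - v({i}) - v({j}) + v(∅). The value function `v` below is a toy stand-in for a model's score on a subset of input variables:

```python
def pairwise_interaction(v, i, j):
    """Elementary interaction between variables i and j under value function v:
    I(i, j) = v({i, j}) - v({i}) - v({j}) + v(empty set)."""
    return v({i, j}) - v({i}) - v({j}) + v(frozenset())

# Toy value function: score assigned to a subset of input variables.
def v(S):
    S = frozenset(S)
    score = float(len(S))      # purely additive contribution
    if {0, 1} <= S:
        score += 2.0           # variables 0 and 1 cooperate
    return score

print(pairwise_interaction(v, 0, 1))  # 2.0: the cooperative part
print(pairwise_interaction(v, 0, 2))  # 0.0: no interaction
```

Under this view, an attack that targets cooperating variable pairs perturbs exactly the terms where the interaction is large; the additive part alone cannot explain the model's sensitivity.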
1 code implementation • 30 Oct 2021 • Lujia Shen, Shouling Ji, Xuhong Zhang, Jinfeng Li, Jing Chen, Jie Shi, Chengfang Fang, Jianwei Yin, Ting Wang
However, a pre-trained model with a backdoor can be a severe threat to downstream applications.
no code implementations • 13 Apr 2021 • Xinyi Zhang, Chengfang Fang, Jie Shi
We find that the effectiveness of existing techniques is significantly affected by the absence of pre-trained models.
no code implementations • 12 Apr 2021 • An Zhang, Xiang Wang, Chengfang Fang, Jie Shi, Tat-Seng Chua, Zehua Chen
Gradient-based attribution methods can aid in the understanding of convolutional neural networks (CNNs).
no code implementations • 11 Mar 2021 • Haowen Liu, Ping Yi, Hsiao-Ying Lin, Jie Shi, Weidong Qiu
We propose DAFAR, a feedback framework that allows deep learning models to detect and purify adversarial examples with high effectiveness and universality, at low area and time overhead.
1 code implementation • 23 Feb 2021 • Xiao Li, Jianmin Li, Ting Dai, Jie Shi, Jun Zhu, Xiaolin Hu
A detection model based on the classification model EfficientNet-B7 achieved a top-1 accuracy of 53.95%, surpassing previous state-of-the-art classification models trained on ImageNet, suggesting that accurate localization information can significantly boost the performance of classification models on ImageNet-A.
no code implementations • 13 Nov 2020 • Jie Shi, Brandon Foggo, Nanpeng Yu
Online power system event identification and classification is crucial to enhancing the reliability of transmission systems.
no code implementations • 28 Sep 2020 • Chang Liao, Yao Cheng, Chengfang Fang, Jie Shi
This paper aims to provide a thorough study on the effectiveness of the transformation-based ensemble defence for image classification and its reasons.
no code implementations • 21 Jun 2020 • Hao Zhang, Yiting Chen, Haotian Ma, Xu Cheng, Qihan Ren, Liyao Xiang, Jie Shi, Quanshi Zhang
Compared to a traditional neural network, the RENN uses d-ary vectors/tensors as features, in which each element is a d-ary number.
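As a minimal sketch of the underlying idea (not the paper's RENN architecture; `to_d_ary` and `from_d_ary` are hypothetical helper names), a d-ary representation expands a value into a fixed-width vector of its base-d digits, and the encoding is exactly invertible:

```python
def to_d_ary(value: int, d: int, width: int) -> list:
    """Represent a non-negative integer as a fixed-width vector of
    base-d digits, least significant digit first."""
    digits = []
    for _ in range(width):
        digits.append(value % d)
        value //= d
    return digits

def from_d_ary(digits: list, d: int) -> int:
    """Invert the encoding: recombine base-d digits into the integer."""
    return sum(g * d**i for i, g in enumerate(digits))

# Example: encode 37 in base 3 with 4 digits.
vec = to_d_ary(37, d=3, width=4)
print(vec)                      # [1, 0, 1, 1] since 37 = 1 + 0*3 + 1*9 + 1*27
print(from_d_ary(vec, d=3))     # 37
```

Each scalar thus becomes a small vector of digits in {0, ..., d-1}; no single digit alone reveals the original value, which is the intuition behind using such representations for privacy-oriented feature encoding.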
no code implementations • 18 Mar 2020 • Hao Zhang, Yi-Ting Chen, Liyao Xiang, Haotian Ma, Jie Shi, Quanshi Zhang
We propose a method to revise the neural network to construct the quaternion-valued neural network (QNN), in order to prevent intermediate-layer features from leaking input information.
no code implementations • 15 Feb 2019 • Brandon Foggo, Nanpeng Yu, Jie Shi, Yuanqi Gao
It then bounds this expected total variation as a function of the size of randomly sampled datasets in a fairly general setting, and without bringing in any additional dependence on model complexity.
no code implementations • CVPR 2016 • Jie Shi, Wen Zhang, Yalin Wang
Experimental results demonstrate that our method may be used as an effective shape index, which outperforms some other standard shape measures in our AD versus healthy control classification study.