no code implementations • 8 May 2024 • Yongheng Zhang, Tingwen Du, Yunshan Ma, Xiang Wang, Yi Xie, Guozheng Yang, Yuliang Lu, Ee-Chien Chang
Thus, we propose AttacKG+, a fully automatic LLM-based framework for constructing attack knowledge graphs.
1 code implementation • 24 Mar 2024 • Siyuan Liang, Wei Wang, Ruoyu Chen, Aishan Liu, Boxi Wu, Ee-Chien Chang, Xiaochun Cao, DaCheng Tao
This paper aims to bridge this gap by conducting a comprehensive review and analysis of object detectors in open environments.
no code implementations • 24 Mar 2024 • Siyuan Liang, Kuanrong Liu, Jiajun Gong, Jiawei Liang, Yuan Xun, Ee-Chien Chang, Xiaochun Cao
In this paper, we explore the possibility of a lower-cost defense from the perspective of model unlearning, that is, whether the model can be made to quickly unlearn backdoor threats (UBT) by constructing a small set of poisoned samples.
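A minimal sketch of the generic unlearning idea (not the paper's UBT algorithm): given a small set of suspected poisoned samples, take gradient *ascent* steps on them so the model forgets the trigger-to-target mapping. All names and the ascent formulation here are illustrative assumptions.

```python
# Generic unlearning-style step (assumption: not the paper's method).
import torch
import torch.nn.functional as F

def unlearn_step(model, optimizer, x_poison, y_target):
    """One ascent step that raises the loss on suspected backdoor samples."""
    optimizer.zero_grad()
    loss = -F.cross_entropy(model(x_poison), y_target)  # negated => ascent
    loss.backward()
    optimizer.step()
```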
no code implementations • 21 Feb 2024 • Jiawei Liang, Siyuan Liang, Man Luo, Aishan Liu, Dongchen Han, Ee-Chien Chang, Xiaochun Cao
Nevertheless, the frozen visual encoder in autoregressive VLMs imposes constraints on the learning of conventional image triggers.
no code implementations • 21 Feb 2024 • Xiaoxia Li, Siyuan Liang, Jiyi Zhang, Han Fang, Aishan Liu, Ee-Chien Chang
Large Language Models (LLMs), used in creative writing, code generation, and translation, generate text based on input sequences but are vulnerable to jailbreak attacks, where crafted prompts induce harmful outputs.
no code implementations • 7 Feb 2024 • Jiyi Zhang, Han Fang, Ee-Chien Chang
In forensic investigations of machine learning models, techniques that determine a model's data domain play an essential role, with prior work relying on large-scale corpora like ImageNet to approximate the target model's domain.
no code implementations • 20 Nov 2023 • Siyuan Liang, Mingli Zhu, Aishan Liu, Baoyuan Wu, Xiaochun Cao, Ee-Chien Chang
This paper reveals that, in this practical scenario, backdoor attacks can remain effective even after defenses are applied, and introduces a new attack that is resistant to backdoor detection and model fine-tuning defenses.
no code implementations • 18 Nov 2023 • Jiayang Liu, Siyu Zhu, Siyuan Liang, Jie Zhang, Han Fang, Weiming Zhang, Ee-Chien Chang
Various techniques have emerged to enhance the transferability of adversarial attacks for the black-box scenario.
no code implementations • 21 Aug 2023 • Wesley Tann, Yuancheng Liu, Jun Heng Sim, Choon Meng Seah, Ee-Chien Chang
This research investigates the effectiveness of LLMs, particularly in the realm of CTF challenges and questions.
no code implementations • 2 Jun 2023 • Jiyi Zhang, Han Fang, Ee-Chien Chang
This induces different adversarial regions in different copies, so adversarial samples generated on one copy do not transfer to the others.
no code implementations • 10 May 2023 • Jiyi Zhang, Han Fang, Hwee Kuan Lee, Ee-Chien Chang
Our goal is to select a set of samples from the corpus for the given model.
no code implementations • ICCV 2023 • Han Fang, Jiyi Zhang, Yupeng Qiu, Ke Xu, Chengfang Fang, Ee-Chien Chang
In this paper, we take the role of investigators who want to trace the attack and identify the source, that is, the particular model which the adversarial examples are generated from.
no code implementations • 1 Dec 2022 • Ziqi Yang, Lijin Wang, Da Yang, Jie Wan, Ziming Zhao, Ee-Chien Chang, Fan Zhang, Kui Ren
Moreover, further experiments show that PURIFIER is also effective in defending against adversarial model inversion attacks and attribute inference attacks.
no code implementations • 14 Oct 2022 • Wesley Joon-Wie Tann, Akhil Vuputuri, Ee-Chien Chang
In this paper, we aim to obtain a generative model that, given the early transaction history (first quarter, Q1) of a newly minted collection, generates its subsequent transactions (quarters Q2, Q3, Q4); the generative model is trained on the transaction histories of a few mature collections.
no code implementations • 30 Nov 2021 • Jiyi Zhang, Han Fang, Wesley Joon-Wie Tann, Ke Xu, Chengfang Fang, Ee-Chien Chang
We point out that by distributing different copies of the model to different buyers, we can mitigate the attack such that adversarial samples found on one copy would not work on another copy.
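A minimal sketch of this mitigation's premise: an adversarial sample crafted on one distributed copy is tested against another copy; if the copies induce different adversarial regions, the attack fails to transfer. The helper below is hypothetical, and `attack` stands for any gradient-based attack routine (e.g., an FGSM-style function).

```python
# Checking cross-copy transferability (illustrative helper, not the paper's code).
import torch

def transfers(copy_a, copy_b, x, y, attack):
    """Does an adversarial sample found on copy_a also fool copy_b?"""
    x_adv = attack(copy_a, x, y)          # craft on the attacker's copy
    with torch.no_grad():
        pred_b = copy_b(x_adv).argmax(dim=1)
    return pred_b != y                    # True where it also misleads copy_b
```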
no code implementations • 12 Dec 2020 • Wesley Joon-Wie Tann, Jackie Tan Jin Wei, Joanna Purba, Ee-Chien Chang
We also introduce a machine learning optimization problem that aims to sift out the attacks using ${\mathcal N}$ and ${\mathcal M}$.
no code implementations • 28 Sep 2020 • Wesley Joon-Wie Tann, Ee-Chien Chang, Bryan Hooi
We introduce the problem of explaining graph generation, formulated as controlling the generative process to produce desired graphs with explainable structures.
no code implementations • 6 Jun 2020 • Wesley Joon-Wie Tann, Ee-Chien Chang, Bryan Hooi
Given an observed graph and some user-specified Markov model parameters, ShadowCast controls the conditions to generate desired graphs.
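A minimal sketch of the conditioning idea, assuming the "Markov model parameters" take the form of a transition matrix over node labels: sampling a label sequence from the chain gives a condition that can steer a graph generator. The matrix values are illustrative placeholders, not parameters from the paper.

```python
# Sampling node labels from a user-specified first-order Markov chain.
import numpy as np

def sample_label_sequence(transition, n_nodes, seed=0):
    """Sample a sequence of node labels from a Markov transition matrix."""
    rng = np.random.default_rng(seed)
    n_labels = transition.shape[0]
    labels = [int(rng.integers(n_labels))]
    for _ in range(n_nodes - 1):
        labels.append(int(rng.choice(n_labels, p=transition[labels[-1]])))
    return labels

transition = np.array([[0.8, 0.2],    # mostly stay in community 0
                       [0.3, 0.7]])   # mostly stay in community 1
print(sample_label_sequence(transition, n_nodes=10))
```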
no code implementations • 8 May 2020 • Ziqi Yang, Bin Shao, Bohan Xuan, Ee-Chien Chang, Fan Zhang
Neural networks are susceptible to data inference attacks such as model inversion and membership inference, where the attacker can reconstruct a data sample or infer its training-set membership from the confidence scores predicted by the target classifier.
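A minimal sketch of the membership-inference side of this threat: training samples tend to receive higher confidence than unseen ones, so thresholding the top softmax score is a simple baseline attack. The threshold value is an illustrative assumption, not taken from the paper.

```python
# Confidence-thresholding membership inference (baseline sketch).
import torch
import torch.nn.functional as F

def infer_membership(model, x, threshold=0.9):
    """Guess whether x was in the training set from confidence scores alone."""
    with torch.no_grad():
        confidence = F.softmax(model(x), dim=1).max(dim=1).values
    return confidence > threshold   # True => predicted "member"
```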
no code implementations • ICLR 2020 • Connie Kou, Hwee Kuan Lee, Ee-Chien Chang, Teck Khim Ng
Furthermore, after the image transformation, the softmax distributions of the adversarial counterparts take on shapes similar to those of the clean images.
no code implementations • 5 Mar 2020 • Jiyi Zhang, Ee-Chien Chang, Hwee Kuan Lee
Many machine learning adversarial attacks find adversarial samples of a victim model ${\mathcal M}$ by following the gradient of some attack objective functions, either explicitly or implicitly.
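A minimal sketch of such a gradient-following attack, using the well-known FGSM formulation as a stand-in for the generic objective-gradient step; the model, data, and epsilon are placeholders, not this paper's setup.

```python
# FGSM-style attack: ascend the gradient of an attack objective on input x.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb x to increase the loss of victim model M on label y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # attack objective
    loss.backward()
    # Step in the direction of the gradient's sign to maximize the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()
```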
no code implementations • 14 Jun 2019 • Ziqi Yang, Hung Dang, Ee-Chien Chang
In this paper, we show that distillation, a widely used transformation technique, is quite effective as an attack for removing watermarks embedded by existing algorithms.
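A minimal sketch of standard knowledge distillation, the transformation in question: a student mimics the watermarked teacher's softened outputs, and an embedded watermark often does not survive the transfer. The temperature and loss form are conventional choices, not details from the paper.

```python
# Standard distillation loss: student matches the teacher's soft predictions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened student and teacher."""
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * T * T
```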
no code implementations • 1 Jun 2019 • Connie Kou, Hwee Kuan Lee, Ee-Chien Chang, Teck Khim Ng
Furthermore, after the image transformation, the softmax distributions of the adversarial counterparts take on shapes similar to those of the clean images.
1 code implementation • 22 Feb 2019 • Ziqi Yang, Ee-Chien Chang, Zhenkai Liang
In this work, we investigate the model inversion problem in the adversarial settings, where the adversary aims at inferring information about the target model's training data and test data from the model's prediction values.
3 code implementations • 2 Apr 2018 • Hung Dang, Tien Tuan Anh Dinh, Dumitrel Loghin, Ee-Chien Chang, Qian Lin, Beng Chin Ooi
In this work, we take a principled approach to apply sharding, which is a well-studied and proven technique to scale out databases, to blockchain systems in order to improve their transaction throughput at scale.
Distributed, Parallel, and Cluster Computing • Cryptography and Security • Databases
no code implementations • 13 Feb 2018 • Jiyi Zhang, Hung Dang, Hwee Kuan Lee, Ee-Chien Chang
We propose a flipped-Adversarial AutoEncoder (FAAE) that simultaneously trains a generative model G that maps an arbitrary latent code distribution to a data distribution and an encoder E that embodies an "inverse mapping" that encodes a data sample into a latent code vector.
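A minimal sketch of the two components the FAAE couples: a generator G mapping latent codes to data and an encoder E approximating its inverse. Architectures and dimensions are illustrative placeholders; the adversarial training procedure of the paper is omitted.

```python
# Generator/encoder pair for a flipped-adversarial autoencoder (sketch only).
import torch.nn as nn

latent_dim, data_dim = 64, 784

G = nn.Sequential(                      # latent code z -> data sample x
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
E = nn.Sequential(                      # data sample x -> latent code z
    nn.Linear(data_dim, 256), nn.ReLU(),
    nn.Linear(256, latent_dim),
)
# Training pushes E(G(z)) toward z, so E embodies G's "inverse mapping".
```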