1 code implementation • COLING 2022 • Li-Ming Zhan, Haowen Liang, Lu Fan, Xiao-Ming Wu, Albert Y.S. Lam
Comprehensive experiments on three real-world intent detection benchmark datasets demonstrate the effectiveness of our proposed approach and its potential to improve state-of-the-art methods for few-shot OOD intent detection.
1 code implementation • 2 Jun 2024 • Cong Wang, Jinshan Pan, Wei Wang, Gang Fu, Siyuan Liang, Mengzhu Wang, Xiao-Ming Wu, Jun Liu
To improve feature representation in the low-resolution space, we propose to learn a feature transformation from the high-resolution space to the low-resolution one.
no code implementations • 26 May 2024 • Yuankai Luo, Qijiong Liu, Lei Shi, Xiao-Ming Wu
We present a novel graph tokenization framework that generates structure-aware, semantic node identifiers (IDs) in the form of a short sequence of discrete codes, serving as symbolic representations of nodes.
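The idea of mapping each node to a short sequence of discrete codes can be illustrated with a toy residual quantizer. This is only a sketch of the general technique, not the paper's actual tokenizer; `assign_codes`, the random codebooks, and their sizes are all hypothetical.

```python
import numpy as np

def assign_codes(embeddings, codebooks):
    """Residual quantization: map each embedding to a short sequence of
    discrete codes, one code per codebook level (illustrative sketch)."""
    residual = embeddings.copy()
    codes = []
    for cb in codebooks:                         # cb: (K, d) centroids
        d2 = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)                  # nearest centroid per node
        codes.append(idx)
        residual = residual - cb[idx]            # quantize the remainder next
    return np.stack(codes, axis=1)               # (n_nodes, n_levels)

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 4))                    # 8 toy node embeddings
books = [rng.normal(size=(16, 4)) for _ in range(2)]
ids = assign_codes(emb, books)
print(ids.shape)                                 # (8, 2): a 2-code ID per node
```

Each node ends up with a compact symbolic ID (two integers in 0..15 here) rather than a dense vector, which is the general flavor of the structure-aware node identifiers described above.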
1 code implementation • 6 May 2024 • Qijiong Liu, Xiaoyu Dong, Jiaren Xiao, Nuo Chen, Hengchang Hu, Jieming Zhu, Chenxu Zhu, Tetsuya Sakai, Xiao-Ming Wu
Finally, the survey analyzes the remaining challenges and anticipates future trends in VQ4Rec, including the challenges associated with the training of vector quantization, the opportunities presented by large language models, and emerging trends in multimodal recommender systems.
no code implementations • 24 Apr 2024 • Yan-Kang Wang, Chengyi Xing, Yi-Lin Wei, Xiao-Ming Wu, Wei-Shi Zheng
Thus, we introduce S2HGrasp, a framework composed of two key modules: the Global Perception module that globally perceives partial object point clouds, and the DiffuGrasp module designed to generate high-quality human grasps based on complex inputs that include scene points.
2 code implementations • 9 Apr 2024 • Li-Ming Zhan, Bo Liu, Xiao-Ming Wu
Out-of-distribution (OOD) detection plays a crucial role in ensuring the safety and reliability of deep neural networks in various applications.
no code implementations • 31 Mar 2024 • Qijiong Liu, Jieming Zhu, Yanting Yang, Quanyu Dai, Zhaocheng Du, Xiao-Ming Wu, Zhou Zhao, Rui Zhang, Zhenhua Dong
The recent advancements in pretrained multimodal models offer new opportunities and challenges in developing content-aware recommender systems.
no code implementations • 27 Mar 2024 • Nuo Chen, Jiqun Liu, Hanpei Fang, Yuankai Luo, Tetsuya Sakai, Xiao-Ming Wu
This study examines the decoy effect's underexplored influence on user search interactions and methods for measuring information retrieval (IR) systems' vulnerability to this effect.
1 code implementation • 17 Mar 2024 • Dian Zheng, Xiao-Ming Wu, Shuzhou Yang, Jian Zhang, Jian-Fang Hu, Wei-Shi Zheng
Universal image restoration is a practical and promising computer vision task for real-world applications.
2 code implementations • 13 Mar 2024 • Qijiong Liu, Hengchang Hu, Jiahao Wu, Jieming Zhu, Min-Yen Kan, Xiao-Ming Wu
Incorporating item content information into click-through rate (CTR) prediction models remains a challenge, especially with the time and space constraints of industrial scenarios.
1 code implementation • 7 Mar 2024 • Qijiong Liu, Jieming Zhu, Quanyu Dai, Xiao-Ming Wu
Over recent years, news recommender systems have gained significant attention in both academia and industry, emphasizing the need for a standardized benchmark to evaluate and compare the performance of these systems.
no code implementations • 4 Nov 2023 • Nuo Chen, Jiqun Liu, Tetsuya Sakai, Xiao-Ming Wu
In recent years, the influence of cognitive effects and biases on users' thinking, behaving, and decision-making has garnered increasing attention in the field of interactive information retrieval.
1 code implementation • 23 Oct 2023 • Yujie Feng, Zexin Lu, Bo Liu, LiMing Zhan, Xiao-Ming Wu
In this study, we conduct an initial examination of ChatGPT's capabilities in DST.
no code implementations • 15 Oct 2023 • Jiahao Wu, Qijiong Liu, Hengchang Hu, Wenqi Fan, Shengcai Liu, Qing Li, Xiao-Ming Wu, Ke Tang
Notably, the condensation paradigm of this method is forward-only, free from iterative optimization on the synthesized dataset.
1 code implementation • 13 Oct 2023 • Xiangyu Zhao, Bo Liu, Qijiong Liu, Guangyuan Shi, Xiao-Ming Wu
We present EasyGen, an efficient model designed to enhance multimodal understanding and generation by harnessing the capabilities of diffusion models and large language models (LLMs). Unlike existing multimodal models that predominantly depend on encoders like CLIP or ImageBind and need ample amounts of training data to bridge modalities, EasyGen leverages BiDiffuser, a bidirectional conditional diffusion model, to foster more efficient modality interactions.
1 code implementation • 6 Sep 2023 • Wenlong Zhang, Xiaohui Li, Xiangyu Chen, Yu Qiao, Xiao-Ming Wu, Chao Dong
In particular, we cluster the extensive degradation space to create a set of representative degradation cases, which serves as a comprehensive test set.
2 code implementations • 31 Aug 2023 • Qijiong Liu, Lu Fan, Jiaren Xiao, Jieming Zhu, Xiao-Ming Wu
Category information plays a crucial role in enhancing the quality and personalization of recommender systems.
no code implementations • 30 Aug 2023 • Dian Zheng, Xiao-Ming Wu, Zuhao Liu, Jingke Meng, Wei-Shi Zheng
Our method, termed DiffuVolume, treats the diffusion model as a cost volume filter that recurrently removes redundant information from the cost volume.
no code implementations • 27 Aug 2023 • Qijiong Liu, Jieming Zhu, Quanyu Dai, Xiao-Ming Wu
Large pretrained language models (PLMs) have become the de facto news encoders in modern news recommender systems, owing to their strong ability to comprehend textual content.
1 code implementation • 22 Aug 2023 • Yuankai Luo, Hongkang Li, Lei Shi, Xiao-Ming Wu
Empirically, we demonstrate that graph transformers with HDSE excel in graph classification and regression on 7 graph-level datasets, and in node classification on 11 large-scale graphs, including those with up to a billion nodes.
Ranked #2 on Graph Classification on Peptides-func
1 code implementation • 20 Aug 2023 • Bo Liu, LiMing Zhan, Zexin Lu, Yujie Feng, Lei Xue, Xiao-Ming Wu
Out-of-distribution (OOD) detection plays a vital role in enhancing the reliability of machine learning (ML) models.
1 code implementation • ICCV 2023 • Xiao-Ming Wu, Dian Zheng, Zuhao Liu, Wei-Shi Zheng
The pioneering work BinaryConnect uses the Straight-Through Estimator (STE) to mimic the gradients of the sign function, but this also causes a crucial inconsistency problem.
1 code implementation • 12 Jun 2023 • Lu Fan, Jiashu Pu, Rongsheng Zhang, Xiao-Ming Wu
Motivated by this observation, we propose a Graph-based Negative sampling approach based on Neighborhood Overlap (GNNO) to exploit structural information hidden in user behaviors for negative mining.
1 code implementation • 10 Jun 2023 • Li Xu, Bo Liu, Ameer Hamza Khan, Lu Fan, Xiao-Ming Wu
With the availability of large-scale, comprehensive, and general-purpose vision-language (VL) datasets such as MSCOCO, vision-language pre-training (VLP) has become an active area of research and has proven effective for various VL tasks such as visual question answering.
1 code implementation • 8 Jun 2023 • Haode Zhang, Haowen Liang, LiMing Zhan, Xiao-Ming Wu, Albert Y. S. Lam
We consider the task of few-shot intent detection, which involves training a deep learning model to classify utterances based on their underlying intents using only a small amount of labeled data.
3 code implementations • 11 May 2023 • Qijiong Liu, Nuo Chen, Tetsuya Sakai, Xiao-Ming Wu
Personalized content-based recommender systems have become indispensable tools for users to navigate through the vast amount of content available on platforms like daily news websites and book recommendation services.
no code implementations • 9 Apr 2023 • Tiandeng Wu, Qijiong Liu, Yi Cao, Yao Huang, Xiao-Ming Wu, Jiandong Ding
Graph convolutional networks (GCNs) have been successfully applied to capture global, non-consecutive, and long-distance semantic information for text classification.
1 code implementation • 2 Apr 2023 • Qijiong Liu, Jieming Zhu, Jiahao Wu, Tiandeng Wu, Zhenhua Dong, Xiao-Ming Wu
Item list continuation is proposed to model the overall trend of a list and predict subsequent items.
no code implementations • 26 Mar 2023 • Han Liu, Feng Zhang, Xiaotong Zhang, Siyang Zhao, Fenglong Ma, Xiao-Ming Wu, Hongyang Chen, Hong Yu, Xianchao Zhang
Distribution estimation has been demonstrated as one of the most effective approaches for few-shot image classification, as the low-level patterns and underlying representations can be easily transferred across different tasks in the computer vision domain.
1 code implementation • 13 Mar 2023 • Cong Wang, Jinshan Pan, WanYu Lin, Jiangxin Dong, Xiao-Ming Wu
For this purpose, we develop a prompt based on the depth-difference features between hazy input images and their corresponding clear counterparts, which can guide dehazing models toward better restoration.
1 code implementation • 22 Feb 2023 • Guangyuan Shi, Qimai Li, Wenlong Zhang, Jiaxin Chen, Xiao-Ming Wu
Our experiments show that such a simple approach can greatly reduce the occurrence of conflicting gradients in the remaining shared layers and achieve better performance, with only a slight increase in model parameters in many cases.
no code implementations • 1 Feb 2023 • Yulin Zhu, Xing Ai, Qimai Li, Xiao-Ming Wu, Kai Zhou
Linearized Graph Neural Networks (GNNs) have attracted great attention in recent years for graph representation learning.
no code implementations • CVPR 2023 • Zuhao Liu, Xiao-Ming Wu, Dian Zheng, Kun-Yu Lin, Wei-Shi Zheng
There also exists a scene gap between virtual and real scenarios, including scene-specific anomalies (events that are abnormal in one scene but normal in another) and scene-specific attributes, such as the viewpoint of the surveillance camera.
no code implementations • 16 Jul 2022 • Cong Wang, Jinshan Pan, Xiao-Ming Wu
The generator is based on a U-shaped Transformer, which explores non-local information for better restoration of clear images.
1 code implementation • 22 Jun 2022 • Jia-Run Du, Jia-Chang Feng, Kun-Yu Lin, Fa-Ting Hong, Xiao-Ming Wu, Zhongang Qi, Ying Shan, Wei-Shi Zheng
Accordingly, we first exclude these surely non-existent categories by a complementary learning loss.
1 code implementation • ACL 2022 • Yuwei Zhang, Haode Zhang, Li-Ming Zhan, Xiao-Ming Wu, Albert Y. S. Lam
Existing approaches typically rely on large amounts of labeled utterances and employ pseudo-labeling methods for representation learning and clustering, which are label-intensive, inefficient, and inaccurate.
1 code implementation • NAACL 2022 • Haode Zhang, Haowen Liang, Yuwei Zhang, LiMing Zhan, Xiao-Ming Wu, Xiaolei Lu, Albert Y. S. Lam
It is challenging to train a good intent classifier for a task-oriented dialogue system with only a few annotations.
no code implementations • 10 May 2022 • Wenlong Zhang, Guangyuan Shi, Yihao Liu, Chao Dong, Xiao-Ming Wu
The recently proposed practical degradation model includes a full spectrum of degradation types, but it considers only complex cases that use all degradation types in the degradation process, ignoring many important corner cases that are common in the real world.
no code implementations • 14 Feb 2022 • Cong Wang, Jinshan Pan, Xiao-Ming Wu
Most existing deep-learning-based methods constrain the network to generate derained images, but few of them explore features from intermediate layers, different levels, and different modules, which are beneficial for rain streak removal.
1 code implementation • 12 Feb 2022 • Fan Lu, Qimai Li, Bo Liu, Xiao-Ming Wu, Xiaotong Zhang, Fuyu Lv, Guli Lin, Sen Li, Taiwei Jin, Keping Yang
Our approach can be seamlessly integrated with existing latent space based methods and be potentially applied in any product retrieval method that uses purchase history to model user preferences.
1 code implementation • NeurIPS 2021 • Guangyuan Shi, Jiaxin Chen, Wenlong Zhang, Li-Ming Zhan, Xiao-Ming Wu
Our study shows that existing methods severely suffer from catastrophic forgetting, a well-known problem in incremental learning, which is aggravated due to data scarcity and imbalance in the few-shot setting.
Ranked #7 on Few-Shot Class-Incremental Learning on mini-Imagenet
no code implementations • Findings (EMNLP) 2021 • Haode Zhang, Yuwei Zhang, Li-Ming Zhan, Jiaxin Chen, Guangyuan Shi, Xiao-Ming Wu, Albert Y. S. Lam
This paper investigates the effectiveness of pre-training for few-shot intent classification.
1 code implementation • 9 Sep 2021 • Xiao-Ming Wu, Xin Luo, Yu-Wei Zhan, Chen-Lu Ding, Zhen-Duo Chen, Xin-Shun Xu
With the vigorous development of multimedia devices and applications, efficient retrieval of large-scale multi-modal data has become a trending research topic.
1 code implementation • ICML Workshop AutoML 2021 • Jiaxin Chen, Li-Ming Zhan, Xiao-Ming Wu, Fu-Lai Chung
Many meta-learning algorithms can be formulated into an interleaved process, in the sense that task-specific predictors are learned during inner-task adaptation and meta-parameters are updated during meta-update.
no code implementations • 17 Jun 2021 • Sen Li, Fuyu Lv, Taiwei Jin, Guli Lin, Keping Yang, Xiaoyi Zeng, Xiao-Ming Wu, Qianli Ma
We evaluate MGDSPR on Taobao Product Search with significant metrics gains observed in offline experiments and online A/B tests.
no code implementations • ACL 2021 • Li-Ming Zhan, Haowen Liang, Bo Liu, Lu Fan, Xiao-Ming Wu, Albert Y. S. Lam
Since the distribution of outlier utterances is arbitrary and unknown at training time, existing methods commonly rely on strong assumptions about the data distribution, such as a mixture of Gaussians, for inference, resulting in either complex multi-step training procedures or hand-crafted rules such as confidence-threshold selection for outlier detection.
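The hand-crafted confidence-threshold rule mentioned above can be sketched as maximum-softmax-probability thresholding, a generic OOD baseline rather than this paper's method; the threshold value below is an arbitrary assumption.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def flag_outliers(logits, threshold=0.7):
    # Flag an utterance as an outlier when the classifier's top softmax
    # probability falls below a hand-picked confidence threshold.
    return softmax(logits).max(axis=-1) < threshold

logits = np.array([[4.0, 0.1, 0.2],    # confident -> in-distribution
                   [0.9, 1.0, 1.1]])   # nearly flat -> flagged as outlier
print(flag_outliers(logits))           # [False  True]
```

The brittleness of this rule (one global threshold for every intent class) is exactly the kind of hand-crafting that motivates the assumption-free approach in the entry above.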
2 code implementations • 18 Feb 2021 • Bo Liu, Li-Ming Zhan, Li Xu, Lin Ma, Yan Yang, Xiao-Ming Wu
We show that SLAKE can be used to facilitate the development and evaluation of Med-VQA systems.
1 code implementation • NeurIPS 2020 • Jiaxin Chen, Xiao-Ming Wu, Yanke Li, Qimai Li, Li-Ming Zhan, Fu-Lai Chung
The support/query (S/Q) episodic training strategy has been widely used in modern meta-learning algorithms and is believed to improve their generalization ability to test environments.
1 code implementation • ACL 2020 • Guangfeng Yan, Lu Fan, Qimai Li, Han Liu, Xiaotong Zhang, Xiao-Ming Wu, Albert Y. S. Lam
User intent classification plays a vital role in dialogue systems.
1 code implementation • 20 May 2020 • Menghan Wang, Yujie Lin, Guli Lin, Keping Yang, Xiao-Ming Wu
Most existing methods can be categorized as \emph{multi-view representation fusion}; they first build one graph and then integrate multi-view data into a single compact representation for each node in the graph.
no code implementations • 3 Mar 2020 • Zhen Peng, Yixiang Dong, Minnan Luo, Xiao-Ming Wu, Qinghua Zheng
To take full advantage of fast-growing unlabeled networked data, this paper introduces a novel self-supervised strategy for graph representation learning by exploiting natural supervision provided by the data itself.
1 code implementation • 26 Dec 2019 • Jiaxin Chen, Li-Ming Zhan, Xiao-Ming Wu, Fu-Lai Chung
In this paper, we recast metric-based meta-learning from a Bayesian perspective and develop a variational metric scaling framework for learning a proper metric scaling parameter.
1 code implementation • IJCNLP 2019 • Han Liu, Xiaotong Zhang, Lu Fan, Xuandi Fu, Qimai Li, Xiao-Ming Wu, Albert Y. S. Lam
With the burgeoning of conversational AI, existing systems are not capable of handling numerous fast-emerging intents, which motivates zero-shot intent classification.
no code implementations • 27 Sep 2019 • Han Liu, Xianchao Zhang, Xiaotong Zhang, Qimai Li, Xiao-Ming Wu
However, there are two issues in existing possible world based algorithms: (1) They rely on all the possible worlds and treat them equally, but some marginal possible worlds may cause negative effects.
1 code implementation • 26 Sep 2019 • Qimai Li, Xiaotong Zhang, Han Liu, Quanyu Dai, Xiao-Ming Wu
Graph convolutional neural networks (GCNs) have been the model of choice for graph representation learning, mainly due to the effective design of graph convolution, which computes the representation of a node by aggregating those of its neighbors.
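The neighbor-aggregation step described here is, in its standard symmetric-normalized form, a degree-normalized average over self-looped neighbors followed by a linear map. A minimal NumPy sketch of one such layer (the toy graph and identity weights are illustrative):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution: D^{-1/2} (A + I) D^{-1/2} H W.
    Each node's new feature aggregates its own and its neighbors'."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # D^{-1/2} diagonal
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return A_norm @ H @ W

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # a 3-node path graph
H = np.eye(3)                            # one-hot node features
W = np.eye(3)                            # identity weights for illustration
out = gcn_layer(A, H, W)
print(out.shape)                         # (3, 3)
```

With one-hot features and identity weights, each output row is just the normalized adjacency row, making the "weighted average of neighbors" behavior easy to inspect.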
no code implementations • 25 Sep 2019 • Qimai Li, Xiaotong Zhang, Han Liu, Xiao-Ming Wu
Graph convolutional neural networks have demonstrated promising performance in attributed graph learning, thanks to the use of graph convolution that effectively combines graph structures and node features for learning node representations.
1 code implementation • 4 Sep 2019 • Quanyu Dai, Xiao-Ming Wu, Jiaren Xiao, Xiao Shen, Dan Wang
Existing methods for single network learning cannot solve this problem due to the domain shift across networks.
1 code implementation • 4 Jun 2019 • Xiaotong Zhang, Han Liu, Qimai Li, Xiao-Ming Wu
Attributed graph clustering is challenging as it requires joint modelling of graph structures and node attributes.
Ranked #3 on Graph Clustering on Cora
1 code implementation • CVPR 2019 • Qimai Li, Xiao-Ming Wu, Han Liu, Xiaotong Zhang, Zhichao Guan
However, existing graph-based methods either are limited in their ability to jointly model graph structures and data features, such as the classical label propagation methods, or require a considerable amount of labeled data for training and validation due to high model complexity, such as the recent neural-network-based methods.
no code implementations • 8 Jul 2018 • Yong Wang, Xiao-Ming Wu, Qimai Li, Jiatao Gu, Wangmeng Xiang, Lei Zhang, Victor O. K. Li
The key issue of few-shot learning is learning to generalize.
1 code implementation • 22 Jan 2018 • Qimai Li, Zhichao Han, Xiao-Ming Wu
Many interesting problems in machine learning are being revisited with new deep learning tools.
Ranked #3 on Node Classification on Facebook
no code implementations • CVPR 2015 • Xiao-Ming Wu, Zhenguo Li, Shih-Fu Chang
Graph-based computer vision applications rely critically on similarity metrics which compute the pairwise similarity between any pair of vertices on graphs.
no code implementations • CVPR 2014 • Go Irie, Zhenguo Li, Xiao-Ming Wu, Shih-Fu Chang
Previous efforts in hashing aim to preserve data variance or pairwise affinity, but neither is adequate for capturing the manifold structures hidden in most visual data.
no code implementations • NeurIPS 2013 • Xiao-Ming Wu, Zhenguo Li, Shih-Fu Chang
We show that, either explicitly or implicitly, various well-known graph-based models exhibit a common significant \emph{harmonic} structure in their target functions -- the value of a vertex is approximately the weighted average of the values of its adjacent neighbors.
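This harmonic property -- each vertex value approximately the weighted average of its neighbors -- can be illustrated with a toy interpolation loop. This is an illustrative sketch of the harmonic structure itself, not the paper's algorithm.

```python
import numpy as np

def harmonic_interpolation(W, values, labeled, iters=200):
    """Repeatedly set each unlabeled vertex to the weighted average of its
    neighbors, clamping labeled vertices: the fixed point satisfies the
    harmonic property at every unlabeled vertex."""
    P = W / W.sum(axis=1, keepdims=True)   # row-normalized edge weights
    v = values.astype(float).copy()
    for _ in range(iters):
        v = np.where(labeled, values, P @ v)
    return v

# Path graph 0-1-2 with endpoints labeled 0 and 1: the middle vertex
# converges to the average of its two neighbors.
W = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
v = harmonic_interpolation(W, np.array([0.0, 0.0, 1.0]),
                           np.array([True, False, True]))
print(v)   # [0.  0.5 1. ]
```

On larger graphs, the same clamped averaging is the classical harmonic-function approach to semi-supervised label propagation, which is one of the model families the paper unifies.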
no code implementations • NeurIPS 2012 • Xiao-Ming Wu, Zhenguo Li, Anthony M. So, John Wright, Shih-Fu Chang
We prove that under proper absorption rates, a random walk starting from a set $\mathcal{S}$ of low conductance will be mostly absorbed in $\mathcal{S}$.
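A Monte-Carlo sketch of this claim: with a uniform absorption rate, walks started inside a low-conductance set $\mathcal{S}$ are mostly absorbed within $\mathcal{S}$. The toy graph, absorption rate, and function name below are illustrative assumptions, not the paper's construction (which allows non-uniform absorption rates).

```python
import numpy as np

def absorbed_in_S_prob(P, alpha, start, S, n_walks=2000, seed=0):
    """Estimate the probability that a random walk, absorbed at its current
    vertex with probability alpha at each step, terminates inside S."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_walks):
        v = start
        while rng.random() >= alpha:          # not absorbed: take a step
            v = rng.choice(len(P), p=P[v])
        hits += v in S
    return hits / n_walks

# Two triangles joined by a single edge: S = {0, 1, 2} has low conductance.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
P = A / A.sum(axis=1, keepdims=True)
p = absorbed_in_S_prob(P, alpha=0.3, start=0, S={0, 1, 2})
print(p)   # well above 0.5: most walks are absorbed inside S
```

The single bridge edge (2, 3) makes escaping $\mathcal{S}$ unlikely before absorption, which is the intuition behind using absorption probabilities to find low-conductance clusters.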
no code implementations • NeurIPS 2009 • Xiao-Ming Wu, Anthony M. So, Zhenguo Li, Shuo-Yen R. Li
In this paper, we show that a large class of kernel learning problems can be reformulated as semidefinite-quadratic-linear programs (SQLPs), which only contain a simple positive semidefinite constraint, a second-order cone constraint and a number of linear constraints.