no code implementations • 20 Apr 2024 • Yixuan Li, Xuelin Liu, Xiaoyang Wang, Shiqi Wang, Weisi Lin
Therefore, we propose FakeBench, the first-of-its-kind benchmark towards transparent defake, consisting of fake images paired with human language descriptions of forgery signs.
no code implementations • 19 Apr 2024 • Chia-Hsuan Chang, Xiaoyang Wang, Christopher C. Yang
By focusing on the predictive modeling of sepsis-related mortality, we propose a method that learns a performance-optimized predictive model and then applies transfer learning to produce a model with better fairness.
1 code implementation • 18 Apr 2024 • Fan Li, Xiaoyang Wang, Dawei Cheng, Wenjie Zhang, Ying Zhang, Xuemin Lin
Self-supervised learning (SSL) provides a promising alternative for representation learning on hypergraphs without costly labels.
no code implementations • 4 Apr 2024 • Mary M. Lucas, Xiaoyang Wang, Chia-Hsuan Chang, Christopher C. Yang, Jacqueline E. Braughton, Quyen M. Ngo
Fairness of machine learning models in healthcare has drawn increasing attention from clinicians, researchers, and even at the highest level of government.
1 code implementation • 2 Apr 2024 • Yuanyuan Lei, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Ruihong Huang, Dong Yu
To address this issue and make the summarizer express both sides of opinions, we introduce the concept of polarity calibration, which aims to align the polarity of output summary with that of input text.
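The calibration idea above can be made concrete with a toy sketch. This is not the paper's model: the lexicon-based `polarity` score and the `polarity_gap` objective below are hypothetical stand-ins for whatever polarity estimator the calibrated summarizer would use.

```python
# Toy sketch of polarity calibration (assumed form, not the paper's model):
# a hypothetical lexicon-based polarity score, and the gap a calibrated
# summarizer would minimize between input and output polarity.
POS = {"good", "great", "excellent", "praise"}
NEG = {"bad", "poor", "terrible", "criticize"}

def polarity(text):
    """Polarity in [-1, 1]: (positive - negative) / number of opinion words."""
    words = text.lower().split()
    pos = sum(w in POS for w in words)
    neg = sum(w in NEG for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def polarity_gap(source, summary):
    """Calibration objective: keep the summary's polarity close to the source's."""
    return abs(polarity(summary) - polarity(source))

src = "reviewers praise the battery but criticize the poor screen"
print(polarity_gap(src, "great battery"))              # one-sided summary: large gap
print(polarity_gap(src, "great battery poor screen"))  # balanced summary: small gap
```

A one-sided summary of mixed-opinion input produces a large gap, which calibration would penalize.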
no code implementations • 11 Mar 2024 • Xiaoyang Wang, Huihui Bai, Limin Yu, Yao Zhao, Jimin Xiao
Inspired by the low-density separation assumption in semi-supervised learning, our key insight is that feature density can shed light on the most promising direction for the segmentation classifier to explore, namely the regions with lower density.
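The density intuition can be sketched with a minimal nearest-neighbour estimator. This is an illustrative proxy, not the paper's estimator: features are scored by the inverse of their mean distance to their k nearest neighbours, so low scores mark the sparse, low-density regions the classifier should explore.

```python
import math

def knn_density(features, k=2):
    """Inverse mean distance to the k nearest neighbours as a density proxy."""
    scores = []
    for i, f in enumerate(features):
        dists = sorted(math.dist(f, g) for j, g in enumerate(features) if j != i)
        scores.append(1.0 / (sum(dists[:k]) / k + 1e-8))
    return scores

# Four tightly clustered features and one isolated feature:
feats = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
scores = knn_density(feats)
print(scores.index(min(scores)))  # 4 -- the isolated feature has the lowest density
```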
no code implementations • 6 Mar 2024 • Yizheng Gong, Siyue Yu, Xiaoyang Wang, Jimin Xiao
Based on these findings, we propose CoMasTRe by disentangling continual segmentation into two stages: forgetting-resistant continual objectness learning and well-researched continual classification.
no code implementations • 6 Mar 2024 • Yebowen Hu, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Hassan Foroosh, Dong Yu, Fei Liu
Our analytical reasoning tasks ask large language models to count how many points each team scores in each quarter of NBA and NFL games.
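The ground-truth quantity behind this task is a simple tally. The sketch below is a hypothetical reference computation, not the benchmark's code: given play-by-play rows of (team, quarter, points), it counts each team's points per quarter, which is what the LLM must reason out from the game narrative.

```python
from collections import defaultdict

def points_per_quarter(plays):
    """Tally each team's points in each quarter from play-by-play rows."""
    totals = defaultdict(int)
    for team, quarter, points in plays:
        totals[(team, quarter)] += points
    return dict(totals)

plays = [("LAL", 1, 3), ("LAL", 1, 2), ("BOS", 1, 2), ("LAL", 2, 2)]
print(points_per_quarter(plays))  # {('LAL', 1): 5, ('BOS', 1): 2, ('LAL', 2): 2}
```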
no code implementations • 15 Feb 2024 • Yebowen Hu, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Hassan Foroosh, Dong Yu, Fei Liu
In this paper, we introduce four novel tasks centered around sports data analytics to evaluate the numerical reasoning and information fusion capabilities of LLMs.
1 code implementation • 31 Jan 2024 • Hongpeng Guo, Haotian Gu, Xiaoyang Wang, Bo Chen, Eun Kyung Lee, Tamar Eilam, Deming Chen, Klara Nahrstedt
Federated learning (FL) is a machine learning paradigm that allows multiple clients to collaboratively train a shared model while keeping their data on-premise.
no code implementations • 31 Jan 2024 • Sangwoo Cho, Kaiqiang Song, Chao Zhao, Xiaoyang Wang, Dong Yu
Multi-turn dialogues are characterized by their extended length and the presence of turn-taking conversations.
1 code implementation • 22 Jan 2024 • Xinqiao Zhao, Feilong Tang, Xiaoyang Wang, Jimin Xiao
Specifically, we leverage the class prototypes that carry positive shared features and propose a Multi-Scaled Distribution-Weighted (MSDW) consistency loss for narrowing the gap between the CAMs generated through classifier weights and class prototypes during training.
1 code implementation • 7 Jan 2024 • Yiwei Qin, Kaiqiang Song, Yebowen Hu, Wenlin Yao, Sangwoo Cho, Xiaoyang Wang, Xuansheng Wu, Fei Liu, PengFei Liu, Dong Yu
This paper introduces the Decomposed Requirements Following Ratio (DRFR), a new metric for evaluating Large Language Models' (LLMs) ability to follow instructions.
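A minimal reading of the DRFR metric can be written in a few lines. This assumes each instruction is decomposed into binary requirements judged individually; the judging step itself (which requirement a response satisfies) is out of scope here.

```python
def drfr(requirement_results):
    """Decomposed Requirements Following Ratio: the fraction of
    decomposed requirements the response satisfies."""
    if not requirement_results:
        raise ValueError("need at least one requirement")
    return sum(requirement_results) / len(requirement_results)

# "Write 3 bullet points, in French, under 50 words" -> 3 decomposed
# requirements, of which the response meets 2:
print(drfr([True, True, False]))  # 0.666...
```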
no code implementations • 14 Dec 2023 • Kaiqiang Song, Xiaoyang Wang, Sangwoo Cho, Xiaoman Pan, Dong Yu
This paper introduces a novel approach to enhance the capabilities of Large Language Models (LLMs) in processing and understanding extensive text sequences, a critical aspect in applications requiring deep comprehension and synthesis of large volumes of information.
6 code implementations • 15 Nov 2023 • Fuxiao Liu, Xiaoyang Wang, Wenlin Yao, Jianshu Chen, Kaiqiang Song, Sangwoo Cho, Yaser Yacoob, Dong Yu
Recognizing the need for a comprehensive evaluation of LMM chart understanding, we also propose a MultiModal Chart Benchmark (MMC-Benchmark), a comprehensive human-annotated benchmark with nine distinct tasks evaluating reasoning capabilities over charts.
1 code implementation • 30 Sep 2023 • Xuansheng Wu, Wenlin Yao, Jianshu Chen, Xiaoman Pan, Xiaoyang Wang, Ninghao Liu, Dong Yu
In this work, we investigate how the instruction tuning adjusts pre-trained models with a focus on intrinsic changes.
no code implementations • 8 Sep 2023 • Haopeng Zhang, Sangwoo Cho, Kaiqiang Song, Xiaoyang Wang, Hongwei Wang, Jiawei Zhang, Dong Yu
SRI balances the importance and diversity of a subset of sentences from the source documents and can be calculated in unsupervised and adaptive manners.
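The importance/diversity trade-off behind SRI can be illustrated with a toy subset score. This is an assumed form using word overlap as a stand-in for the paper's actual scoring: importance rewards overlap with the source document, while a redundancy penalty discourages choosing similar sentences.

```python
def jaccard(a, b):
    """Word-overlap similarity between two sentences."""
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

def subset_score(sentences, document, alpha=0.5):
    # Importance: average overlap of each chosen sentence with the document.
    importance = sum(jaccard(s, document) for s in sentences) / len(sentences)
    # Redundancy: average pairwise overlap inside the chosen subset.
    pairs = [(i, j) for i in range(len(sentences))
             for j in range(i + 1, len(sentences))]
    redundancy = (sum(jaccard(sentences[i], sentences[j]) for i, j in pairs)
                  / len(pairs)) if pairs else 0.0
    return importance - alpha * redundancy

doc = "a b c d"
print(subset_score(["a b", "c d"], doc))  # diverse subset: 0.5
print(subset_score(["a b", "a b"], doc))  # redundant subset: 0.0
```

Both subsets cover the same fraction of the document, but the redundant one is penalized, matching the unsupervised selection behaviour the snippet describes.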
no code implementations • 1 Aug 2023 • Jiaao Chen, Xiaoman Pan, Dian Yu, Kaiqiang Song, Xiaoyang Wang, Dong Yu, Jianshu Chen
Compositional generalization empowers the LLMs to solve problems that are harder than the ones they have seen (i.e., easy-to-hard generalization), which is a critical reasoning capability of human-like intelligence.
Ranked #18 on Math Word Problem Solving on MATH
1 code implementation • 8 Jun 2023 • Shanshan Han, Baturalp Buyukates, Zijian Hu, Han Jin, Weizhao Jin, Lichao Sun, Xiaoyang Wang, Wenxuan Wu, Chulin Xie, Yuhang Yao, Kai Zhang, Qifan Zhang, Yuhui Zhang, Carlee Joe-Wong, Salman Avestimehr, Chaoyang He
This paper introduces FedSecurity, an end-to-end benchmark designed to simulate adversarial attacks and corresponding defense mechanisms in Federated Learning (FL).
no code implementations • 2 Jun 2023 • Canjia Li, Xiaoyang Wang, Dongdong Li, Yiding Liu, Yu Lu, Shuaiqiang Wang, Zhicong Cheng, Simiu Gu, Dawei Yin
In this work, we focus on ranking user satisfaction rather than relevance in web search, and propose a PLM-based framework, namely SAT-Ranker, which comprehensively models different dimensions of user satisfaction in a unified manner.
no code implementations • 24 May 2023 • Yebowen Hu, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Hassan Foroosh, Fei Liu
Human preference judgments are pivotal in guiding large language models (LLMs) to produce outputs that align with human values.
no code implementations • 24 Feb 2023 • Mohammud J. Bocus, Xiaoyang Wang, Robert J. Piechocki
This paper presents a novel approach for multimodal data fusion based on the Vector-Quantized Variational Autoencoder (VQVAE) architecture.
no code implementations • ICCV 2023 • Zheng Fang, Xiaoyang Wang, Haocheng Li, Jiejie Liu, Qiugui Hu, Jimin Xiao
In this paper, we propose a few-shot anomaly detection strategy that works in a low-data regime and can generalize across products at no cost.
1 code implementation • CVPR 2023 • Xiaoyang Wang, Bingfeng Zhang, Limin Yu, Jimin Xiao
Inspired by density-based unsupervised clustering, we propose to leverage feature density to locate sparse regions within feature clusters defined by label and pseudo labels.
1 code implementation • 19 Dec 2022 • Xianjun Yang, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Xiaoman Pan, Linda Petzold, Dong Yu
Specifically, zero/few-shot and fine-tuning results show that the model pre-trained on our corpus demonstrates a strong aspect or query-focused generation ability compared with the backbone model.
1 code implementation • 28 Oct 2022 • Sangwoo Cho, Kaiqiang Song, Xiaoyang Wang, Fei Liu, Dong Yu
The problem is only exacerbated by a lack of segmentation in transcripts of audio/video recordings.
Ranked #5 on Text Summarization on PubMed
1 code implementation • 22 Oct 2022 • Fei Wang, Kaiqiang Song, Hongming Zhang, Lifeng Jin, Sangwoo Cho, Wenlin Yao, Xiaoyang Wang, Muhao Chen, Dong Yu
Recent literature adds extractive summaries as guidance for abstractive summarization models to provide hints of salient content and achieves better performance.
Ranked #7 on Abstractive Text Summarization on CNN / Daily Mail
1 code implementation • 21 Oct 2022 • Yue Yang, Wenlin Yao, Hongming Zhang, Xiaoyang Wang, Dong Yu, Jianshu Chen
Large-scale pretrained language models have made significant advances in solving downstream language understanding tasks.
Ranked #2 on Visual Commonsense Tests on ViComTe-color
no code implementations • 4 Oct 2022 • Xiaoyang Wang, Dimitrios Dimitriadis, Sanmi Koyejo, Shruti Tople
Federated learning enables training high-utility models across several clients without directly sharing their private data.
no code implementations • 13 Sep 2022 • Hakan Erdol, Xiaoyang Wang, Peizheng Li, Jonathan D. Thomas, Robert Piechocki, George Oikonomou, Rui Inacio, Abdelrahim Ahmad, Keith Briggs, Shipra Kapoor
In order to provide such services, 5G systems will support various combinations of access technologies such as LTE, NR, NR-U and Wi-Fi.
no code implementations • 31 Aug 2022 • Peizheng Li, Hakan Erdol, Keith Briggs, Xiaoyang Wang, Robert Piechocki, Abdelrahim Ahmad, Rui Inacio, Shipra Kapoor, Angela Doufexi, Arjun Parekh
The model will also be used as the base model for adaptive training in the new environment.
no code implementations • 3 Aug 2022 • Robert J. Piechocki, Xiaoyang Wang, Mohammud J. Bocus
In the second stage, the generative model serves as a reconstruction prior and the search manifold for the sensor fusion tasks.
no code implementations • 27 Jun 2022 • Peizheng Li, Xiaoyang Wang, Robert Piechocki, Shipra Kapoor, Angela Doufexi, Arjun Parekh
Measuring customer experience on mobile data is of utmost importance for global mobile operators.
no code implementations • 8 Jun 2022 • Peizheng Li, Jonathan Thomas, Xiaoyang Wang, Hakan Erdol, Abdelrahim Ahmad, Rui Inacio, Shipra Kapoor, Arjun Parekh, Angela Doufexi, Arman Shojaeifard, Robert Piechocki
One of the main reasons is the modelling gap between the simulation and the real environment, which could make the RL agent trained by simulation ill-equipped for the real environment.
1 code implementation • ACL 2022 • Kaiqiang Song, Chen Li, Xiaoyang Wang, Dong Yu, Fei Liu
Summarization of podcast transcripts is of practical benefit to both content providers and consumers.
no code implementations • 12 Nov 2021 • Peizheng Li, Jonathan Thomas, Xiaoyang Wang, Ahmed Khalil, Abdelrahim Ahmad, Rui Inacio, Shipra Kapoor, Arjun Parekh, Angela Doufexi, Arman Shojaeifard, Robert Piechocki
We provide a taxonomy for the challenges faced by ML/RL models throughout the development life-cycle: from the system specification to production deployment (data acquisition, model design, testing and management, etc.).
no code implementations • 1 Nov 2021 • Zhe Zhou, Cong Li, Xuechao Wei, Xiaoyang Wang, Guangyu Sun
However, to realize efficient GNN training is challenging, especially on large graphs.
no code implementations • 29 Sep 2021 • Xiaoyang Wang, Han Zhao, Klara Nahrstedt, Oluwasanmi O Koyejo
To this end, we propose a strategy to mitigate the effect of spurious features based on our observation that the global model in the federated learning step has a low accuracy disparity due to statistical heterogeneity.
no code implementations • 8 Sep 2021 • Dan Su, Jiqiang Liu, Sencun Zhu, Xiaoyang Wang, Wei Wang, Xiangliang Zhang
In this work, we propose AppQ, a novel app quality grading and recommendation system that extracts inborn features of apps based on app source code.
1 code implementation • 24 Jul 2021 • Zhenguang Liu, Peng Qian, Xiaoyang Wang, Yuan Zhuang, Lin Qiu, Xun Wang
Then, we propose a novel temporal message propagation network to extract the graph feature from the normalized graph, and combine the graph feature with designed expert patterns to yield a final detection system.
no code implementations • 3 Mar 2021 • Xiaoyang Wang, Chen Li, Jianqiao Zhao, Dong Yu
To facilitate the research on this corpus, we provide results of several benchmark models.
no code implementations • 3 Mar 2021 • Xiaoyang Wang, Jonathan D Thomas, Robert J Piechocki, Shipra Kapoor, Raul Santos-Rodriguez, Arjun Parekh
Open Radio Access Network (ORAN) is being developed with an aim to democratise access and lower the cost of future mobile data networks, supporting network services with various QoS requirements, such as massive IoT and URLLC.
no code implementations • 15 Jan 2021 • Xiaoyang Wang, Bo Li, Yibo Zhang, Bhavya Kailkhura, Klara Nahrstedt
However, these AutoML pipelines only focus on improving the learning accuracy of benign samples while ignoring the ML model robustness under adversarial attacks.
1 code implementation • 1st Conference on Causal Learning and Reasoning 2022 • Xiaoyang Wang, Klara Nahrstedt, Oluwasanmi O Koyejo
Current approaches for learning disentangled representations assume that independent latent variables generate the data through a single data generation process.
no code implementations • 9 Nov 2020 • Kaiqiang Song, Chen Li, Xiaoyang Wang, Dong Yu, Fei Liu
Instead, we investigate several less-studied aspects of neural abstractive summarization, including (i) the importance of selecting important segments from transcripts to serve as input to the summarizer; (ii) striking a balance between the amount and quality of training instances; (iii) the appropriate summary length and start/end points.
1 code implementation • TKDE 2020 • Dawei Cheng, Xiaoyang Wang, Ying Zhang, Liqing Zhang
But manually engineering features requires domain knowledge and may lag behind the modus operandi of fraud, which means we need to automatically focus on the most relevant fraudulent behavior patterns in the online detection system.
5 code implementations • 27 Jul 2020 • Chaoyang He, Songze Li, Jinhyun So, Xiao Zeng, Mi Zhang, Hongyi Wang, Xiaoyang Wang, Praneeth Vepakomma, Abhishek Singh, Hang Qiu, Xinghua Zhu, Jianzong Wang, Li Shen, Peilin Zhao, Yan Kang, Yang Liu, Ramesh Raskar, Qiang Yang, Murali Annavaram, Salman Avestimehr
Federated learning (FL) is a rapidly growing research field in machine learning.
no code implementations • ACL 2020 • Zhenyi Wang, Xiaoyang Wang, Bang An, Dong Yu, Changyou Chen
Text generation from a knowledge base aims to translate knowledge triples to natural language descriptions.
no code implementations • 4 Oct 2019 • Hong Jiang, Jong-Hoon Ahn, Xiaoyang Wang
We will develop a theoretical framework to characterize the signals that can be robustly recovered from their observations by an ML algorithm, and establish a Lipschitz condition on signals and observations that is both necessary and sufficient for the existence of a robust recovery.
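One plausible form of such a bi-Lipschitz-style condition, in assumed notation (the symbols below are not taken from the paper), is:

```latex
% With observation map \Phi and a constant L > 0, robust recovery of a
% signal x from its observation \Phi(x) corresponds to a condition of the form
\| x_1 - x_2 \| \;\le\; L \, \| \Phi(x_1) - \Phi(x_2) \|
\qquad \text{for all signals } x_1, x_2 ,
% i.e., nearby observations can only arise from nearby signals, so any
% recovery algorithm's signal error is controlled by the observation error.
```

Under such a condition, small perturbations of the observation can change the recovered signal by at most a factor of L, which is the sense of robustness the snippet describes.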
no code implementations • 1 Jul 2019 • Xiaoyang Wang, Ioannis Mavromatis, Andrea Tassi, Raul Santos-Rodriguez, Robert J. Piechocki
Future Connected and Automated Vehicles (CAV), and more generally ITS, will form a highly interconnected system.
no code implementations • 9 Nov 2018 • Yang Fu, Xiaoyang Wang, Yunchao Wei, Thomas Huang
Thus, a more robust clip-level feature representation can be generated according to a weighted sum operation guided by the mined 2-D attention score matrix.
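The weighted-sum step can be sketched minimally (assumed shapes, plain lists instead of tensors): attention scores over the frames are normalized and used to pool per-frame features into a single clip-level feature.

```python
def clip_feature(frame_feats, attn):
    """Pool frame features into one clip feature via normalized attention."""
    total = sum(attn)
    weights = [a / total for a in attn]  # normalize scores to a distribution
    dim = len(frame_feats[0])
    return [sum(w * f[d] for w, f in zip(weights, frame_feats))
            for d in range(dim)]

# Two frames with 2-D features; the second frame gets 3x the attention:
print(clip_feature([[1.0, 0.0], [0.0, 1.0]], [1.0, 3.0]))  # [0.25, 0.75]
```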
Large-Scale Person Re-Identification • Video-Based Person Re-Identification
no code implementations • 25 Jun 2018 • Shujian Yu, Xiaoyang Wang, Jose C. Principe
In this paper, a novel Hierarchical Hypothesis Testing framework with Request-and-Reverify strategy is developed to detect concept drifts by requesting labels only when necessary.
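The two-layer control flow can be skeletonized as follows. The interfaces here are hypothetical, not the paper's statistical tests: a label-free screen runs on every window, labels are requested only when it fires, and a second test re-verifies the drift on those labels before it is reported.

```python
def request_and_reverify(windows, screen, request_labels, verify):
    """Two-layer drift detection: request labels only when the
    label-free screen fires, then re-verify before reporting."""
    drifts = []
    for t, w in enumerate(windows):
        if screen(w):               # layer 1: cheap, unlabeled check
            y = request_labels(w)   # costly label query, only when needed
            if verify(w, y):        # layer 2: confirm drift on labels
                drifts.append(t)
    return drifts

# Toy run: the screen fires on windows > 1; re-verification keeps only 3.
hits = request_and_reverify([1, 2, 3],
                            screen=lambda w: w > 1,
                            request_labels=lambda w: w,
                            verify=lambda w, y: y == 3)
print(hits)  # [2]
```

The point of the hierarchy is that the expensive labeled test runs only on windows flagged by the cheap one, which is how the framework keeps label requests "only when necessary".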
no code implementations • CVPR 2015 • Xiaoyang Wang, Qiang Ji
Video event recognition still faces great challenges due to large intra-class variation and low image resolution, in particular for surveillance videos.
no code implementations • The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014 • Xiaoyang Wang, Qiang Ji
These three levels of context provide crucial bottom-up, middle level, and top down information that can benefit the recognition task itself.
Ranked #1 on Action Recognition on VIRAT Ground 2.0