no code implementations • CCL 2021 • Hao Wang, Junhui Li, ZhengXian Gong
"In Chinese and other languages with pro-drop conventions, pronouns that can be inferred from context are usually omitted. Although neural machine translation models, represented by the Transformer, have achieved great success, this omission phenomenon still poses a significant challenge to them. Building on the Transformer, this paper proposes a translation model that integrates zero-anaphora recognition and introduces document-level context to enrich anaphoric information. Specifically, the model adopts a joint-learning framework: on top of the translation model, it adds a classification task that identifies the syntactic role of the omitted pronoun within the sentence, allowing the model to incorporate zero-anaphora information to assist translation. Experiments on a Chinese-English dialogue dataset verify the effectiveness of the proposed method, which improves translation performance by 1.48 BLEU over the baseline model."
no code implementations • NLP4ConvAI (ACL) 2022 • Tong Zhang, Yong Liu, Boyang Li, Peixiang Zhong, Chen Zhang, Hao Wang, Chunyan Miao
Conversational Recommendation Systems recommend items through language based interactions with users. In order to generate naturalistic conversations and effectively utilize knowledge graphs (KGs) containing background information, we propose a novel Bag-of-Entities loss, which encourages the generated utterances to mention concepts related to the item being recommended, such as the genre or director of a movie.
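The Bag-of-Entities idea (rewarding generated utterances that mention KG concepts tied to the recommended item) can be sketched as an auxiliary loss. The function name and toy vocabulary below are illustrative assumptions, not the paper's implementation:

```python
import math

def bag_of_entities_loss(token_probs, entity_token_ids):
    """Hypothetical sketch of a Bag-of-Entities-style loss.

    token_probs: per-step probability distributions over the vocabulary
        (each a dict mapping token_id -> probability).
    entity_token_ids: ids of KG entities related to the recommended item
        (e.g. the movie's genre or director).

    For each related entity, take its maximum probability across all
    decoding steps and penalize the model when that probability is low,
    encouraging at least one step to mention the entity.
    """
    loss = 0.0
    for ent in entity_token_ids:
        p_max = max(step.get(ent, 0.0) for step in token_probs)
        loss -= math.log(p_max + 1e-9)  # cross-entropy-style penalty
    return loss / max(len(entity_token_ids), 1)

# Two decoding steps over a toy vocabulary; tokens 2 and 3 are
# entities related to the recommended movie.
probs = [{0: 0.7, 2: 0.3}, {1: 0.1, 3: 0.9}]
print(bag_of_entities_loss(probs, [2, 3]))
```

When every related entity already receives probability near 1 at some step, the penalty vanishes, so the term only nudges generation toward on-topic mentions.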
1 code implementation • ECCV 2020 • Xiangyu Zhu, Fan Yang, Di Huang, Chang Yu, Hao Wang, Jianzhu Guo, Zhen Lei, Stan Z. Li
However, most of their training data is constructed with the 3D Morphable Model, whose spanned space covers only a small part of the full shape space.
no code implementations • 5 Jun 2024 • Tingjia Shen, Hao Wang, Jiaqing Zhang, Sirui Zhao, Liangyue Li, Zulong Chen, Defu Lian, Enhong Chen
To this end, we propose a novel framework named URLLM, which aims to improve the CDSR performance by exploring the User Retrieval approach and domain grounding on LLM simultaneously.
1 code implementation • 5 Jun 2024 • Mingyuan Li, Tong Jia, Hui Lu, Bowen Ma, Hao Wang, Dongyue Chen
Prohibited Item detection in X-ray images is one of the most effective security inspection methods. However, differing from natural light images, the unique overlapping phenomena in X-ray images lead to the coupling of foreground and background features, thereby lowering the accuracy of general object detectors. Therefore, we propose a Multi-Class Min-Margin Contrastive Learning (MMCL) method that, by clarifying the category semantic information of content queries under the deformable DETR architecture, aids the model in extracting specific category foreground information from coupled features. Specifically, after grouping content queries by the number of categories, we employ the Multi-Class Inter-Class Exclusion (MIE) loss to push apart content queries from different groups.
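The MIE loss described above, which pushes apart content queries belonging to different category groups, can be sketched as a hinge-style repulsion term. The function name, margin value, and toy embeddings below are illustrative assumptions, not the paper's code:

```python
import math

def mie_loss(queries, groups, margin=1.0):
    """Hypothetical sketch of a Multi-Class Inter-Class Exclusion loss.

    queries: content-query embeddings (lists of floats).
    groups:  group (category) index of each query.

    Pairs of queries from different groups are pushed apart until their
    Euclidean distance exceeds `margin`; pairs already farther apart
    contribute nothing.
    """
    loss, pairs = 0.0, 0
    for i in range(len(queries)):
        for j in range(i + 1, len(queries)):
            if groups[i] != groups[j]:
                d = math.dist(queries[i], queries[j])
                loss += max(0.0, margin - d)  # penalize only close pairs
                pairs += 1
    return loss / max(pairs, 1)

# Three queries: the first two belong to different categories and sit
# close together, so they incur a penalty; the third is already far away.
q = [[0.0, 0.0], [0.3, 0.0], [5.0, 0.0]]
print(mie_loss(q, [0, 1, 1]))
```

Minimizing this term separates the groups of content queries, which is the stated goal of disentangling category-specific foreground information.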
no code implementations • 4 Jun 2024 • Kun Zhou, Shengkui Zhao, Yukun Ma, Chong Zhang, Hao Wang, Dianwen Ng, Chongjia Ni, Nguyen Trung Hieu, Jia Qi Yip, Bin Ma
Recent language model-based text-to-speech (TTS) frameworks demonstrate scalability and in-context learning capabilities.
no code implementations • 31 May 2024 • Xinxi Zhang, Song Wen, Ligong Han, Felix Juefei-Xu, Akash Srivastava, Junzhou Huang, Hao Wang, Molei Tao, Dimitris N. Metaxas
We introduce Spectral Orthogonal Decomposition Adaptation (SODA), which balances computational efficiency and representation capacity.
no code implementations • 30 May 2024 • Songning Lai, Ninghui Feng, Haochen Sui, Ze Ma, Hao Wang, Zichen Song, Hang Zhao, Yutao Yue
The field of time series forecasting has garnered significant attention in recent years, prompting the development of advanced models like TimeSieve, which demonstrates impressive performance.
no code implementations • 30 May 2024 • Xiaoyu Wu, Jiaru Zhang, Yang Hua, Bohan Lyu, Hao Wang, Tao Song, Haibing Guan
Through this modeling, we identify the primary cause of this corruption stage: a narrowed learning distribution inherent in the nature of few-shot fine-tuning.
no code implementations • 29 May 2024 • Kaveh Alimohammadi, Hao Wang, Ojas Gulati, Akash Srivastava, Navid Azizan
Existing differentially private (DP) synthetic data generation mechanisms typically assume a single-source table.
no code implementations • 28 May 2024 • Youlong Ding, Xueyang Wu, Yining Meng, Yonggang Luo, Hao Wang, Weike Pan
Deep learning with differential privacy (DP) has garnered significant attention over the past years, leading to the development of numerous methods aimed at enhancing model accuracy and training efficiency.
no code implementations • 28 May 2024 • Mingjia Yin, Hao Wang, Wei Guo, Yong Liu, Suojuan Zhang, Sirui Zhao, Defu Lian, Enhong Chen
The sequential recommender (SR) system is a crucial component of modern recommender systems, as it aims to capture the evolving preferences of users.
1 code implementation • 27 May 2024 • Tianshu Wang, Hongyu Lin, Xiaoyang Chen, Xianpei Han, Hao Wang, Zhenyu Zeng, Le Sun
Based on our findings, we further design a compositional entity matching (ComEM) framework that leverages the composition of multiple strategies and LLMs.
no code implementations • 26 May 2024 • Hao Wang, Jianwei Li, Zhengyu Li
In conclusion, the AI-generated text detection model based on the BERT algorithm proposed in this study shows high accuracy and stability in experiments, providing an effective solution for related fields.
1 code implementation • 26 May 2024 • Xijie Huang, Xinyuan Wang, Hantao Zhang, Jiawen Xi, Jingkun An, Hao Wang, Chengwei Pan
Security concerns related to Large Language Models (LLMs) have been extensively explored, yet the safety implications for Multimodal Large Language Models (MLLMs), particularly in medical contexts (MedMLLMs), remain insufficiently studied.
1 code implementation • 23 May 2024 • Zhuowei Li, Zihao Xu, Ligong Han, Yunhe Gao, Song Wen, Di Liu, Hao Wang, Dimitris N. Metaxas
In-context Learning (ICL) empowers large language models (LLMs) to adapt to unseen tasks during inference by prefixing a few demonstration examples prior to test queries.
no code implementations • 21 May 2024 • Mingjia Yin, Hao Wang, Wei Guo, Yong Liu, Zhi Li, Sirui Zhao, Defu Lian, Enhong Chen
Cross-domain sequential recommendation (CDSR) aims to uncover and transfer users' sequential preferences across multiple recommendation domains.
no code implementations • 14 May 2024 • Hao Wang, Nao Li
To verify that the FiiNet model can dynamically learn the importance of feature-interaction combinations in a fine-grained manner and improve recommendation performance and interpretability, this paper compares it with many click-through-rate prediction models on two real datasets. The results show that FiiNet, which incorporates the selective kernel network, effectively improves recommendation quality and provides better interpretability.
no code implementations • 9 May 2024 • Hao Wang, Angel E. Rodriguez-Fernandez, Lourdes Uribe, André Deutz, Oziel Cortés-Piña, Oliver Schütze
In this work, we propose a set-based Newton method for Hausdorff approximations of the Pareto front to be used within multi-objective evolutionary algorithms.
no code implementations • 7 May 2024 • Hao Wu, Ruochong Li, Hao Wang, Hui Xiong
To address this issue, we propose COM3D, making the first attempt to exploit the cross-view correspondence and cross-modal mining to enhance the retrieval performance.
1 code implementation • 6 May 2024 • Xiwen Chen, Peijie Qiu, Wenhui Zhu, Huayu Li, Hao Wang, Aristeidis Sotiras, Yalin Wang, Abolfazl Razi
Deep neural networks, including transformers and convolutional neural networks, have significantly improved multivariate time series classification (MTSC).
no code implementations • 2 May 2024 • Hao Wang, Tetsuro Morimura, Ukyo Honda, Daisuke Kawahara
Non-autoregressive (NAR) language models are known for their low latency in neural machine translation (NMT).
1 code implementation • 1 May 2024 • Yucheng Shi, Alexandros Agapitos, David Lynch, Giorgio Cruciata, Cengis Hasan, Hao Wang, Yayu Yao, Aleksandar Milenovic
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours that trade off between multiple, possibly conflicting, objectives.
no code implementations • 30 Apr 2024 • Cengis Hasan, Alexandros Agapitos, David Lynch, Alberto Castagna, Giorgio Cruciata, Hao Wang, Aleksandar Milenovic
We present a method that addresses the pain point of long lead-time required to deploy cell-level parameter optimisation policies to new wireless network sites.
1 code implementation • 29 Apr 2024 • Chuni Liu, Boyuan Ma, Xiaojuan Ban, Yujie Xie, Hao Wang, Weihua Xue, Jingchao Ma, Ke Xu
Topological consistency plays a crucial role in the task of boundary segmentation for reticular images, such as cell membrane segmentation in neuron electron microscopic images, grain boundary segmentation in material microscopic images and road segmentation in aerial images.
no code implementations • 27 Apr 2024 • Chenghao Huang, Xiaolu Chen, Yanru Zhang, Hao Wang
FedCRL introduces contrastive representation learning (CRL) on shared representations to facilitate knowledge acquisition of clients.
1 code implementation • 25 Apr 2024 • Hao Wang, Jiayou Qin, Xiwen Chen, Ashish Bastola, John Suchanek, Zihao Gong, Abolfazl Razi
Furthermore, in the experiments, we present a qualitative analysis of motor focus estimation, comparing the conventional dense-optical-flow-based method with the proposed method.
2 code implementations • 25 Apr 2024 • Haizhou Shi, Zihao Xu, Hengyi Wang, Weiyi Qin, Wenyuan Wang, Yibin Wang, Hao Wang
In this survey, we provide a comprehensive overview of the current research progress on LLMs within the context of CL.
1 code implementation • 24 Apr 2024 • Zhuoqun Li, Hongyu Lin, Tianshu Wang, Boxi Cao, Yaojie Lu, Weixiang Zhou, Hao Wang, Zhenyu Zeng, Le Sun, Xianpei Han
Linking a claim to grounded references is a critical ability to fulfill human demands for authentic and reliable information.
no code implementations • 22 Apr 2024 • Hao Wang, Qingshan Xu, Hongyuan Chen, Rui Ma
In this work, we introduce PGAHum, a prior-guided geometry and appearance learning framework for high-fidelity animatable human reconstruction.
no code implementations • 13 Apr 2024 • Shan Gao, Amit K. Chakraborty, Russell Greiner, Mark A. Lewis, Hao Wang
In summary, we showed that there are statistical features that distinguish outbreak and non-outbreak sequences long before outbreaks occur.
1 code implementation • 7 Apr 2024 • Hao Wang, Yanping Chen, Weizhe Yang, Yongbin Qin, Ruizhang Huang
The results indicate that two-dimensional feature engineering can take advantage of a two-dimensional sentence representation and make full use of prior knowledge in traditional feature engineering.
no code implementations • 6 Apr 2024 • Siyuan Tian, Hao Wang, Yiren Rong, Junhao Wang, Renjie Dai, Zhengxiao He
Modern displays possess the capability to render video content with a high dynamic range (HDR) and an extensive color gamut. However, the majority of available resources are still in standard dynamic range (SDR).
no code implementations • 30 Mar 2024 • Luankang Zhang, Hao Wang, Suojuan Zhang, Mingjia Yin, Yongqiang Han, Jiaqing Zhang, Defu Lian, Enhong Chen
To this end, we propose a Unified Framework for Adaptive Representation Enhancement and Inversed Learning in Cross-Domain Recommendation (AREIL).
1 code implementation • 26 Mar 2024 • Jinyi Li, Yihuai Lan, Lei Wang, Hao Wang
Prompt compression is an innovative method for efficiently condensing input prompts while preserving essential information.
no code implementations • 26 Mar 2024 • Yongqiang Han, Hao Wang, Kefan Wang, Likang Wu, Zhi Li, Wei Guo, Yong Liu, Defu Lian, Enhong Chen
In recommendation systems, users frequently engage in multiple types of behaviors, such as clicking, adding to a cart, and purchasing.
1 code implementation • 25 Mar 2024 • Xiaoxuan Yu, Hao Wang, Weiming Li, Qiang Wang, SoonYong Cho, Younghun Sung
In this work, we propose a novel Disentangled Object-Centric TRansformer (DOCTR) that explores object-centric representation to facilitate learning with multiple objects for the multiple sub-tasks in a unified manner.
1 code implementation • 24 Mar 2024 • Amit K. Chakraborty, Shan Gao, Reza Miry, Pouria Ramazi, Russell Greiner, Mark A. Lewis, Hao Wang
The timely detection of disease outbreaks through reliable early warning signals (EWSs) is indispensable for effective public health mitigation strategies.
no code implementations • 23 Mar 2024 • Hao Wang, Tang Li, Chenhui Chu, Nengjun Zhu, Rui Wang, Pinpin Zhu
This approach aims to generate relation representations that are more aware of the spatial context and of unseen relations, in a manner similar to human perception.
no code implementations • 20 Mar 2024 • Canchen Jiang, Hao Wang
Community battery systems have been widely deployed to provide services to the grid.
no code implementations • 20 Mar 2024 • Jiarong Fan, Ariel Liebman, Hao Wang
The increasing integration of electric vehicles (EVs) into the grid can pose a significant risk to the distribution system operation in the absence of coordination.
no code implementations • 20 Mar 2024 • Fucai Ke, Hao Wang
To address this research gap, inspired by the concept of non-intrusive load monitoring (NILM), we develop a home charging prediction method using historical smart meter data.
1 code implementation • 19 Mar 2024 • Hao Wang, Jiayou Qin, Ashish Bastola, Xiwen Chen, John Suchanek, Zihao Gong, Abolfazl Razi
This paper explores the potential of Large Language Models(LLMs) in zero-shot anomaly detection for safe visual navigation.
no code implementations • 17 Mar 2024 • Xiaoyu Wu, Yang Hua, Chumeng Liang, Jiaru Zhang, Hao Wang, Tao Song, Haibing Guan
In response, we present Contrasting Gradient Inversion for Diffusion Models (CGI-DM), a novel method featuring vivid visual representations for digital copyright authentication.
2 code implementations • 12 Mar 2024 • Zhicheng Guo, Sijie Cheng, Hao Wang, Shihao Liang, Yujia Qin, Peng Li, Zhiyuan Liu, Maosong Sun, Yang Liu
The virtual API server contains a caching system and API simulators, which are complementary and alleviate changes in API status.
2 code implementations • 12 Mar 2024 • Abdul Fatir Ansari, Lorenzo Stella, Caner Turkmen, Xiyuan Zhang, Pedro Mercado, Huibin Shen, Oleksandr Shchur, Syama Sundar Rangapuram, Sebastian Pineda Arango, Shubham Kapoor, Jasper Zschiegner, Danielle C. Maddix, Hao Wang, Michael W. Mahoney, Kari Torkkola, Andrew Gordon Wilson, Michael Bohlke-Schneider, Yuyang Wang
We introduce Chronos, a simple yet effective framework for pretrained probabilistic time series models.
no code implementations • 8 Mar 2024 • Jiajie Fan, Amal Trigui, Thomas Bäck, Hao Wang
As such, FID might not be suitable to assess the performance of DGMs for a generative design task.
no code implementations • 8 Mar 2024 • Yunhao Li, Hao Wang, Xue Ma, Jiali Yao, Shaohua Dong, Heng Fan, Libo Zhang
Current multi-object tracking (MOT) aims to predict trajectories of targets (i.e., "where") in videos.
1 code implementation • 7 Mar 2024 • Mingyuan Li, Tong Jia, Hao Wang, Bowen Ma, Shuyang Lin, Da Cai, Dongyue Chen
Considering the significant overlapping phenomenon in X-ray prohibited item images, we propose an Anti-Overlapping DETR (AO-DETR) based on one of the state-of-the-art general object detectors, DINO.
1 code implementation • 6 Mar 2024 • Wenfeng Feng, Chuzhan Hao, Yuewei Zhang, Yu Han, Hao Wang
These LoRA modules can be aligned with the expert design principles observed in Mixture-of-Experts (MoE).
no code implementations • 6 Mar 2024 • Cheng-Yen Yang, Hsiang-Wei Huang, Zhongyu Jiang, Hao Wang, Farron Wallace, Jenq-Neng Hwang
Dense object counting, or crowd counting, has come a long way thanks to recent developments in the vision community.
no code implementations • 6 Mar 2024 • Hao Wang, Sayed Pedram Haeri Boroujeni, Xiwen Chen, Ashish Bastola, Huayu Li, Abolfazl Razi
Thus, our proposed framework can generate a massive dataset of high-quality, ground-truth-paired images, which well addresses the need for annotated datasets in specific tasks.
no code implementations • 29 Feb 2024 • Ji Ma, Hongming Dai, Yao Mu, Pengying Wu, Hao Wang, Xiaowei Chi, Yang Fei, Shanghang Zhang, Chang Liu
Zero-Shot Object Navigation (ZSON) requires agents to autonomously locate and approach unseen objects in unfamiliar environments and has emerged as a particularly challenging task within the domain of Embodied AI.
no code implementations • 29 Feb 2024 • Jinhao Li, Changlong Wang, Yanru Zhang, Hao Wang
To bridge this gap, we develop a novel BESS joint bidding strategy that utilizes deep reinforcement learning (DRL) to bid in the spot and contingency frequency control ancillary services (FCAS) markets.
1 code implementation • 28 Feb 2024 • Lei Wang, Wanyu Xu, Zhiqiang Hu, Yihuai Lan, Shan Dong, Hao Wang, Roy Ka-Wei Lee, Ee-Peng Lim
This paper introduces a new in-context learning (ICL) mechanism called In-Image Learning (I$^2$L) that combines demonstration examples, visual cues, and chain-of-thought reasoning into an aggregated image to enhance the capabilities of Large Multimodal Models (e.g., GPT-4V) in multimodal reasoning tasks.
1 code implementation • 26 Feb 2024 • Hao Wang, Zeyu Gao, Chao Zhang, Zihan Sha, Mingyang Sun, Yuchen Zhou, Wenyu Zhu, Wenju Sun, Han Qiu, Xi Xiao
At the core, our approach attains superior transfer learning capabilities by effectively aligning binary code with its semantic explanations (in natural language), resulting in a model able to generate better embeddings for binary code.
1 code implementation • 26 Feb 2024 • Hao Wang, Shengda Luo, Guosheng Hu, JianGuo Zhang
In aid of this indicator, we present a novel Gradient-guided Modality Decoupling (GMD) method to decouple the dependency on dominating modalities.
no code implementations • 25 Feb 2024 • Hao Wang, Hao Li, Minlie Huang, Lei Sha
In addition, our approach can be generalized into a broader method for generating transferable adversarial suffixes that can successfully attack multiple LLMs, even black-box LLMs, such as ChatGPT and Gemini.
no code implementations • 25 Feb 2024 • Tianyu Chen, Haoyi Zhou, Ying Li, Hao Wang, Chonghan Gao, Shanghang Zhang, JianXin Li
Foundation models have revolutionized knowledge acquisition across domains, and our study introduces OmniArch, a paradigm-shifting approach designed for building foundation models in multi-physics scientific computing.
no code implementations • 22 Feb 2024 • Ziqi Yin, Hao Wang, Kaito Horio, Daisuke Kawahara, Satoshi Sekine
We investigate the impact of politeness levels in prompts on the performance of large language models (LLMs).
2 code implementations • 20 Feb 2024 • Qianqian Xie, Weiguang Han, Zhengyu Chen, Ruoyu Xiang, Xiao Zhang, Yueru He, Mengxi Xiao, Dong Li, Yongfu Dai, Duanyu Feng, Yijing Xu, Haoqiang Kang, Ziyan Kuang, Chenhan Yuan, Kailai Yang, Zheheng Luo, Tianlin Zhang, Zhiwei Liu, Guojun Xiong, Zhiyang Deng, Yuechen Jiang, Zhiyuan Yao, Haohang Li, Yangyang Yu, Gang Hu, Jiajia Huang, Xiao-Yang Liu, Alejandro Lopez-Lira, Benyou Wang, Yanzhao Lai, Hao Wang, Min Peng, Sophia Ananiadou, Jimin Huang
This along with the rapid development of LLMs, highlights the urgent need for a systematic financial evaluation benchmark for LLMs.
no code implementations • 15 Feb 2024 • Diederick Vermetten, Carola Doerr, Hao Wang, Anna V. Kononova, Thomas Bäck
The number of proposed iterative optimization heuristics is growing steadily, and with this growth, there have been many points of discussion within the wider community.
no code implementations • 10 Feb 2024 • Behzad Akbari, Mingfeng Yuan, Hao Wang, Haibin Zhu, Jinjun Shan
In the field of Multi-Agent Systems (MAS), known for their openness, dynamism, and cooperative nature, the ability to trust the resources and services of other agents is crucial.
1 code implementation • 8 Feb 2024 • Hengguan Huang, Songtao Wang, Hongfu Liu, Hao Wang, Ye Wang
To construct the ChatCoach system, we developed a dataset and integrated Large Language Models such as ChatGPT and Llama2, aiming to assess their effectiveness in communicative medical coaching tasks.
no code implementations • 7 Feb 2024 • Mengqi Chen, Bin Guo, Hao Wang, Haoyu Li, Qian Zhao, Jingqi Liu, Yasan Ding, Yan Pan, Zhiwen Yu
To depict the research trends of CogAgent, in this paper, we first present several fundamental cognitive psychology theories and give the formalized definition of three typical cognitive strategies, including the persuasion strategy, the topic path planning strategy, and the argument structure prediction strategy.
no code implementations • 6 Feb 2024 • Hao Wang, Xin Zhang, JinZhe Jiang, YaQian Zhao, Chen Li
However, it has been shown that multimodal NLP models are vulnerable to adversarial attacks, where the outputs of a model can be dramatically changed by a perturbation to the input.
no code implementations • 6 Feb 2024 • Hao Wang, Lei Sha
The proposed approach aims to enhance the fluency of generated text by guiding the generation process with PPCs.
no code implementations • 5 Feb 2024 • Xu Huang, Weiwen Liu, Xiaolong Chen, Xingmei Wang, Hao Wang, Defu Lian, Yasheng Wang, Ruiming Tang, Enhong Chen
As Large Language Models (LLMs) have shown significant intelligence, the progress to leverage LLMs as planning modules of autonomous agents has attracted more attention.
1 code implementation • 4 Feb 2024 • Hao Wang, Licheng Pan, Zhichao Chen, Degui Yang, Sen Zhang, Yifei Yang, Xinggao Liu, Haoxuan Li, DaCheng Tao
Time series modeling is uniquely challenged by the presence of autocorrelation in both historical and label sequences.
1 code implementation • 3 Feb 2024 • Guang-Yuan Hao, Hengguan Huang, Haotian Wang, Jie Gao, Hao Wang
In this paper, we propose the first general method, dubbed composite active learning (CAL), for multi-domain AL. Our approach explicitly considers the domain-level and instance-level information in the problem; CAL first assigns domain-level budgets according to domain-level importance, which is estimated by optimizing an upper error bound that we develop; with the domain-level budgets, CAL then leverages a certain instance-level query strategy to select samples to label from each domain.
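The two-level procedure described above can be sketched as follows. The helper names and the importance weights are illustrative assumptions; the paper estimates domain importance by optimizing an upper error bound rather than taking it as given:

```python
def allocate_budgets(importance, total_budget):
    """Hypothetical sketch of CAL's domain-level step: split the labeling
    budget across domains in proportion to estimated domain importance."""
    z = sum(importance.values())
    return {d: round(total_budget * w / z) for d, w in importance.items()}

def select_instances(scores, budget):
    """Instance-level step: within one domain, pick the `budget` most
    uncertain samples (highest uncertainty score); returns their indices."""
    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
    return sorted(ranked[:budget])

# Toy run: two domains with assumed importance weights, then an
# uncertainty-based query inside one domain.
budgets = allocate_budgets({"photos": 3.0, "sketches": 1.0}, 100)
print(budgets)                                    # {'photos': 75, 'sketches': 25}
print(select_instances([0.1, 0.9, 0.4, 0.8], 2))  # [1, 3]
```

Any instance-level strategy (entropy, margin, coreset) could replace the uncertainty scores here; the sketch only fixes the two-stage structure.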
no code implementations • 2 Feb 2024 • Guang-Yuan Hao, Jiji Zhang, Biwei Huang, Hao Wang, Kun Zhang
Counterfactual reasoning is pivotal in human cognition and especially important for providing explanations and making decisions.
1 code implementation • 1 Feb 2024 • Zelong Li, Wenyue Hua, Hao Wang, He Zhu, Yongfeng Zhang
A stack-based LLM plan generation process is then conducted under the supervision of the automaton to ensure that the generated plan satisfies the constraints, making the planning process controllable.
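The automaton supervision described above can be sketched as a finite automaton that admits only constraint-satisfying plans; any step the automaton rejects would be discarded during generation. The transition table and action names below are toy assumptions, not the paper's automaton construction:

```python
def make_automaton(transitions, start, accepting):
    """Hypothetical sketch of automaton-supervised plan checking: the
    DFA only admits plan steps allowed from the current state, so any
    plan it accepts satisfies the ordering constraints by construction."""
    def run(plan):
        state = start
        for action in plan:
            if (state, action) not in transitions:
                return False  # step violates a constraint; reject the plan
            state = transitions[(state, action)]
        return state in accepting
    return run

# Toy constraint: "search" must precede "book", and the plan ends at "book".
dfa = make_automaton(
    {("s0", "search"): "s1", ("s1", "search"): "s1", ("s1", "book"): "s2"},
    start="s0", accepting={"s2"},
)
print(dfa(["search", "book"]))   # True: constraints satisfied
print(dfa(["book"]))             # False: booking before searching
```

During generation, the same check can run per step (rejecting actions with no outgoing transition) rather than on the finished plan, which is closer to the supervised decoding the abstract describes.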
no code implementations • 29 Jan 2024 • Jinhao Li, Ruichang Zhang, Hao Wang, Zhi Liu, Hongyang Lai, Yanru Zhang
Renewable energy resources (RERs) have been increasingly integrated into distribution networks (DNs) for decarbonization.
no code implementations • 29 Jan 2024 • Xiangzhao Qin, Sha Hu, Jiankun Zhang, Jing Qian, Hao Wang
Deep learning (DL) based channel estimation (CE) and multiple-input multiple-output detection (MIMODet), as two separate research topics, have provided convincing evidence of the effectiveness and robustness of artificial intelligence (AI) for receiver design.
1 code implementation • 29 Jan 2024 • Hao Wang, Tao Xiang, Shangwei Guo, Jialing He, Hangcheng Liu, Tianwei Zhang
Adopting untrusted PTMs exposes models to backdoor attacks, where the adversary can compromise the downstream models by injecting backdoors into the PTM.
no code implementations • 29 Jan 2024 • Jinhao Li, Changlong Wang, Hao Wang
This paper studies the synergy of solar-battery energy storage system (BESS) and develops a viable strategy for the BESS to unlock its economic potential by serving as a backup to reduce solar curtailments while also participating in the electricity market.
no code implementations • 27 Jan 2024 • Yuxin Liang, Zhuoyang Song, Hao Wang, Jiaxing Zhang
We evaluate the ability of Large Language Models (LLMs) to discern and express their internal knowledge state, a key factor in countering factual hallucination and ensuring reliable application of LLMs.
no code implementations • 26 Jan 2024 • Ashish Bastola, Julian Brinkley, Hao Wang, Abolfazl Razi
This paper presents a comprehensive literature review of the current state of in-vehicle human-computer interaction (HCI) in the context of self-driving vehicles, with a specific focus on inclusion and accessibility.
no code implementations • 25 Jan 2024 • Chaofan Pan, Xin Yang, Hao Wang, Wei Wei, Tianrui Li
Despite the progress in continual reinforcement learning (CRL), existing methods often suffer from insufficient knowledge transfer, particularly when the tasks are diverse.
1 code implementation • 25 Jan 2024 • Xinyue Xu, Yi Qin, Lu Mi, Hao Wang, Xiaomeng Li
Our ECBMs address both limitations of existing CBMs, providing higher accuracy and richer concept interpretations.
no code implementations • 24 Jan 2024 • Yangsen Chen, Hao Wang
The accurate 3D reconstruction of deformable soft body tissues from endoscopic videos is a pivotal challenge in medical applications such as VR surgery and medical image analysis.
no code implementations • 24 Jan 2024 • Yunfan Zhang, Hong Huang, Zhiwei Xiong, Zhiqi Shen, Guosheng Lin, Hao Wang, Nicholas Vun
The core strength of our pipeline lies in its ability to generate 3D scenes that are not only visually impressive but also exhibit features like photorealism, multi-view consistency, and diversity.
1 code implementation • 23 Jan 2024 • Chengzhi Mao, Carl Vondrick, Hao Wang, Junfeng Yang
We find that large language models (LLMs) are more likely to modify human-written text than AI-generated text when tasked with rewriting.
no code implementations • 18 Jan 2024 • Hao Wang
Fairness has been a popular research topic in recent years.
no code implementations • 18 Jan 2024 • Hao Wang, Shuhei Kurita, Shuichiro Shimizu, Daisuke Kawahara
Audio-visual speech recognition (AVSR) is a multimodal extension of automatic speech recognition (ASR), using video as a complement to audio.
no code implementations • 16 Jan 2024 • Gengyue Han, Xiaohan Liu, Xianyue Peng, Hao Wang, Yu Han
This study introduces CycLight, a novel cycle-level deep reinforcement learning (RL) approach for network-level adaptive traffic signal control (NATSC) systems.
no code implementations • 11 Jan 2024 • Xiwen Chen, Hao Wang, Zhao Zhang, Zhenmin Li, Huayu Li, Tong Ye, Abolfazl Razi
Untrained physics-based deep learning (DL) methods for digital holography have gained significant attention due to their benefits, such as not requiring an annotated training dataset and providing interpretability, since they utilize the governing laws of hologram formation.
no code implementations • 29 Dec 2023 • Hao Wang, Bo Tang, Chi Harold Liu, Shangqin Mao, Jiahong Zhou, Zipeng Dai, Yaqi Sun, Qianlong Xie, Xingxing Wang, Dong Wang
Online display advertising platforms serve numerous advertisers by providing real-time bidding (RTB) for billions of ad requests every day.
no code implementations • 27 Dec 2023 • Xin Yang, Hao Yu, Xin Gao, Hao Wang, Junbo Zhang, Tianrui Li
The key objective of FCL is to fuse heterogeneous knowledge from different clients and retain knowledge of previous tasks while learning on new ones.
no code implementations • 25 Dec 2023 • Hao Wang, Huabing Zhou, Yanduo Zhang, Tao Lu, Jiayi Ma
Scene text spotting is essential in various computer vision applications, enabling extracting and interpreting textual information from images.
no code implementations • 24 Dec 2023 • Ming Yan, Ruihao Li, Hao Zhang, Hao Wang, Zhilan Yang, Ji Yan
Language agents have shown impressive problem-solving skills within defined settings and brief timelines.
no code implementations • 24 Dec 2023 • Rui Zhou, Haiyang Zhang, Hao Wang, Jin He, Qijun Huang, Sheng Chang
By integrating the local voltage-controlled magnetic anisotropy (VCMA) effect, Dzyaloshinskii-Moriya interaction (DMI) effect, and spin-orbit torque (SOT) effect, we propose a novel device structure for field-free magnetic tunnel junction (MTJ).
no code implementations • 22 Dec 2023 • Yujie Li, Xin Yang, Hao Wang, Xiangkun Wang, Tianrui Li
This paper studies the problem of continual learning in an open-world scenario, referred to as Open-world Continual Learning (OwCL).
1 code implementation • 22 Dec 2023 • Honghao Fu, Zhiqi Shen, Jing Jih Chin, Hao Wang
This leads to substantial limitations in existing work on visual stimuli reconstruction from EEG, such as difficulty in aligning EEG embeddings with fine-grained semantic information and a heavy reliance on additional large self-collected datasets for training.
2 code implementations • 20 Dec 2023 • Weibo Gao, Qi Liu, Hao Wang, Linan Yue, Haoyang Bi, Yin Gu, Fangzhou Yao, Zheng Zhang, Xin Li, Yuanjing He
Consequently, we refine the cognitive states of cold-start students as diagnostic outcomes via virtual data, aligning with the diagnosis-oriented goal.
no code implementations • 19 Dec 2023 • Peishen Yan, Hao Wang, Tao Song, Yang Hua, Ruhui Ma, Ningxin Hu, Mohammad R. Haghighat, Haibing Guan
Specifically, the FL server applies parameter-level masks to model updates uploaded by clients and trains the masks over a small clean dataset (i.e., root dataset) to learn the subtle difference between benign and malicious model updates in a high-dimension space.
no code implementations • 16 Dec 2023 • Hao Wang
In this paper, we rely on the theory developed by Wang from 2021 to 2023 to demonstrate that rating data on online cultural rating platforms often evolve into Poisson/Pareto behavior and that individual voting preferences are predictable without any data input, so the Borda Count method (or Range Voting method) has an intrinsic fallacy and should not be used as a voting method.
no code implementations • 16 Dec 2023 • Lyudong Jin, Ming Tang, Meng Zhang, Hao Wang
The uncertain edge load dynamics, the nature of the fractional objective, and hybrid continuous-discrete action space (due to the joint optimization) make this problem challenging and existing approaches not directly applicable.
1 code implementation • 12 Dec 2023 • Chenghao Huang, Siyang Li, Ruohong Liu, Hao Wang, Yize Chen
Foundation models, such as Large Language Models (LLMs), can respond to a wide range of format-free queries without any task-specific data collection or model training, creating various research and application opportunities for the modeling and operation of large-scale power systems.
1 code implementation • 8 Dec 2023 • Jianqing Zhang, Yang Liu, Yang Hua, Hao Wang, Tao Song, Zhengui Xue, Ruhui Ma, Jian Cao
Amid the ongoing advancements in Federated Learning (FL), a machine learning paradigm that allows collaborative learning with data privacy protection, personalized FL (pFL) has gained significant prominence as a research direction within the FL domain.
1 code implementation • 6 Dec 2023 • Tianshu Wang, Hongyu Lin, Xianpei Han, Le Sun, Xiaoyang Chen, Hao Wang, Zhenyu Zeng
Text-to-SQL simplifies database interactions by enabling non-experts to convert their natural language (NL) questions into Structured Query Language (SQL) queries.
no code implementations • 4 Dec 2023 • Cameron Martin, Fucai Ke, Hao Wang
Our experimental results demonstrate high-accuracy EV charging detection at the feeder level, achieving an F-score of 98.88% in offline detection and 93.01% in online detection.
no code implementations • 4 Dec 2023 • Jiarong Fan, Hao Wang
In response to the growing uptake of distributed energy resources (DERs), community batteries have emerged as a promising solution to support renewable energy integration, reduce peak load, and enhance grid reliability.
no code implementations • 2 Dec 2023 • Qipan Xu, Youlong Ding, Xinxi Zhang, Jie Gao, Hao Wang
Data privacy protection is garnering increased attention among researchers.
no code implementations • 27 Nov 2023 • Zherui Chen, Yuchen Lu, Hao Wang, Yizhou Liu, Tongyang Li
Finally, based on the observations when comparing QLD with the classical Fokker-Planck-Smoluchowski equation, we propose a time-dependent QLD by making temperature and $\hbar$ time-dependent parameters, which can be theoretically proven to converge better than the time-independent case and also outperforms a series of state-of-the-art quantum and classical optimization algorithms in many non-convex landscapes.
2 code implementations • NeurIPS 2023 • Jianqing Zhang, Yang Hua, Jian Cao, Hao Wang, Tao Song, Zhengui Xue, Ruhui Ma, Haibing Guan
Recently, federated learning (FL) has become popular for its privacy-preserving and collaborative learning abilities.
1 code implementation • 21 Nov 2023 • Zeyu Gao, Hao Wang, Yuchen Zhou, Wenyu Zhu, Chao Zhang
Given the significant successes of large language models (LLMs) in various tasks, there is growing anticipation of their efficacy in vulnerability detection.
no code implementations • 19 Nov 2023 • Jiajie Fan, Laure Vuaille, Thomas Bäck, Hao Wang
We delve into the impact of diffusion models' noise schedules on the plausibility of the outcome: there exists a range of noise levels at which the model's performance determines the plausibility of the result.
no code implementations • 15 Nov 2023 • Hao Wang
As a preprocessing step for recommender system algorithms, histogram equalization could enhance both the accuracy and fairness metrics of those algorithms.
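The abstract gives no implementation details, so the following is only a generic sketch of what histogram equalization of a user-item rating matrix might look like (the function name, the 1-5 rating scale, and the treatment of 0 as "unobserved" are all assumptions, not the paper's method): each observed rating is passed through its empirical CDF, flattening the rating histogram before the recommender is trained.

```python
import numpy as np

def equalize_ratings(ratings: np.ndarray, levels: int = 5) -> np.ndarray:
    """Remap observed ratings onto 1..levels so their histogram is
    approximately flat, via the classic CDF-based equalization."""
    flat = ratings[ratings > 0]              # assume 0 means "unobserved"
    # rank each observed rating (stable sort breaks ties by position)
    ranks = np.argsort(np.argsort(flat))
    cdf = (ranks + 1) / flat.size            # empirical CDF in (0, 1]
    equalized = np.ceil(cdf * levels)        # rescale CDF onto 1..levels
    out = ratings.astype(float).copy()
    out[ratings > 0] = equalized             # same row-major order as `flat`
    return out

# a tiny rating matrix dominated by 4s and 5s
R = np.array([[5, 5, 4, 0],
              [5, 4, 0, 5],
              [4, 5, 5, 4]])
print(equalize_ratings(R))
```

After equalization the skewed 4/5 ratings are spread across the full 1-5 scale while unobserved entries stay zero, which is the kind of rebalancing that can affect both accuracy and fairness metrics downstream.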
no code implementations • 9 Nov 2023 • Rui Xu, Wenkang Qin, Peixiang Huang, Hao Wang, Lin Luo
Deep Neural Networks (DNNs) are expected to provide explanation for users to understand their black-box predictions.
no code implementations • 6 Nov 2023 • Ruyi Gan, Ziwei Wu, Renliang Sun, Junyu Lu, XiaoJun Wu, Dixiang Zhang, Kunhao Pan, Junqing He, Yuanhe Tian, Ping Yang, Qi Yang, Hao Wang, Jiaxing Zhang, Yan Song
Although many such issues are addressed along the line of research on LLMs, an important yet practical limitation is that many studies overly pursue enlarging model sizes without comprehensively analyzing and optimizing the use of pre-training data in their learning process, as well as appropriate organization and leveraging of such data in training LLMs under cost-effective settings.
1 code implementation • 6 Nov 2023 • Mingjia Yin, Hao Wang, Xiang Xu, Likang Wu, Sirui Zhao, Wei Guo, Yong Liu, Ruiming Tang, Defu Lian, Enhong Chen
To this end, we propose a graph-driven framework, named Adaptive and Personalized Graph Learning for Sequential Recommendation (APGL4SR), that incorporates adaptive and personalized global collaborative information into sequential recommendation systems.
1 code implementation • 2 Nov 2023 • Yining Ye, Xin Cong, Shizuo Tian, Jiannan Cao, Hao Wang, Yujia Qin, Yaxi Lu, Heyang Yu, Huadong Wang, Yankai Lin, Zhiyuan Liu, Maosong Sun
Empirical experiments are conducted to detail the construction and execution procedure of its workflow, showcasing the feasibility of APA and unveiling the possibility of a new paradigm of automation driven by agents.
1 code implementation • 30 Oct 2023 • Ziqian Lin, Hao Ding, Nghia Trong Hoang, Branislav Kveton, Anoop Deoras, Hao Wang
In particular, we propose to develop a generic recommender that captures universal interaction patterns by training on generic user-item interaction data extracted from different domains, which can then be fast adapted to improve few-shot learning performance in unseen new domains (with limited data).
no code implementations • 28 Oct 2023 • Hao Wang
Human culture has evolved for thousands of years and thrived in the era of the Internet.
no code implementations • 28 Oct 2023 • Hao Wang, Zhi-Qi Cheng, Jingdong Sun, Xin Yang, Xiao Wu, Hongyang Chen, Yan Yang
Multi-view or even multi-modal data is appealing yet challenging for real-world applications.
1 code implementation • 28 Oct 2023 • Hao Wang, Euijoon Ahn, Lei Bi, Jinman Kim
The clinical diagnosis of skin lesions involves the analysis of dermoscopic and clinical modalities.
no code implementations • 24 Oct 2023 • Yuxiang Wang, Xiao Yan, Chuang Hu, Fangcheng Fu, Wentao Zhang, Hao Wang, Shuo Shang, Jiawei Jiang
For graph self-supervised learning (GSSL), masked autoencoder (MAE) follows the generative paradigm and learns to reconstruct masked graph edges or node features.
1 code implementation • 24 Oct 2023 • Shukang Yin, Chaoyou Fu, Sirui Zhao, Tong Xu, Hao Wang, Dianbo Sui, Yunhang Shen, Ke Li, Xing Sun, Enhong Chen
Hallucination is a big shadow hanging over the rapidly evolving Multimodal Large Language Models (MLLMs), referring to the phenomenon that the generated text is inconsistent with the image content.
no code implementations • 23 Oct 2023 • Hao Wang, Xiahua Chen, Rui Wang, Chenhui Chu
Extracting meaningful entities belonging to predefined categories from Visually-rich Form-like Documents (VFDs) is a challenging task.
1 code implementation • 23 Oct 2023 • Yihuai Lan, Zhiqiang Hu, Lei Wang, Yang Wang, Deheng Ye, Peilin Zhao, Ee-Peng Lim, Hui Xiong, Hao Wang
To achieve this goal, we adopt Avalon, a representative communication game, as the environment and use system prompts to guide LLM agents to play the game.
1 code implementation • IEEE Journal of Biomedical and Health Informatics 2023 • Huaicheng Zhang, Wenhan Liu, Sheng Chang, Hao Wang, Jin He, Qijun Huang
When applying DL models, ECG signals are usually treated as synchronized signals arranged in Euclidean space, which is the abstraction and generalization of real space.
1 code implementation • 23 Oct 2023 • Hao Wang, Qingxuan Wang, Yue Li, Changqing Wang, Chenhui Chu, Rui Wang
The use of visually-rich documents (VRDs) in various fields has created a demand for Document AI models that can read and comprehend documents like humans, which requires the overcoming of technical, linguistic, and cognitive barriers.
no code implementations • 20 Oct 2023 • Xu Huang, Jianxun Lian, Hao Wang, Defu Lian, Xing Xie
Recommendation systems effectively guide users in locating their desired information within extensive content repositories.
no code implementations • 18 Oct 2023 • Baofu Fang, Caiming Zheng, Hao Wang
To eliminate this assumption and achieve agent modeling in unknown scenarios, a Fact-based Agent Modeling (FAM) method is proposed, in which a fact-based belief inference (FBI) network models other agents in a partially observable environment based only on its local information.
no code implementations • 17 Oct 2023 • Xianyue Peng, Hang Gao, Hao Wang, H. Michael Zhang
Over the years, reinforcement learning has emerged as a popular approach to develop signal control and vehicle platooning strategies either independently or in a hierarchical way.
no code implementations • 16 Oct 2023 • Xianyue Peng, Hang Gao, Gengyue Han, Hao Wang, Michael Zhang
In this paper, we propose a joint optimization approach for traffic signal control and vehicle routing in signalized road networks.
no code implementations • 15 Oct 2023 • Haoyuan Sun, Navid Azizan, Akash Srivastava, Hao Wang
When machine learning models are trained on synthetic data and then deployed on real data, there is often a performance drop due to the distribution shift between synthetic and real data.
no code implementations • 14 Oct 2023 • Hao Wang, Qiang Song, Ruofeng Yin, Rui Ma, Yizhou Yu, Yi Chang
In this paper, we propose B-Spine, a novel deep learning pipeline to learn B-spline curve representation of the spine and estimate the Cobb angles for spinal curvature estimation from low-quality X-ray images.
1 code implementation • 9 Oct 2023 • Yongfu Dai, Duanyu Feng, Jimin Huang, Haochen Jia, Qianqian Xie, Yifang Zhang, Weiguang Han, Wei Tian, Hao Wang
Through automated evaluation of current general and legal domain LLMs on our benchmark, we indicate that these LLMs may not align with the logic of legal practice.
no code implementations • 9 Oct 2023 • Yong Lin, Fan Zhou, Lu Tan, Lintao Ma, Jiameng Liu, Yansu He, Yuan Yuan, Yu Liu, James Zhang, Yujiu Yang, Hao Wang
To address this challenge, we then propose Continuous Invariance Learning (CIL), which extracts invariant features across continuously indexed domains.
1 code implementation • 1 Oct 2023 • Duanyu Feng, Yongfu Dai, Jimin Huang, Yifang Zhang, Qianqian Xie, Weiguang Han, Zhengyu Chen, Alejandro Lopez-Lira, Hao Wang
We then propose the first Credit and Risk Assessment Large Language Model (CALM) by instruction tuning, tailored to the nuanced demands of various financial risk assessment tasks.
no code implementations • 29 Sep 2023 • Mengke Zhang, Tianxing He, Tianle Wang, Lu Mi, FatemehSadat Mireshghallah, Binyi Chen, Hao Wang, Yulia Tsvetkov
In the current user-server interaction paradigm of prompted generation with large language models (LLM) on cloud, the server fully controls the generation process, which leaves zero options for users who want to keep the generated text to themselves.
no code implementations • 24 Sep 2023 • Hao Wang, Omkar Salunkhe, Walter Quadrini, Dan Lämkull, Fredrik Ore, Mélanie Despeisse, Luca Fumagalli, Johan Stahre, Björn Johansson
This article provides a systematic literature review of computer vision applications in robotized wire harness assembly.
no code implementations • 24 Sep 2023 • Hao Wang, Omkar Salunkhe, Walter Quadrini, Dan Lämkull, Fredrik Ore, Björn Johansson, Johan Stahre
This paradigm shift places more demand on automotive wire harnesses from the safety perspective and stresses the greater importance of high-quality wire harness assembly in vehicles.
no code implementations • 24 Sep 2023 • Hao Wang, Björn Johansson
The mating of connectors is essential in the final assembly of automotive wire harnesses due to the importance of connectors for wire harness connection and signal transmission.
no code implementations • 23 Sep 2023 • Zhichao Chen, Leilei Ding, Zhixuan Chu, Yucheng Qi, Jianmin Huang, Hao Wang
Time-Series Forecasting based on Cumulative Data (TSFCD) is a crucial problem in decision-making across various industrial scenarios.
1 code implementation • 22 Sep 2023 • Jia Qi Yip, Shengkui Zhao, Yukun Ma, Chongjia Ni, Chong Zhang, Hao Wang, Trung Hieu Nguyen, Kun Zhou, Dianwen Ng, Eng Siong Chng, Bin Ma
Dual-path is a popular architecture for speech separation models (e.g., Sepformer), which splits long sequences into overlapping chunks for its intra- and inter-blocks that separately model intra-chunk local features and inter-chunk global relationships.
Ranked #5 on Speech Separation on WSJ0-2mix
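The overlapping-chunk idea behind dual-path models can be illustrated independently of the paper's architecture. The sketch below is not the authors' code; the chunk length and 50% overlap are arbitrary choices. It splits a 1-D sequence into overlapping chunks (what the intra-blocks would consume) and reconstructs the sequence by overlap-add averaging.

```python
import numpy as np

def split_chunks(x: np.ndarray, chunk: int, hop: int) -> np.ndarray:
    """Split a 1-D sequence into overlapping chunks of length `chunk`,
    advancing by `hop` samples, zero-padding the tail if needed."""
    n = len(x)
    n_chunks = int(np.ceil(max(n - chunk, 0) / hop)) + 1
    padded = np.zeros((n_chunks - 1) * hop + chunk)
    padded[:n] = x
    return np.stack([padded[i * hop: i * hop + chunk] for i in range(n_chunks)])

def merge_chunks(chunks: np.ndarray, hop: int, length: int) -> np.ndarray:
    """Overlap-add the chunks back, averaging samples where chunks overlap."""
    chunk = chunks.shape[1]
    total = np.zeros((chunks.shape[0] - 1) * hop + chunk)
    count = np.zeros_like(total)
    for i, c in enumerate(chunks):
        total[i * hop: i * hop + chunk] += c
        count[i * hop: i * hop + chunk] += 1
    return (total / count)[:length]

x = np.arange(10, dtype=float)
c = split_chunks(x, chunk=4, hop=2)      # hop = chunk/2 gives 50% overlap
y = merge_chunks(c, hop=2, length=len(x))
```

In a real dual-path model, an intra-block would process each row of `c` (local features) and an inter-block would process across rows (global context) before the merge.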
no code implementations • 18 Sep 2023 • Hao Wang, Libo Zhang, Heng Fan, Tiejian Luo
Meanwhile, we propose a cross-granularity attention module to align the interactions modeled by the three branches of transformers; the three branches can then support each other to exploit the most discriminative semantic information at different granularities for accurate caption prediction.
no code implementations • 5 Sep 2023 • Yunqi Wang, Hao Wang, Markus Wagner, Ariel Liebman
The results show significant improvements in voltage regulation and DER utilization, demonstrating the potential of C-BESS in enabling more reliable DN operation.
no code implementations • 27 Aug 2023 • Jinhao Li, Yu Hui Yuan, Qiushi Cui, Hao Wang
Therefore, we are motivated to develop a comprehensive multi-objective framework for optimal CS placement in a traffic network overlaid by a distribution network, considering multiple stakeholders' interested factors, such as traffic flow, PEV charging time cost, PEV travel distance, and the reliability of the distribution network.
no code implementations • 27 Aug 2023 • Jiarong Fan, Hao Wang, Ariel Liebman
This paper addresses the EV charging coordination by considering vehicle-to-vehicle (V2V) energy exchange as the flexibility to harness in EV charging stations.
no code implementations • 27 Aug 2023 • Xinyu Liang, Hao Wang
Residential occupancy detection has become an enabling technology in today's urbanized world for various smart home applications, such as building automation, energy management, and improved security and comfort.
1 code implementation • 24 Aug 2023 • Wenyu Zhu, Hao Wang, Yuchen Zhou, JiaMing Wang, Zihan Sha, Zeyu Gao, Chao Zhang
By feeding explicit knowledge as additional inputs to the Transformer, and fusing implicit knowledge with a novel pre-training task, kTrans provides a new perspective to incorporating domain knowledge into a Transformer framework.
1 code implementation • 24 Aug 2023 • Qiuyu Zhu, Hao Wang, Xuewen Zu, Chengfei Liu
Considering that a CNN has many layers, through experimental comparison and analysis, MFD Loss acts on multiple front layers of the CNN, constrains the output features of each layer and each channel, and performs supervised training jointly with the classification loss function.
3 code implementations • ICCV 2023 • Jianqing Zhang, Yang Hua, Hao Wang, Tao Song, Zhengui Xue, Ruhui Ma, Jian Cao, Haibing Guan
Federated Learning (FL) is popular for its privacy-preserving and collaborative learning capabilities.
1 code implementation • 18 Aug 2023 • Cheng Li, Ziang Leng, Chenxi Yan, Junyi Shen, Hao Wang, Weishi MI, Yaying Fei, Xiaoyang Feng, Song Yan, HaoSheng Wang, Linkang Zhan, Yaokai Jia, Pingyu Wu, Haozhen Sun
Role-playing chatbots built on large language models have drawn interest, but better techniques are needed to enable mimicking specific fictional characters.
no code implementations • 15 Aug 2023 • Likang Wu, Junji Jiang, Hongke Zhao, Hao Wang, Defu Lian, Mengdi Zhang, Enhong Chen
However, the multi-faceted semantic orientation in feature-semantic alignment has been neglected by previous work, i.e., the content of a node usually covers diverse topics that are relevant to the semantics of multiple labels.
no code implementations • 9 Aug 2023 • Liping Wang, Jiawei Li, Lifan Zhao, Zhizhuo Kou, Xiaohan Wang, Xinyi Zhu, Hao Wang, Yanyan Shen, Lei Chen
Predicting stock prices presents a challenging research problem due to the inherent volatility and non-linear nature of the stock market.
no code implementations • 8 Aug 2023 • Haomin Zhuang, Mingxian Yu, Hao Wang, Yang Hua, Jian Li, Xu Yuan
Federated learning (FL) has been widely deployed to enable machine learning training on sensitive data across distributed devices.
no code implementations • 5 Aug 2023 • Hao Wang, Jianxun Lian, Mingqi Wu, Haoxuan Li, Jiajun Fan, Wanyue Xu, Chaozhuo Li, Xing Xie
Sequential user modeling, a critical task in personalized recommender systems, focuses on predicting the next item a user would prefer, requiring a deep understanding of user behavior sequences.
no code implementations • ICCV 2023 • Jinjing Zhu, Yunhao Luo, Xu Zheng, Hao Wang, Lin Wang
In this paper, we strive to answer the question "how to collaboratively learn convolutional neural network (CNN)-based and vision transformer (ViT)-based models by selecting and exchanging the reliable knowledge between them for semantic segmentation?"
no code implementations • 24 Jul 2023 • Lei Cai, Hao Wang, Congling Zhou, Yongqiang Wang, Boyu Liu
To address the problem that feature information of pole-like obstacles is easily lost in complex environments, resulting in low detection accuracy and poor real-time performance, this paper proposes a multi-scale hybrid attention mechanism detection algorithm.
1 code implementation • NeurIPS 2023 • Marcel Kollovieh, Abdul Fatir Ansari, Michael Bohlke-Schneider, Jasper Zschiegner, Hao Wang, Yuyang Wang
Prior works on time series diffusion models have primarily focused on developing conditional models tailored to specific forecasting or imputation tasks.
no code implementations • 19 Jul 2023 • Xiaohong Liu, Xiongkuo Min, Wei Sun, Yulun Zhang, Kai Zhang, Radu Timofte, Guangtao Zhai, Yixuan Gao, Yuqin Cao, Tengchuan Kou, Yunlong Dong, Ziheng Jia, Yilin Li, Wei Wu, Shuming Hu, Sibin Deng, Pengxiang Xiao, Ying Chen, Kai Li, Kai Zhao, Kun Yuan, Ming Sun, Heng Cong, Hao Wang, Lingzhi Fu, Yusheng Zhang, Rongyu Zhang, Hang Shi, Qihang Xu, Longan Xiao, Zhiliang Ma, Mirko Agarla, Luigi Celona, Claudio Rota, Raimondo Schettini, Zhiwei Huang, Yanan Li, Xiaotao Wang, Lei Lei, Hongye Liu, Wei Hong, Ironhead Chuang, Allen Lin, Drake Guan, Iris Chen, Kae Lou, Willy Huang, Yachun Tasi, Yvonne Kao, Haotian Fan, Fangyuan Kong, Shiqi Zhou, Hao Liu, Yu Lai, Shanshan Chen, Wenqi Wang, HaoNing Wu, Chaofeng Chen, Chunzheng Zhu, Zekun Guo, Shiling Zhao, Haibing Yin, Hongkui Wang, Hanene Brachemi Meftah, Sid Ahmed Fezza, Wassim Hamidouche, Olivier Déforges, Tengfei Shi, Azadeh Mansouri, Hossein Motamednia, Amir Hossein Bakhtiari, Ahmad Mahmoudi Aznaveh
61 participating teams submitted their prediction results during the development phase, with a total of 3168 submissions.
no code implementations • 19 Jul 2023 • Jiajie Fan, Laure Vuaille, Hao Wang, Thomas Bäck
The potential of SA-ALAE is shown by generating engineering blueprints in a real automotive design task.
1 code implementation • NeurIPS 2023 • Zhihan Gao, Xingjian Shi, Boran Han, Hao Wang, Xiaoyong Jin, Danielle Maddix, Yi Zhu, Mu Li, Yuyang Wang
We conduct empirical studies on two datasets: N-body MNIST, a synthetic dataset with chaotic behavior, and SEVIR, a real-world precipitation nowcasting dataset.
1 code implementation • 12 Jul 2023 • Hao Wang, Jiatai Lin, Danyi Li, Jing Wang, Bingchao Zhao, Zhenwei Shi, Xipeng Pan, Huadeng Wang, Bingbing Li, Changhong Liang, Guoqiang Han, Li Liang, Chu Han, Zaiyi Liu
The feature diversity is preserved by an inter- and intra-class feature diversity-preserved module (InCDP).
1 code implementation • 12 Jul 2023 • Hao Wang, Jiabei Zhu, Yunzhe Li, QianWan Yang, Lei Tian
Unlike traditional neural fields frameworks, LCNF incorporates a local conditional representation that promotes model generalization, learning multiscale information, and efficient processing of large-scale imaging data.
no code implementations • 6 Jul 2023 • Hao Wang
In 2007, a density-based clustering algorithm named DENCLUE was invented to solve the clustering problem for nonlinear data structures.
no code implementations • 6 Jul 2023 • Hao Wang
Unlike other sectors such as fraud detection in the Fintech industry, recommender system is both deep and broad.
3 code implementations • 1 Jul 2023 • Jianqing Zhang, Yang Hua, Hao Wang, Tao Song, Zhengui Xue, Ruhui Ma, Haibing Guan
To address this, we propose the Federated Conditional Policy (FedCP) method, which generates a conditional policy for each sample to separate the global information and personalized information in its features and then processes them by a global head and a personalized head, respectively.
no code implementations • 30 Jun 2023 • Jianchao Ji, Zelong Li, Shuyuan Xu, Max Xiong, Juntao Tan, Yingqiang Ge, Hao Wang, Yongfeng Zhang
In this paper, we explore how the two reasoning abilities can be jointly modeled to enhance both accuracy and explainability of machine learning models.
1 code implementation • 28 Jun 2023 • Qingqiao Hu, Hao Wang, Jing Luo, Yunhao Luo, Zhiheng Zhangg, Jan S. Kirschke, Benedikt Wiestler, Bjoern Menze, JianGuo Zhang, Hongwei Bran Li
We introduce a novel Bayesian neural network-based architecture to estimate inter-rater uncertainty in medical image segmentation.
1 code implementation • 26 Jun 2023 • Zhiwei Xu, Hao Wang, Yanbin Liu, Stephen Gould
We explore two differentiable deep declarative layers, namely least squares on sphere (LESS) and implicit eigen decomposition (IED), for learning the principal matrix features (PMaF).
2 code implementations • 23 Jun 2023 • Hao Wang, Xiwen Chen, Natan Vital, Edward Duffy, Abolfazl Razi
It takes only 40 minutes in total for 5 epochs (about 7.75 minutes per epoch) to train a network with superior performance that covers diverse conditions, owing to its low-complexity architecture; it therefore easily adapts to changes in building setups, weather conditions, occupancy rates, etc.
1 code implementation • NeurIPS 2023 • Yansong Ning, Hao Liu, Hao Wang, Zhenyu Zeng, Hui Xiong
We hope the proposed UUKG fosters research on urban knowledge graphs and broad smart city applications.
2 code implementations • 13 Jun 2023 • Tianyi Liu, Zihao Xu, Hao He, Guang-Yuan Hao, Guang-He Lee, Hao Wang
Domain adaptation aims to mitigate distribution shifts among different domains.
no code implementations • 11 Jun 2023 • Hao Wang
In this paper, we propose a fair recommender system algorithm that uses Poisson process and Skellam distribution.
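The abstract does not specify how these distributions enter the algorithm, so the snippet below is only a reminder of the statistical building block, not the paper's method (the parameter values are illustrative). A Skellam variable is by definition the difference of two independent Poisson variables, so it can be sampled without any special routine:

```python
import numpy as np

rng = np.random.default_rng(0)

def skellam_sample(mu1: float, mu2: float, size: int) -> np.ndarray:
    """Draw Skellam(mu1, mu2) samples as the difference of two
    independent Poisson draws with rates mu1 and mu2."""
    return rng.poisson(mu1, size) - rng.poisson(mu2, size)

s = skellam_sample(3.0, 1.0, 100_000)
print(s.mean())   # ≈ mu1 - mu2 = 2.0
print(s.var())    # ≈ mu1 + mu2 = 4.0
```

Because a Skellam variable takes signed integer values, it is a natural model for differences of count data (e.g., rating increments), which is presumably why it pairs with a Poisson process in a recommendation setting.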
no code implementations • 9 Jun 2023 • Jingquan Yan, Hao Wang
Interpretable time series prediction is crucial for safety-critical areas such as healthcare and autonomous driving.
no code implementations • 3 Jun 2023 • Hao Wang, Ruihong He, XiaoYu Zhang, Zhaoying Bian, Dong Zeng, Jianhua Ma
In this work, we propose a novel peer-to-peer federated continual learning strategy to improve low-dose CT imaging performance from multiple institutions.
1 code implementation • 31 May 2023 • Young-Jin Park, Hao Wang, Shervin Ardeshir, Navid Azizan
Quantifying the reliability of these representations is crucial, as many downstream models rely on them as input for their own tasks.
1 code implementation • 31 May 2023 • Likang Wu, Zhi Zheng, Zhaopeng Qiu, Hao Wang, Hongchao Gu, Tingjia Shen, Chuan Qin, Chen Zhu, HengShu Zhu, Qi Liu, Hui Xiong, Enhong Chen
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP) and have recently gained significant attention in the domain of Recommendation Systems (RS).
no code implementations • 28 May 2023 • Youlong Ding, Xueyang Wu, Hao Wang, Weike Pan
The Transformer has emerged as a versatile and effective architecture with broad applications.
no code implementations • 25 May 2023 • Yuxin Huang, Hao Wang, Zhaoran Liu, Licheng Pan, Haozhe Li, Xinggao Liu
Accurate estimation of multiple quality variables is critical for building industrial soft sensor models, which have long been confronted with data efficiency and negative transfer issues.
1 code implementation • 22 May 2023 • Hao Wang, Hirofumi Shimizu, Daisuke Kawahara
To solve this problem, we construct the first Classical-Chinese-to-Kanbun dataset in the world.
no code implementations • 19 May 2023 • Yongsheng Yu, Hao Wang, Tiejian Luo, Heng Fan, Libo Zhang
In this paper, we propose a novel, simple yet effective method for Multi-modal Guided Image Completion, dubbed MaGIC, which not only supports a wide range of single modalities as guidance (e.g., text, canny edge, sketch, segmentation, depth, and pose), but also adapts to arbitrarily customized combinations of these modalities (i.e., arbitrary multi-modality) for image completion.
no code implementations • 19 May 2023 • Shun Zhang, Haoran Sun, Runze Yu, Hongshenyuan Cui, Jian Ren, Feifei Gao, Shi Jin, Hongxiang Xie, Hao Wang
In particular, we adopt a self-developed broadband intelligent communication system 40MHz-Net (BICT-40N) terminal in order to fully acquire the channel information.
no code implementations • 9 May 2023 • Jiajun Fan, Yuzheng Zhuang, Yuecheng Liu, Jianye Hao, Bin Wang, Jiangcheng Zhu, Hao Wang, Shu-Tao Xia
The exploration problem is one of the main challenges in deep reinforcement learning (RL).
Ranked #1 on Atari Games on Atari-57
1 code implementation • 27 Apr 2023 • Jiutian Zhao, Liang Luo, Hao Wang
Comparative experimental results on both datasets show that SMC-2 outperforms Label Smoothing Regularization and Self-Distillation From The Last Mini-batch on all models, and outperforms the state-of-the-art Sharpness-Aware Minimization method on 83% of the models. Compatibility experiments with data augmentation show that using both SMC-2 and data augmentation improves the generalization ability of the model by between 0.28% and 1.80% compared to using data augmentation alone.
no code implementations • 8 Apr 2023 • Yangyang Guo, Hao Wang, Lei He, Witold Pedrycz, P. N. Suganthan, Yanjie Song
The RL-GP adopts the ensemble population strategies.
no code implementations • 5 Apr 2023 • Aditya Khele, Canchen Jiang, Hao Wang
We formulate a cost minimization problem for an EV charging station to optimize the V2V schedule together with vehicle-to-grid (V2G), grid-to-vehicle (G2V) charging, as well as the use of renewable energy.
no code implementations • 5 Apr 2023 • Canchen Jiang, Ariel Liebman, Hao Wang
The EV coordination can provide value to the grid and generate benefits for EVs.
no code implementations • 5 Apr 2023 • Jinhao Li, Changlong Wang, Hao Wang
However, the variable nature of wind generation can undermine system reliability and lead to wind curtailment, causing substantial economic losses to wind power producers.
no code implementations • 28 Mar 2023 • Yunfan Zhang, Hao Wang, Guosheng Lin, Vun Chan Hua Nicholas, Zhiqi Shen, Chunyan Miao
This paper investigates an open research task of reconstructing and generating 3D point clouds.
no code implementations • 25 Mar 2023 • Hao Wang
Understanding the evolution pattern and its underlying mechanism is the key to understand the structures of input data for recommender systems.
no code implementations • 25 Mar 2023 • Hao Wang
We continue the research in this direction in this paper, and visualize the inner structure of the parameter space of matrix factorization technologies.
1 code implementation • CVPR 2023 • Jiacheng Wei, Hao Wang, Jiashi Feng, Guosheng Lin, Kim-Hui Yap
We conduct extensive experiments to analyze each of our proposed components and show the efficacy of our framework in generating high-fidelity 3D textured and text-relevant shapes.
no code implementations • 22 Mar 2023 • Hao Wang, Chen Li, JinZhe Jiang, Xin Zhang, YaQian Zhao, Weifeng Gong
Recently, the robustness of deep learning models has received widespread attention, and various methods for improving model robustness have been proposed, including adversarial training, model architecture modification, design of loss functions, certified defenses, and so on.
2 code implementations • 20 Mar 2023 • Hao Wang, Euijoon Ahn, Jinman Kim
These SSL approaches, however, are not designed for handling multi-resolution WSIs, which limits their performance in learning discriminative image features.
1 code implementation • 16 Mar 2023 • Tong Wu, Hao Wang, Zhongshen Zeng, Wei Wang, Hai-Tao Zheng, Jiaxing Zhang
Recently, there has been a surge in the use of generated data to enhance the performance of downstream models, largely due to the advancements in pre-trained language models.
no code implementations • 11 Mar 2023 • Hao Wang
One important sub-field of recommender systems that has been stagnating is context-aware recommender systems.
no code implementations • 11 Mar 2023 • Hao Wang
Topology is the foundation for many industrial applications ranging from CAD to simulation analysis.
no code implementations • 8 Mar 2023 • Hao Wang
Collaborative filtering is the simplest but oldest machine learning algorithm in the field of recommender systems.
no code implementations • 2 Mar 2023 • Hao Wang
Recommender system exists everywhere in the business world.
no code implementations • 1 Mar 2023 • Yongqiang Han, Likang Wu, Hao Wang, Guifeng Wang, Mengdi Zhang, Zhi Li, Defu Lian, Enhong Chen
Sequential Recommendation is a widely studied paradigm for learning users' dynamic interests from historical interactions for predicting the next potential item.
no code implementations • 25 Feb 2023 • Eli Bacher-Chong, Mostafa Ali Ayubirad, Zeng Qiu, Hao Wang, Alireza Goshtasbi, Hamid R. Ossareh
Compared with a single-input single-output (SISO) air-flow control approach, the proposed MIMO control approach shows up to 7.36 percent lower hydrogen fuel consumption.
no code implementations • 21 Feb 2023 • Hao Wang, Zhiyu Wang, Yunlong Niu, Zhaoran Liu, Haozhe Li, Yilin Liao, Yuxin Huang, Xinggao Liu
An accurate and explainable automatic monitoring system is critical for the safety of high-efficiency energy conversion plants that operate under extreme working conditions.
no code implementations • 21 Feb 2023 • Licheng Pan, Hao Wang, Zhichao Chen, Yuxing Huang, Xinggao Liu
We further present a Task-aware Mixture-of-Experts framework for achieving the Pareto optimum (TMoE-P) in multi-variate soft sensor, which consists of a stacked OMoE module and a POR module.
no code implementations • 17 Feb 2023 • Diederick Vermetten, Hao Wang, Kevin Sim, Emma Hart
These features are then used to predict what algorithm to switch to.