no code implementations • Findings (NAACL) 2022 • Le Qi, Yu Zhang, Qingyu Yin, Guidong Zheng, Wen Junjie, Jinlong Li, Ting Liu
In this process, there are two kinds of critical information that are commonly employed: the representation information of original questions and the interactive information between pairs of questions.
no code implementations • ACL (dialdoc) 2021 • Jiapeng Li, Mingda Li, Longxuan Ma, Wei-Nan Zhang, Ting Liu
The task requires identifying the grounding knowledge in the form of a document span for the next dialogue turn.
no code implementations • EMNLP 2020 • Wei Song, Kai Zhang, Ruiji Fu, Lizhen Liu, Ting Liu, Miaomiao Cheng
This paper proposes a pre-training based automated Chinese essay scoring method.
no code implementations • EMNLP 2020 • Qingfu Zhu, Wei-Nan Zhang, Ting Liu, William Yang Wang
Open-domain dialogue generation suffers from the data insufficiency problem due to the vast size of potential responses.
1 code implementation • EMNLP 2021 • Jihao Shi, Xiao Ding, Li Du, Ting Liu, Bing Qin
Many open-domain question answering problems can be cast as a textual entailment task, where a question and candidate answers are concatenated to form hypotheses.
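The concatenation step described above can be sketched in a few lines; the function name and the example strings are illustrative, not taken from the paper.

```python
def make_hypotheses(question, candidates):
    """Cast a QA instance as entailment: pair the question with each
    candidate answer to form one hypothesis per candidate."""
    return [f"{question} {answer}" for answer in candidates]

# An entailment model would then score each hypothesis against the
# retrieved context (the premise); the top-scoring candidate wins.
hypotheses = make_hypotheses(
    "Where is the Eiffel Tower located?",
    ["Paris", "London", "Berlin"],
)
```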
1 code implementation • EMNLP 2020 • Jiaqi Guo, Qian Liu, Jian-Guang Lou, Zhenwen Li, Xueqing Liu, Tao Xie, Ting Liu
Thus, the impact of meaning representation on semantic parsing is less understood.
no code implementations • EMNLP 2020 • Wei Song, Ziyao Song, Ruiji Fu, Lizhen Liu, Miaomiao Cheng, Ting Liu
First, we propose structural sentence positional encodings to explicitly represent sentence positions.
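A minimal sketch of one way to realize such an encoding, under our own assumptions (the paper's exact formulation may differ): concatenate a sinusoidal code for a sentence's global index in the essay with one for its index inside its paragraph.

```python
import math

def sentence_positional_encoding(global_pos, local_pos, dim=8):
    """Concatenate sinusoidal codes for a sentence's global index in the
    document and its local index within its paragraph."""
    def sinusoid(pos, d):
        return [
            math.sin(pos / 10000 ** (2 * (i // 2) / d)) if i % 2 == 0
            else math.cos(pos / 10000 ** (2 * (i // 2) / d))
            for i in range(d)
        ]
    # Half the dimensions encode global position, half the local position.
    return sinusoid(global_pos, dim // 2) + sinusoid(local_pos, dim // 2)

enc = sentence_positional_encoding(global_pos=5, local_pos=2)
```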
1 code implementation • COLING 2022 • Xiao Ding, Bowen Chen, Li Du, Bing Qin, Ting Liu
To fill the gap, we propose CogBERT, a framework that can induce fine-grained cognitive features from cognitive data and incorporate cognitive features into BERT by adaptively adjusting the weight of cognitive features for different NLP tasks.
no code implementations • Findings (EMNLP) 2021 • MengNan Qi, Hao Liu, Yuzhuo Fu, Ting Liu
With the increasing abundance of meeting transcripts, meeting summarization has attracted growing attention from researchers.
1 code implementation • Findings (ACL) 2022 • Le Qi, Shangwen Lv, Hongyu Li, Jing Liu, Yu Zhang, Qiaoqiao She, Hua Wu, Haifeng Wang, Ting Liu
Open-domain question answering has been used in a wide range of applications, such as web search and enterprise search, which usually takes clean texts extracted from various formats of documents (e.g., web pages, PDFs, or Word documents) as the information source.
1 code implementation • Findings (EMNLP) 2021 • Jiaqi Guo, Jian-Guang Lou, Ting Liu, Dongmei Zhang
Using only 10% of utterance-denotation pairs, the parser achieves 84.2 denotation accuracy on WikiSQL, which is competitive with the previous state-of-the-art approaches using 100% labeled data.
no code implementations • 7 Jun 2024 • Chen Liang, Qiang Guo, Chongkai Yu, Chengjing Wu, Ting Liu, Luoqi Liu
MVC enforces the consistency between predictions of masked frames where random patches are withheld.
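A hedged sketch of a masked-consistency objective in this spirit; the masking scheme and the squared-error penalty are our assumptions, not the authors' exact loss.

```python
import random

def mask_patches(frame, mask_ratio=0.5, mask_value=0.0, seed=0):
    """Withhold a random subset of patches from a frame
    (represented here as a flat list of patch values)."""
    rng = random.Random(seed)
    masked = list(frame)
    for i in rng.sample(range(len(frame)), int(len(frame) * mask_ratio)):
        masked[i] = mask_value
    return masked

def consistency_loss(pred_full, pred_masked):
    """Mean squared difference between predictions on the full frame
    and on the masked frame."""
    return sum((a - b) ** 2 for a, b in zip(pred_full, pred_masked)) / len(pred_full)
```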
no code implementations • 6 Jun 2024 • Ruipu Wu, Jifei Che, Han Li, Chengjing Wu, Ting Liu, Luoqi Liu
Video panoptic segmentation is an advanced task that extends panoptic segmentation by applying its concept to video sequences.
1 code implementation • 23 May 2024 • Ting Liu, Xuyang Liu, Liangtao Shi, Zunnan Xu, Siteng Huang, Yi Xin, Quanjun Yin
Sparse-Tuning efficiently fine-tunes the pre-trained ViT by sparsely preserving the informative tokens and merging redundant ones, enabling the ViT to focus on the foreground while reducing computational costs on background regions in the images.
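The preserve-and-merge idea can be illustrated as follows; the precomputed scores and the mean-merge rule are simplifying assumptions on our part, not the method's exact procedure.

```python
def sparse_tokens(tokens, scores, keep=4):
    """Keep the `keep` highest-scoring tokens and average the remaining
    (redundant) ones into a single merged token."""
    order = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    kept = [tokens[i] for i in order[:keep]]
    rest = [tokens[i] for i in order[keep:]]
    merged = [sum(vals) / len(rest) for vals in zip(*rest)]
    return kept + [merged]

# Six 1-D "tokens" scored by informativeness; the two lowest are merged.
reduced = sparse_tokens([[float(i)] for i in range(6)], scores=list(range(6)))
```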
no code implementations • 15 May 2024 • Guozhang Liu, Ting Liu, Mengke Yuan, Tao Pang, Guangxing Yang, Hao Fu, Tao Wang, Tongkui Liao
The ambiguous appearance, tiny scale, and fine-grained classes of objects in remote sensing imagery inevitably lead to the noisy annotations in category labels of detection dataset.
1 code implementation • 10 May 2024 • Ting Liu, Xuyang Liu, Siteng Huang, Honggang Chen, Quanjun Yin, Long Qin, Donglin Wang, Yue Hu
Specifically, we propose DARA, a novel PETL method comprising Domain-aware Adapters (DA Adapters) and Relation-aware Adapters (RA Adapters) for VG.
no code implementations • 5 May 2024 • Chengpeng Fu, Xiaocheng Feng, Yichong Huang, Wenshuai Huo, Baohang Li, Hui Wang, Bing Qin, Ting Liu
Leveraging large language models for machine translation has demonstrated promising results.
no code implementations • 20 Apr 2024 • Guohao Wang, Ting Liu, Hongqiang Lyu, Ze Liu
The results highlight the effectiveness of biological language models in capturing both the order (sequential) and functional meaning (semantics) within genomes.
1 code implementation • 19 Apr 2024 • Yichong Huang, Xiaocheng Feng, Baohang Li, Yang Xiang, Hui Wang, Bing Qin, Ting Liu
To address this challenge, DeePEn maps the probability distribution of each model from its own probability space to a universal relative space based on the relative representation theory, and performs aggregation.
1 code implementation • 2 Apr 2024 • Zhouhao Sun, Xiao Ding, Li Du, Bibo Cai, Jinglong Gao, Ting Liu, Bing Qin
To address this issue, we propose a novel framework, named Generalizable and Faithful Reasoner (GFaiR), which introduces the paradigm of resolution refutation.
1 code implementation • 25 Mar 2024 • Yirong Zeng, Xiao Ding, Yi Zhao, Xiangyu Li, Jie Zhang, Chao Yao, Ting Liu, Bing Qin
Furthermore, we construct RU22Fact, a novel multilingual explainable fact-checking dataset on the Russia-Ukraine conflict in 2022 of 16K samples, each containing real-world claims, optimized evidence, and referenced explanation.
1 code implementation • 23 Mar 2024 • Jiacheng Ruan, Jingsheng Gao, Mingye Xie, Daize Dong, Suncheng Xiang, Ting Liu, Yuzhuo Fu
The Adapter-Tuning (AT) method involves freezing a pre-trained model and introducing trainable adapter modules to acquire downstream knowledge, thereby calibrating the model for better adaptation to downstream tasks.
no code implementations • 14 Mar 2024 • Kai Xiong, Xiao Ding, Ting Liu, Bing Qin, Dongliang Xu, Qing Yang, Hongtao Liu, Yixin Cao
Large language models (LLMs) have developed impressive performance and strong explainability across various reasoning scenarios, marking a significant stride towards mimicking human-like intelligence.
no code implementations • 4 Mar 2024 • Nuwa Xi, Yuhan Chen, Sendong Zhao, Haochun Wang, Bing Qin, Ting Liu
Chain-of-Thought (CoT) serves as a critical emerging ability in LLMs, especially when it comes to logical reasoning.
no code implementations • 20 Feb 2024 • Long Zhao, Nitesh B. Gundavarapu, Liangzhe Yuan, Hao Zhou, Shen Yan, Jennifer J. Sun, Luke Friedman, Rui Qian, Tobias Weyand, Yue Zhao, Rachel Hornung, Florian Schroff, Ming-Hsuan Yang, David A. Ross, Huisheng Wang, Hartwig Adam, Mikhail Sirotenko, Ting Liu, Boqing Gong
We introduce VideoPrism, a general-purpose video encoder that tackles diverse video understanding tasks with a single frozen model.
no code implementations • 18 Feb 2024 • Yang Zhao, Li Du, Xiao Ding, Kai Xiong, Zhouhao Sun, Jun Shi, Ting Liu, Bing Qin
Through pretraining on a corpus with various sources, Large Language Models (LLMs) have gained impressive performance.
no code implementations • 16 Feb 2024 • Qi Shi, Han Cui, Haofeng Wang, Qingfu Zhu, Wanxiang Che, Ting Liu
Question answering over heterogeneous data requires reasoning over diverse sources of data, which is challenging due to the large scale of information and organic coupling of heterogeneous data.
no code implementations • 2 Feb 2024 • Haochun Wang, Sendong Zhao, Zewen Qiang, Nuwa Xi, Bing Qin, Ting Liu
In the field of natural language processing (NLP), Large Language Models (LLMs) have precipitated a paradigm shift, markedly enhancing performance in natural language generation tasks.
Multiple-choice Multiple Choice Question Answering (MCQA) +1
no code implementations • 29 Jan 2024 • Haochun Wang, Sendong Zhao, Zewen Qiang, Nuwa Xi, Bing Qin, Ting Liu
Automatic diagnosis is a significant application of AI in healthcare, where diagnoses are generated based on the symptom description of patients.
no code implementations • 11 Jan 2024 • Yue Zhao, Long Zhao, Xingyi Zhou, Jialin Wu, Chun-Te Chu, Hui Miao, Florian Schroff, Hartwig Adam, Ting Liu, Boqing Gong, Philipp Krähenbühl, Liangzhe Yuan
Our best model outperforms state-of-the-art methods on MSR-VTT zero-shot text-to-video retrieval by 6%.
no code implementations • 10 Jan 2024 • Yichong Huang, Xiaocheng Feng, Baohang Li, Chengpeng Fu, Wenshuai Huo, Ting Liu, Bing Qin
To align the translation-specific understanding with the general one, we propose a novel translation process, xIoD (Cross-Lingual Interpretation of Difficult words), which explicitly incorporates the general understanding of content prone to inconsistent interpretation to guide the translation.
no code implementations • 4 Jan 2024 • Zeyu Li, Jingsheng Gao, Tong Yu, Suncheng Xiang, Jiacheng Ruan, Ting Liu, Yuzhuo Fu
Existing research on audio classification faces challenges in recognizing attributes of passive underwater vessel scenarios and lacks well-annotated datasets due to data privacy concerns.
no code implementations • 28 Dec 2023 • Liang Zhao, Xiaocheng Feng, Xiachong Feng, Dongliang Xu, Qing Yang, Hongtao Liu, Bing Qin, Ting Liu
In this survey, we present these advances towards length extrapolation in a unified notation from the perspective of PE.
1 code implementation • 13 Dec 2023 • Jingsheng Gao, Jiacheng Ruan, Suncheng Xiang, Zefang Yu, Ke Ji, Mingye Xie, Ting Liu, Yuzhuo Fu
We conduct experiments on 11 downstream vision datasets and demonstrate that our method significantly improves the performance of existing multi-modal prompt learning models in few-shot scenarios, exhibiting an average accuracy improvement of 2.31% compared to the state-of-the-art methods on 16 shots.
1 code implementation • 12 Dec 2023 • Jiacheng Ruan, Jingsheng Gao, Mingye Xie, Suncheng Xiang, Zefang Yu, Ting Liu, Yuzhuo Fu
2) They neglect the interaction between the intrinsic task-agnostic knowledge of pre-trained models and the task-specific knowledge in downstream tasks.
no code implementations • 29 Nov 2023 • Ting Liu, Yue Hu, Wansen Wu, Youkai Wang, Kai Xu, Quanjun Yin
Then we introduce soft visual prompts in the input space of the visual encoder in a pretrained model.
no code implementations • 10 Nov 2023 • Zhangyin Feng, Weitao Ma, Weijiang Yu, Lei Huang, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu
In this paper, we present a review of the trends in integrating knowledge and large language models, including a taxonomy of methods, benchmarks, and applications.
1 code implementation • 9 Nov 2023 • Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu
The emergence of large language models (LLMs) has marked a significant breakthrough in natural language processing (NLP), leading to remarkable advancements in text understanding and generation.
no code implementations • 3 Nov 2023 • Derek Jacoby, Donglin Xu, Weder Ribas, Minyi Xu, Ting Liu, Vishwanath Jayaraman, Mengdi Wei, Emma De Blois, Yvonne Coady
Since their introduction in 2020, Neural Radiance Fields (NeRFs) have taken the computer vision community by storm.
no code implementations • 1 Nov 2023 • You Zhou, Xiujing Lin, Xiang Zhang, Maolin Wang, Gangwei Jiang, Huakang Lu, Yupeng Wu, Kai Zhang, Zhe Yang, Kehang Wang, Yongduo Sui, Fengwei Jia, Zuoli Tang, Yao Zhao, Hongxuan Zhang, Tiannuo Yang, Weibo Chen, Yunong Mao, Yi Li, De Bao, Yu Li, Hongrui Liao, Ting Liu, Jingwen Liu, Jinchi Guo, Xiangyu Zhao, Ying Wei, Hong Qian, Qi Liu, Xiang Wang, Wai Kin Chan, Chenliang Li, Yusen Li, Shiyu Yang, Jining Yan, Chao Mou, Shuai Han, Wuxia Jin, Guannan Zhang, Xiaodong Zeng
To tackle the challenges of computing resources and environmental impact of AI, Green Computing has become a hot research topic.
1 code implementation • 8 Oct 2023 • Yushan Qian, Wei-Nan Zhang, Ting Liu
Empathetic dialogue is an indispensable part of building harmonious social relationships and contributes to the development of a helpful AI.
1 code implementation • 27 Sep 2023 • Zheng Chu, Jingchang Chen, Qianglong Chen, Weijiang Yu, Tao He, Haotian Wang, Weihua Peng, Ming Liu, Bing Qin, Ting Liu
We hope this paper serves as an introduction for beginners and fosters future research.
1 code implementation • 8 Sep 2023 • Haochun Wang, Sendong Zhao, Chi Liu, Nuwa Xi, MuZhen Cai, Bing Qin, Ting Liu
Experimental results indicate that even without tuning any parameters, our LLE-INC is on par with automated verbalizers with parameter tuning.
1 code implementation • 8 Sep 2023 • Haochun Wang, Sendong Zhao, Zewen Qiang, Zijian Li, Nuwa Xi, Yanrui Du, MuZhen Cai, Haoqiang Guo, Yuhan Chen, Haoming Xu, Bing Qin, Ting Liu
To address this challenge, we propose knowledge-tuning, which leverages structured medical knowledge bases for the LLMs to grasp domain knowledge efficiently and facilitate reliable response generation.
no code implementations • 7 Sep 2023 • Ting Liu, Yue Hu, Wansen Wu, Youkai Wang, Kai Xu, Quanjun Yin
In the indoor-aware stage, we apply an efficient tuning paradigm to learn deep visual prompts from an indoor dataset, in order to augment pretrained models with inductive biases towards indoor environments.
no code implementations • 2 Sep 2023 • Di Liu, Long Zhao, Qilong Zhangli, Yunhe Gao, Ting Liu, Dimitris N. Metaxas
The task of shape abstraction with semantic part consistency is challenging due to the complex geometries of natural objects.
1 code implementation • 26 Aug 2023 • Gongjin Lan, Zhenyu Gao, Lingyao Tong, Ting Liu
In this paper, we apply class binarization techniques to a neuroevolution algorithm, NeuroEvolution of Augmenting Topologies (NEAT), which is used to generate neural networks for multiclass classification.
no code implementations • ICCV 2023 • Jiawei Lin, Jiaqi Guo, Shizhao Sun, Weijiang Xu, Ting Liu, Jian-Guang Lou, Dongmei Zhang
To model combined and incomplete constraints, we use a Transformer-based layout generation model and carefully design a way to represent constraints and layouts as sequences.
1 code implementation • ICCV 2023 • Qitong Wang, Long Zhao, Liangzhe Yuan, Ting Liu, Xi Peng
To facilitate the data efficiency of multiview learning, we further perform video-text alignment for first-person and third-person videos, to fully leverage the semantic knowledge to improve video representations.
no code implementations • 15 Aug 2023 • Ziyu Zhuang, Qiguang Chen, Longxuan Ma, Mingda Li, Yi Han, Yushan Qian, Haopeng Bai, Zixian Feng, Weinan Zhang, Ting Liu
From pre-trained language model (PLM) to large language model (LLM), the field of natural language processing (NLP) has witnessed steep performance gains and wide practical uses.
no code implementations • 10 Aug 2023 • Guozhang Liu, Baochai Peng, Ting Liu, Pan Zhang, Mengke Yuan, Chaoran Lu, Ningning Cao, Sen Zhang, Simin Huang, Tao Wang
The diversity of building architecture styles of global cities situated on various landforms, the degraded optical imagery affected by clouds and shadows, and the significant inter-class imbalance of roof types pose challenges for designing a robust and accurate building roof instance segmentor.
no code implementations • 10 Aug 2023 • Chaoran Lu, Ningning Cao, Pan Zhang, Ting Liu, Baochai Peng, Guozhang Liu, Mengke Yuan, Sen Zhang, Simin Huang, Tao Wang
Unifying the correlative single-view satellite image building extraction and height estimation tasks indicates a promising way to share representations and acquire a generalist model for large-scale urban 3D reconstruction.
1 code implementation • 17 Jul 2023 • Jiacheng Ruan, Mingye Xie, Jingsheng Gao, Ting Liu, Yuzhuo Fu
Moreover, to the best of our knowledge, this is the first model with a parameter count limited to just 50KB.
1 code implementation • 6 Jul 2023 • Liangzhe Yuan, Nitesh Bharadwaj Gundavarapu, Long Zhao, Hao Zhou, Yin Cui, Lu Jiang, Xuan Yang, Menglin Jia, Tobias Weyand, Luke Friedman, Mikhail Sirotenko, Huisheng Wang, Florian Schroff, Hartwig Adam, Ming-Hsuan Yang, Ting Liu, Boqing Gong
We evaluate existing foundation models' video understanding capabilities using a carefully designed experiment protocol consisting of three hallmark tasks (action recognition, temporal localization, and spatiotemporal localization), eight datasets well received by the community, and four adaptation methods tailoring a foundation model (FM) for a downstream task.
no code implementations • 6 Jul 2023 • Nuwa Xi, Sendong Zhao, Haochun Wang, Chi Liu, Bing Qin, Ting Liu
In this paper, we propose fMRI2text, the first open-vocabulary task aiming to bridge fMRI time series and human language.
1 code implementation • 9 Jun 2023 • Longxuan Ma, Weinan Zhang, Shuhan Zhou, Churui Sun, Changxin Ke, Ting Liu
Meanwhile, the MSD data can also be used on dialogue tasks to test the ability of dialogue systems when using similes.
1 code implementation • 19 May 2023 • Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin
Through extensive experiments on various datasets, we find that LLMs can effectively collaborate to reach a consensus despite noticeable inter-inconsistencies, but imbalances in their abilities can lead to domination by superior LLMs.
1 code implementation • 18 May 2023 • Tingting Wu, Xiao Ding, Minji Tang, Hao Zhang, Bing Qin, Ting Liu
To mitigate the effects of label noise, learning with noisy labels (LNL) methods are designed to achieve better generalization performance.
1 code implementation • 12 May 2023 • Jinglong Gao, Xiao Ding, Bing Qin, Ting Liu
Causal reasoning ability is crucial for numerous NLP applications.
no code implementations • 9 May 2023 • Bo Sun, Baoxin Wang, YiXuan Wang, Wanxiang Che, Dayong Wu, Shijin Wang, Ting Liu
Our experiments show that powerful pre-trained models perform poorly on this corpus.
no code implementations • 25 Apr 2023 • Haoyu Chu, Shikui Wei, Ting Liu, Yao Zhao, Yuto Miyatake
Deep equilibrium (DEQ) models have emerged as a promising class of implicit layer models, which abandon traditional depth by solving for the fixed points of a single nonlinear layer.
1 code implementation • 19 Apr 2023 • Suncheng Xiang, Jingsheng Gao, Mengyuan Guan, Jiacheng Ruan, Chengfeng Zhou, Ting Liu, Dahong Qian, Yuzhuo Fu
In this paper, we propose a Multi-Modal Equivalent Transformer called MMET for more robust visual-semantic embedding learning on visual, textual and visual-textual tasks respectively.
Generalizable Person Re-identification Representation Learning
1 code implementation • 14 Apr 2023 • Haochun Wang, Chi Liu, Nuwa Xi, Zewen Qiang, Sendong Zhao, Bing Qin, Ting Liu
Large Language Models (LLMs), such as the LLaMA model, have demonstrated their effectiveness in various general-domain natural language processing (NLP) tasks.
1 code implementation • ICCV 2023 • Boyang Li, Yingqian Wang, Longguang Wang, Fei Zhang, Ting Liu, Zaiping Lin, Wei An, Yulan Guo
The core idea of this work is to recover the per-pixel mask of each target from the given single point label by using clustering approaches, which looks simple but is indeed challenging, since targets are always non-salient and accompanied by background clutter.
no code implementations • 28 Mar 2023 • Yuanhao Xiong, Long Zhao, Boqing Gong, Ming-Hsuan Yang, Florian Schroff, Ting Liu, Cho-Jui Hsieh, Liangzhe Yuan
Existing video-language pre-training methods primarily focus on instance-level alignment between video clips and captions via global contrastive learning but neglect rich fine-grained local information in both videos and text, which is of importance to downstream tasks requiring temporal localization and semantic reasoning.
1 code implementation • ICCV 2023 • Long Zhao, Liangzhe Yuan, Boqing Gong, Yin Cui, Florian Schroff, Ming-Hsuan Yang, Hartwig Adam, Ting Liu
To address this challenge, we propose UniVRD, a novel bottom-up method for Unified Visual Relationship Detection by leveraging vision and language models (VLMs).
Human-Object Interaction Detection Relationship Detection +2
2 code implementations • 16 Mar 2023 • Zhuowei Li, Long Zhao, Zizhao Zhang, Han Zhang, Di Liu, Ting Liu, Dimitris N. Metaxas
In the context of continual learning, prototypes, as representative class embeddings, offer advantages in memory conservation and the mitigation of catastrophic forgetting.
1 code implementation • 16 Feb 2023 • Jingsheng Gao, Zeyu Li, Suncheng Xiang, Ting Liu, Yuzhuo Fu
A huge number of multi-participant dialogues happen online every day, which leads to difficulty in understanding the nature of dialogue dynamics for both humans and machines.
no code implementations • 25 Jan 2023 • Jiali Wei, Ming Fan, Wenjing Jiao, Wuxia Jin, Ting Liu
We also make the first attempt to defend against the latest style-level backdoor attacks.
1 code implementation • CVPR 2023 • Zheng Xu, Maxwell Collins, Yuxiao Wang, Liviu Panait, Sewoong Oh, Sean Augenstein, Ting Liu, Florian Schroff, H. Brendan McMahan
Small on-device models have been successfully trained with user-level differential privacy (DP) for next word prediction and image classification tasks in the past.
1 code implementation • 10 Nov 2022 • Yiming Cui, Wanxiang Che, Shijin Wang, Ting Liu
We propose LERT, a pre-trained language model that is trained on three types of linguistic features along with the original MLM pre-training task, using a linguistically-informed pre-training (LIP) strategy.
Ranked #6 on Stock Market Prediction on Astock
no code implementations • 9 Nov 2022 • Tyler R. Scott, Ting Liu, Michael C. Mozer, Andrew C. Gallagher
Recent research in clustering face embeddings has found that unsupervised, shallow, heuristic-based methods -- including $k$-means and hierarchical agglomerative clustering -- underperform supervised, deep, inductive methods.
1 code implementation • 3 Nov 2022 • Jiacheng Ruan, Suncheng Xiang, Mingye Xie, Ting Liu, Yuzhuo Fu
To address this challenge, we propose a lightweight model that achieves competitive performance for skin lesion segmentation with the lowest parameter count and computational complexity so far.
1 code implementation • 2 Nov 2022 • Suncheng Xiang, Hao Chen, Wei Ran, Zefang Yu, Ting Liu, Dahong Qian, Yuzhuo Fu
Person re-identification plays a significant role in realistic scenarios due to its various applications in public security and video surveillance.
Domain Generalization Generalizable Person Re-identification +2
1 code implementation • 25 Oct 2022 • Jiacheng Ruan, Mingye Xie, Suncheng Xiang, Ting Liu, Yuzhuo Fu
Specifically, our block performs a Fourier transform on the three axes of the input feature and assigns the external weight in the frequency domain, which is generated by our Weights Generator.
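A rough sketch of frequency-domain weighting along all three feature axes; the weight here is a fixed array for illustration, whereas in the model it is produced by the Weights Generator.

```python
import numpy as np

def fourier_weight_block(x, weight):
    """Modulate a feature map in the frequency domain.

    x:      (C, H, W) real-valued feature.
    weight: (C, H, W) external weight applied to the spectrum.
    """
    freq = np.fft.fftn(x, axes=(0, 1, 2))   # transform along all three axes
    freq = freq * weight                    # frequency-domain modulation
    return np.fft.ifftn(freq, axes=(0, 1, 2)).real

x = np.random.rand(2, 4, 4)
w = np.ones_like(x)          # identity weight: output should equal input
y = fourier_weight_block(x, w)
```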
1 code implementation • 28 Sep 2022 • Tianhao Wu, Boyang Li, Yihang Luo, Yingqian Wang, Chao Xiao, Ting Liu, Jungang Yang, Wei An, Yulan Guo
Due to the extremely large image coverage area (e.g., thousands of square kilometers), candidate targets in these images are much smaller, dimmer, and more changeable than those targets observed by aerial-based and land-based imaging devices.
1 code implementation • COLING 2022 • Haochun Wang, Chi Liu, Nuwa Xi, Sendong Zhao, Meizhi Ju, Shiwei Zhang, Ziheng Zhang, Yefeng Zheng, Bing Qin, Ting Liu
Prompt-based fine-tuning for pre-trained models has proven effective for many natural language processing tasks under few-shot settings in the general domain.
no code implementations • 21 Aug 2022 • Tingting Wu, Xiao Ding, Hao Zhang, Jinglong Gao, Li Du, Bing Qin, Ting Liu
To relieve this issue, curriculum learning is proposed to improve model performance and generalization by ordering training samples in a meaningful (e.g., easy to hard) sequence.
1 code implementation • COLING 2022 • Longxuan Ma, Ziyu Zhuang, Weinan Zhang, Mingda Li, Ting Liu
This paper introduces a novel Self-supervised Fine-grained Dialogue Evaluation framework (SelF-Eval).
no code implementations • 14 Aug 2022 • Bowen Chen, Xiao Ding, Li Du, Qin Bing, Ting Liu
Given a task, humans learn from easy to hard, whereas models learn in a random order.
no code implementations • 2 Jun 2022 • Jiazhou Wang, Jue Tian, Yang Liu, Xiaohong Guan, Dong Yang, Ting Liu
We prove that a designed MMTD can significantly improve the detection capability compared to existing one-stage MTDs.
no code implementations • Findings (ACL) 2022 • Li Du, Xiao Ding, Yue Zhang, Kai Xiong, Ting Liu, Bing Qin
To this end, we incorporate an additional structured variable into BERT to learn to predict the event connections in the training process.
1 code implementation • 16 May 2022 • Ming Fan, Wenying Wei, Wuxia Jin, Zijiang Yang, Ting Liu
ExpGA employs the explanation results generated by interpretable methods to collect high-quality initial seeds, which are prone to derive discriminatory samples by slightly modifying feature values.
no code implementations • 15 Apr 2022 • Bo Sun, Baoxin Wang, Wanxiang Che, Dayong Wu, Zhigang Chen, Ting Liu
These errors have been studied extensively and are relatively simple for humans.
1 code implementation • ICLR 2022 • Juntang Zhuang, Boqing Gong, Liangzhe Yuan, Yin Cui, Hartwig Adam, Nicha Dvornek, Sekhar Tatikonda, James Duncan, Ting Liu
Instead, we define a surrogate gap, a measure equivalent to the dominant eigenvalue of the Hessian at a local minimum when the radius of the neighborhood (used to derive the perturbed loss) is small.
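In common sharpness-aware notation (our reconstruction from the description above, not a quotation of the paper), with perturbed loss $f_p(\theta) = \max_{\|\delta\| \le \rho} f(\theta + \delta)$, the surrogate gap is

$h(\theta) = f_p(\theta) - f(\theta)$,

which vanishes at a perfectly flat minimum and, for small radius $\rho$, scales with the dominant Hessian eigenvalue, roughly $h(\theta) \approx \tfrac{\rho^2}{2}\,\lambda_{\max}$.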
1 code implementation • 14 Mar 2022 • Yiming Cui, Ziqing Yang, Ting Liu
We permute a proportion of the input text, and the training objective is to predict the position of the original token.
Ranked #4 on Stock Market Prediction on Astock
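The permutation objective above can be sketched as a small data-preparation step; the cyclic-permutation choice and the names are ours, not the paper's exact recipe.

```python
import random

def permute_tokens(tokens, ratio=0.25, seed=0):
    """Permute a proportion of token positions.

    Returns the permuted sequence and, for each moved slot src, the target
    position labels[src] where the token originally at src ended up; the
    training objective is to predict these positions.
    """
    rng = random.Random(seed)
    n = max(2, round(len(tokens) * ratio))
    chosen = rng.sample(range(len(tokens)), n)
    rotated = chosen[1:] + chosen[:1]       # simple cyclic permutation
    permuted = list(tokens)
    labels = {}
    for src, dst in zip(chosen, rotated):
        permuted[dst] = tokens[src]
        labels[src] = dst                   # token from src now sits at dst
    return permuted, labels
```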
no code implementations • ACL 2022 • Haoyu Song, Li Dong, Wei-Nan Zhang, Ting Liu, Furu Wei
We first evaluate CLIP's zero-shot performance on a typical visual question answering task and demonstrate a zero-shot cross-modality transfer capability of CLIP on the visual entailment task.
2 code implementations • 20 Jan 2022 • Qi Shi, Qian Liu, Bei Chen, Yu Zhang, Ting Liu, Jian-Guang Lou
In this work, we propose LEMON, a general framework for language-based environment manipulation tasks.
no code implementations • 19 Jan 2022 • Yanzhen Ren, Ting Liu, Liming Zhai, Lina Wang
Deep image steganography is a data-hiding technique that conceals data in digital images via deep neural networks.
no code implementations • 14 Jan 2022 • Gongjin Lan, Ting Liu, Xu Wang, Xueli Pan, Zhisheng Huang
In this paper, we propose an SW technology index to standardize development, ensure that work in SW technology is well designed, and quantitatively evaluate its quality.
no code implementations • CVPR 2022 • Yihan Zeng, Da Zhang, Chunwei Wang, Zhenwei Miao, Ting Liu, Xin Zhan, Dayang Hao, Chao Ma
LiDAR and camera are two common sensors to collect data in time for 3D object detection under the autonomous driving context.
no code implementations • CVPR 2022 • Yifan Wang, Wenbo Zhang, Lijun Wang, Ting Liu, Huchuan Lu
We design an Uncertainty Mining Network (UMNet) which consists of multiple Merge-and-Split (MS) modules to recursively analyze the commonality and difference among multiple noisy labels and infer pixel-wise uncertainty map for each label.
no code implementations • 22 Dec 2021 • Jingxiao Zheng, Xinwei Shi, Alexander Gorban, Junhua Mao, Yang song, Charles R. Qi, Ting Liu, Visesh Chari, Andre Cornman, Yin Zhou, CongCong Li, Dragomir Anguelov
3D human pose estimation (HPE) in autonomous vehicles (AV) differs from other use cases in many factors, including the 3D resolution and range of data, absence of dense depth maps, failure modes for LiDAR, relative location between the camera and LiDAR, and a high bar for estimation accuracy.
1 code implementation • 11 Dec 2021 • Honglu Zhou, Asim Kadav, Aviv Shamsian, Shijie Geng, Farley Lai, Long Zhao, Ting Liu, Mubbasir Kapadia, Hans Peter Graf
Group Activity Recognition detects the activity collectively performed by a group of actors, which requires compositional reasoning of actors and objects.
Ranked #2 on Group Activity Recognition on Collective Activity
1 code implementation • CVPR 2022 • Liangzhe Yuan, Rui Qian, Yin Cui, Boqing Gong, Florian Schroff, Ming-Hsuan Yang, Hartwig Adam, Ting Liu
Modern self-supervised learning algorithms typically enforce persistency of instance representations across views.
no code implementations • 8 Dec 2021 • Rui Qian, Yeqing Li, Liangzhe Yuan, Boqing Gong, Ting Liu, Matthew Brown, Serge Belongie, Ming-Hsuan Yang, Hartwig Adam, Yin Cui
The training objective consists of two parts: a fine-grained temporal learning objective to maximize the similarity between corresponding temporal embeddings in the short clip and the long clip, and a persistent temporal learning objective to pull together global embeddings of the two clips.
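A simplified cosine version of this two-part objective; the paper's losses are contrastive, so this sketch only shows the short-clip/long-clip alignment structure.

```python
def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def temporal_objective(short_seq, long_seq):
    """Two-part objective: (1) fine-grained, aligning corresponding
    per-timestep embeddings of the short and long clip; (2) persistent,
    aligning their mean-pooled global embeddings."""
    def pool(seq):
        return [sum(d) / len(seq) for d in zip(*seq)]
    fine = sum(1 - cosine(s, l) for s, l in zip(short_seq, long_seq)) / len(short_seq)
    persistent = 1 - cosine(pool(short_seq), pool(long_seq))
    return fine + persistent
```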
no code implementations • 13 Oct 2021 • Ting Liu, Wenwu Wang, Xiaofei Zhang, Zhenyin Gong, Yina Guo
Single-channel blind source separation (SCBSS) refers to separating multiple sources from a mixed signal collected by a single sensor.
1 code implementation • 11 Oct 2021 • Suncheng Xiang, Jingsheng Gao, Zirui Zhang, Mengyuan Guan, Binjie Yan, Ting Liu, Dahong Qian, Yuzhuo Fu
Pretraining is a dominant paradigm in computer vision.
1 code implementation • 22 Sep 2021 • Suncheng Xiang, Guanjie You, Mengyuan Guan, Hao Chen, Binjie Yan, Ting Liu, Yuzhuo Fu
Moreover, to fully exploit the potential of FineGPR and promote efficient training from millions of synthetic samples, we propose an attribute analysis pipeline called AOST, which dynamically learns the attribute distribution in the real domain and then eliminates the gap between synthetic and real-world data, allowing free deployment to new scenarios.
2 code implementations • EMNLP 2021 • Bo Zheng, Li Dong, Shaohan Huang, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, Furu Wei
We find that many languages are under-represented in recent cross-lingual language models due to the limited vocabulary capacity.
1 code implementation • EMNLP 2021 • Qi Shi, Yu Zhang, Qingyu Yin, Ting Liu
Specifically, we first retrieve logic-level program-like evidence from the given table and statement as supplementary evidence for the table.
no code implementations • 26 Aug 2021 • Yiming Cui, Wei-Nan Zhang, Wanxiang Che, Ting Liu, Zhigang Chen, Shijin Wang
Achieving human-level performance on some of the Machine Reading Comprehension (MRC) datasets is no longer challenging with the help of powerful Pre-trained Language Models (PLMs).
no code implementations • ACL 2021 • Wei Song, Shuhui Zhou, Ruiji Fu, Ting Liu, Lizhen Liu
Correct natural language understanding requires computers to distinguish the literal and metaphorical senses of a word.
1 code implementation • ACL 2021 • Li Du, Xiao Ding, Ting Liu, Bing Qin
Abductive reasoning aims at inferring the most plausible explanation for observed events, which would play critical roles in various NLP applications, such as reading comprehension and question answering.
no code implementations • ACL 2021 • Jiefu Gong, Xiao Hu, Wei Song, Ruiji Fu, Zhichao Sheng, Bo Zhu, Shijin Wang, Ting Liu
IFlyEA provides multi-level and multi-dimension analytical modules for essay assessment.
no code implementations • ACL 2021 • Qingfu Zhu, Wei-Nan Zhang, Ting Liu, William Yang Wang
Generating open-domain conversational responses in the desired style usually suffers from the lack of parallel data in the style.
no code implementations • ACL 2021 • Jiaqi Guo, Ziliang Si, Yu Wang, Qian Liu, Ming Fan, Jian-Guang Lou, Zijiang Yang, Ting Liu
However, we identify two biases in existing datasets for XDTS: (1) a high proportion of context-independent questions and (2) a high proportion of easy SQL queries.
1 code implementation • ACL 2021 • Li Du, Xiao Ding, Kai Xiong, Ting Liu, Bing Qin
ExCAR first acquires additional evidence information from a large-scale causal event graph as logical rules for causal reasoning.
no code implementations • 21 Jul 2021 • Zhongyang Li, Xiao Ding, Kuo Liao, Bing Qin, Ting Liu
Recent work has shown success in incorporating pre-trained models like BERT to improve NLP systems.
no code implementations • 21 Jul 2021 • Zhongyang Li, Xiao Ding, Ting Liu, J. Edward Hu, Benjamin Van Durme
We present a conditional text generation framework that posits sentential expressions of possible causes and effects.
1 code implementation • ACL 2021 • Bo Zheng, Li Dong, Shaohan Huang, Wenhui Wang, Zewen Chi, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, Furu Wei
Fine-tuning pre-trained cross-lingual language models can transfer task-specific supervision from one language to the others.
1 code implementation • ACL 2021 • Haoyu Song, Yan Wang, Kaiyan Zhang, Wei-Nan Zhang, Ting Liu
Maintaining consistent personas is essential for dialogue agents.
no code implementations • Joint Conference on Lexical and Computational Semantics 2021 • Ziqing Yang, Yiming Cui, Chenglei Si, Wanxiang Che, Ting Liu, Shijin Wang, Guoping Hu
Adversarial training (AT) as a regularization method has proved its effectiveness on various tasks.
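The entry above only names adversarial training as a regularizer. Purely as an illustrative sketch (a toy logistic model with invented weights and step size, not the paper's setup), an FGSM-style perturbation moves the input a small step along the sign of the loss gradient, and training on such perturbed inputs acts as the regularizer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_example(x, y, w, eps=0.1):
    """FGSM-style perturbation for a logistic model p = sigmoid(w.x).

    The cross-entropy gradient w.r.t. the input x is (p - y) * w; moving x
    a small step along the sign of that gradient yields a harder input.
    """
    p = sigmoid(w @ x)
    grad_x = (p - y) * w          # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)

def loss(x, y, w):
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, -0.4])
x_adv = adversarial_example(x, y=1.0, w=w, eps=0.05)
print(float(loss(x, 1.0, w)), float(loss(x_adv, 1.0, w)))
```

In AT, the model is then optimized on the clean loss plus the loss at `x_adv`, which smooths the decision surface around training points.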
1 code implementation • ACL 2021 • Libo Qin, Fuxuan Wei, Tianbao Xie, Xiao Xu, Wanxiang Che, Ting Liu
Multi-intent SLU can handle multiple intents in an utterance, which has attracted increasing attention.
Ranked #1 on Semantic Frame Parsing on MixATIS (Overall Accuracy metric)
1 code implementation • 31 May 2021 • Ting Liu, Jungang Yang, Boyang Li, Chao Xiao, Yang Sun, Yingqian Wang, Wei An
Considering that different singular values have different importance and should be treated discriminatively, in this paper, we propose a non-convex tensor low-rank approximation (NTLA) method for infrared small target detection.
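NTLA itself is a non-convex tensor method; as a much-simplified matrix sketch of the underlying idea only (the 1/(s + eps) weighting and all values below are illustrative assumptions, not the paper's algorithm), treating singular values discriminatively can mean shrinking small singular values (noise/background) far more than large ones (target/signal):

```python
import numpy as np

def weighted_svt(M, lam=1.0, eps=1e-6):
    """Weighted singular value thresholding (matrix sketch).

    Each singular value s_i is shrunk by lam * w_i with weight
    w_i = 1 / (s_i + eps), so large singular values survive nearly
    intact while small ones are suppressed to zero.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    shrunk = np.maximum(s - lam / (s + eps), 0.0)
    return U @ np.diag(shrunk) @ Vt

rng = np.random.default_rng(0)
low_rank = np.outer(rng.normal(size=8), rng.normal(size=8)) * 5  # rank-1 "background"
noisy = low_rank + 0.1 * rng.normal(size=(8, 8))                 # small dense noise
denoised = weighted_svt(noisy, lam=0.5)
print(np.linalg.norm(noisy - low_rank), np.linalg.norm(denoised - low_rank))
```

The second printed error is smaller than the first: the weighted shrinkage removes the noise components while barely touching the dominant singular value.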
1 code implementation • ACL 2021 • Xiachong Feng, Xiaocheng Feng, Libo Qin, Bing Qin, Ting Liu
Current dialogue summarization systems usually encode the text with a number of general semantic features (e.g., keywords and topics) to gain more powerful dialogue modeling capabilities.
no code implementations • Findings (ACL) 2021 • Yutai Hou, Yongkui Lai, Cheng Chen, Wanxiang Che, Ting Liu
However, dialogue language understanding contains two closely related tasks, i.e., intent detection and slot filling, and often benefits from jointly learning the two tasks.
1 code implementation • 10 May 2021 • Yiming Cui, Ting Liu, Wanxiang Che, Zhigang Chen, Shijin Wang
Achieving human-level performance on some of the Machine Reading Comprehension (MRC) datasets is no longer challenging with the help of powerful Pre-trained Language Models (PLMs).
Ranked #1 on Multi-Choice MRC on ExpMRC - RACE+ (test)
no code implementations • 26 Apr 2021 • Jiaqi Li, Ming Liu, Zihao Zheng, Heng Zhang, Bing Qin, Min-Yen Kan, Ting Liu
Multiparty Dialogue Machine Reading Comprehension (MRC) differs from traditional MRC as models must handle the complex dialogue discourse structure, previously unconsidered in traditional MRC.
Ranked #4 on Question Answering on Molweni
no code implementations • 17 Apr 2021 • Jianhua Yuan, Yanyan Zhao, Bing Qin, Ting Liu
To this end, we propose the BertMasker network which explicitly masks domain-related words from texts, learns domain-invariant sentiment features from these domain-agnostic texts, and uses those masked words to form domain-aware sentence representations.
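As a rough sketch of the masking idea described above (the domain-word list and mask token here are hypothetical; BertMasker learns which words are domain-related rather than using a fixed list):

```python
def mask_domain_words(tokens, domain_words, mask="[MASK]"):
    """Split a sentence into a domain-agnostic view (domain words masked,
    for domain-invariant sentiment features) and the masked words
    themselves (for a domain-aware representation)."""
    masked = [mask if t.lower() in domain_words else t for t in tokens]
    kept = [t for t in tokens if t.lower() in domain_words]
    return masked, kept

tokens = "the battery life of this camera is great".split()
masked, kept = mask_domain_words(tokens, {"battery", "camera"})
print(masked)  # → ['the', '[MASK]', 'life', 'of', 'this', '[MASK]', 'is', 'great']
print(kept)    # → ['battery', 'camera']
```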
1 code implementation • 6 Apr 2021 • Suncheng Xiang, Yuzhuo Fu, Mengyuan Guan, Ting Liu
Employing a clustering strategy to assign pseudo labels to unlabeled target images has become a trend for person re-identification (re-ID) algorithms in domain adaptation.
1 code implementation • 4 Mar 2021 • Libo Qin, Tianbao Xie, Wanxiang Che, Ting Liu
Spoken Language Understanding (SLU) aims to extract the semantics frame of user queries, which is a core component in a task-oriented dialog system.
no code implementations • 7 Feb 2021 • Nan Shao, Yiming Cui, Ting Liu, Shijin Wang, Guoping Hu
To deal with this challenge, most of the existing works consider paragraphs as nodes in a graph and propose graph-based methods to retrieve them.
no code implementations • 31 Dec 2020 • Jun Xu, Zeyang Lei, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che, Ting Liu
Learning interpretable dialog structure from human-human dialogs yields basic insights into the structure of conversation, and also provides background knowledge to facilitate dialog generation.
no code implementations • 29 Dec 2020 • Le Qi, Yu Zhang, Qingyu Yin, Ting Liu
Self-attention networks (SANs) have been widely utilized in recent NLP studies.
1 code implementation • 24 Dec 2020 • Libo Qin, Zhouyang Li, Wanxiang Che, Minheng Ni, Ting Liu
The dialog context information (contextual information) and the mutual interaction information are two key factors that contribute to the two related tasks.
1 code implementation • 13 Dec 2020 • Yutai Hou, Sanyuan Chen, Wanxiang Che, Cheng Chen, Ting Liu
Slot filling, a fundamental module of spoken language understanding, often suffers from insufficient quantity and diversity of training data.
1 code implementation • CVPR 2021 • Long Zhao, Yuxiao Wang, Jiaping Zhao, Liangzhe Yuan, Jennifer J. Sun, Florian Schroff, Hartwig Adam, Xi Peng, Dimitris Metaxas, Ting Liu
To evaluate the power of the learned representations, in addition to the conventional fully-supervised action recognition settings, we introduce a novel task called single-shot cross-view action recognition.
no code implementations • 2 Dec 2020 • Sendong Zhao, Bing Qin, Ting Liu, Fei Wang
This paper proposes BioGRER, a method to improve the quality of the BioKG by comprehensively combining knowledge graph embedding with logic rules that support or negate triplets in the BioKG.
1 code implementation • COLING 2020 • Heng Gong, Yawei Sun, Xiaocheng Feng, Bing Qin, Wei Bi, Xiaojiang Liu, Ting Liu
Although neural table-to-text models have achieved remarkable progress with the help of large-scale datasets, they suffer from an insufficient-learning problem with limited training data.
no code implementations • COLING 2020 • Qi Shi, Yu Zhang, Qingyu Yin, Ting Liu
Table-based fact verification is expected to perform both linguistic reasoning and symbolic reasoning.
no code implementations • SEMEVAL 2020 • Xiao Ding, Dingkui Hao, Yuewei Zhang, Kuo Liao, Zhongyang Li, Bing Qin, Ting Liu
In this task, we dedicate to detecting causation, especially counterfactuals from texts.
no code implementations • 13 Nov 2020 • Yiming Cui, Ting Liu, Shijin Wang, Guoping Hu
With the blooming of various Pre-trained Language Models (PLMs), Machine Reading Comprehension (MRC) has embraced significant improvements on various benchmarks and even surpassed human performance.
1 code implementation • COLING 2020 • Wentao Ma, Yiming Cui, Chenglei Si, Ting Liu, Shijin Wang, Guoping Hu
Most pre-trained language models (PLMs) construct word representations at the subword level with Byte-Pair Encoding (BPE) or its variations, by which OOV (out-of-vocabulary) words can almost always be avoided.
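As a toy illustration of why subword vocabularies make OOV words rare (a greedy longest-match segmenter over a hypothetical vocabulary, not the tokenizer of any particular PLM):

```python
def subword_tokenize(word, vocab):
    """Greedy longest-match subword segmentation (BPE/WordPiece-style sketch).

    Because every single character is in the vocab, any word can be
    segmented, so true OOV tokens are (almost) avoided.
    """
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):   # longest match first
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:                               # unreachable while chars are in vocab
            pieces.append(word[i])
            i += 1
    return pieces

# Hypothetical vocab; a real BPE model learns its merges from a corpus.
vocab = {"un", "break", "able"} | set("abcdefghijklmnopqrstuvwxyz")
print(subword_tokenize("unbreakable", vocab))   # → ['un', 'break', 'able']
print(subword_tokenize("zzzq", vocab))          # → ['z', 'z', 'z', 'q']
```

An unseen word like "zzzq" still tokenizes, just less efficiently, by falling back to single characters.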
no code implementations • CONLL 2020 • Longxu Dou, Yunlong Feng, Yuqiu Ji, Wanxiang Che, Ting Liu
This paper describes our submission system (HIT-SCIR) for the CoNLL 2020 shared task: Cross-Framework and Cross-Lingual Meaning Representation Parsing.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Heng Gong, Wei Bi, Xiaocheng Feng, Bing Qin, Xiaojiang Liu, Ting Liu
Neural table-to-text models, which select and order salient data and verbalize them fluently via surface realization, have achieved promising progress.
1 code implementation • EMNLP 2020 • Shaolei Wang, Zhongyuan Wang, Wanxiang Che, Ting Liu
Most existing approaches to disfluency detection heavily rely on human-annotated corpora, which is expensive to obtain in practice.
2 code implementations • 23 Oct 2020 • Ting Liu, Jennifer J. Sun, Long Zhao, Jiaping Zhao, Liangzhe Yuan, Yuxiao Wang, Liang-Chieh Chen, Florian Schroff, Hartwig Adam
Recognition of human poses and actions is crucial for autonomous systems to interact smoothly with people.
1 code implementation • CCL 2021 • Xiachong Feng, Xiaocheng Feng, Bing Qin, Ting Liu
In detail, we consider utterance and commonsense knowledge as two different types of data and design a Dialogue Heterogeneous Graph Network (D-HGN) for modeling both information.
no code implementations • 17 Oct 2020 • Yunchao Wei, Shuai Zheng, Ming-Ming Cheng, Hang Zhao, LiWei Wang, Errui Ding, Yi Yang, Antonio Torralba, Ting Liu, Guolei Sun, Wenguan Wang, Luc van Gool, Wonho Bae, Junhyug Noh, Jinhwan Seo, Gunhee Kim, Hao Zhao, Ming Lu, Anbang Yao, Yiwen Guo, Yurong Chen, Li Zhang, Chuangchuang Tan, Tao Ruan, Guanghua Gu, Shikui Wei, Yao Zhao, Mariia Dobko, Ostap Viniavskyi, Oles Dobosevych, Zhendong Wang, Zhenyuan Chen, Chen Gong, Huanqing Yan, Jun He
The purpose of the Learning from Imperfect Data (LID) workshop is to inspire and facilitate the research in developing novel approaches that would harness the imperfect data and improve the data-efficiency during training.
1 code implementation • NeurIPS 2020 • Long Zhao, Ting Liu, Xi Peng, Dimitris Metaxas
In this paper, we propose a novel and effective regularization term for adversarial data augmentation.
no code implementations • 15 Oct 2020 • Suncheng Xiang, Yuzhuo Fu, Guanjie You, Ting Liu
Person re-identification (re-ID) plays an important role in applications such as public security and video surveillance.
no code implementations • 11 Oct 2020 • Yutai Hou, Yongkui Lai, Yushan Wu, Wanxiang Che, Ting Liu
In this paper, we study the few-shot multi-label classification for user intent detection.
1 code implementation • 8 Oct 2020 • Libo Qin, Tailu Liu, Wanxiang Che, Bingbing Kang, Sendong Zhao, Ting Liu
Instead of adopting the self-attention mechanism in vanilla Transformer, we propose a co-interactive module to consider the cross-impact by building a bidirectional connection between the two related tasks.
1 code implementation • 8 Oct 2020 • Dechuan Teng, Libo Qin, Wanxiang Che, Sendong Zhao, Ting Liu
In this paper, we improve Chinese spoken language understanding (SLU) by injecting word information.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Longxuan Ma, Wei-Nan Zhang, Runxin Sun, Ting Liu
Unstructured documents serving as external knowledge for dialogues help generate more informative responses.
no code implementations • 1 Oct 2020 • Shaolei Wang, Baoxin Wang, Jiefu Gong, Zhongyuan Wang, Xiao Hu, Xingyi Duan, Zizhuo Shen, Gang Yue, Ruiji Fu, Dayong Wu, Wanxiang Che, Shijin Wang, Guoping Hu, Ting Liu
Grammatical error diagnosis is an important task in natural language processing.
1 code implementation • EMNLP (ACL) 2021 • Wanxiang Che, Yunlong Feng, Libo Qin, Ting Liu
We introduce N-LTP, an open-source neural language technology platform supporting six fundamental Chinese NLP tasks: lexical analysis (Chinese word segmentation, part-of-speech tagging, and named entity recognition), syntactic parsing (dependency parsing), and semantic parsing (semantic dependency parsing and semantic role labeling).
1 code implementation • EMNLP 2020 • Haoyu Song, Yan Wang, Wei-Nan Zhang, Zhengyu Zhao, Ting Liu, Xiaojiang Liu
Maintaining a consistent attribute profile is crucial for dialogue agents to naturally converse with humans.
3 code implementations • 17 Sep 2020 • Yutai Hou, Jiafeng Mao, Yongkui Lai, Cheng Chen, Wanxiang Che, Zhigang Chen, Ting Liu
In this paper, we present FewJoint, a novel Few-Shot Learning benchmark for NLP.
no code implementations • 16 Aug 2020 • Libo Qin, Wanxiang Che, Yangming Li, Minheng Ni, Ting Liu
In a dialog system, dialog act recognition and sentiment classification are two correlated tasks for capturing speakers' intentions, where dialog act and sentiment indicate the explicit and implicit intentions, respectively.
no code implementations • 13 Aug 2020 • Ming Fan, Wenying Wei, Xiaofei Xie, Yang Liu, Xiaohong Guan, Ting Liu
For this reason, a variety of explanation approaches are proposed to interpret predictions by providing important features.
Cryptography and Security • Software Engineering
no code implementations • ACL 2020 • Jun Xu, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che, Ting Liu
To address the challenge of policy learning in open-domain multi-turn conversation, we propose to represent prior information about dialog transitions as a graph and learn a graph grounded dialog policy, aimed at fostering a more coherent and controllable dialog.
no code implementations • ACL 2020 • Yangming Li, Kaisheng Yao, Libo Qin, Wanxiang Che, Xiaolong Li, Ting Liu
Data-driven approaches using neural networks have achieved promising performances in natural language generation (NLG).
1 code implementation • 26 Jun 2020 • Kaidi Jin, Tianwei Zhang, Chao Shen, Yufei Chen, Ming Fan, Chenhao Lin, Ting Liu
It is unknown whether there are any connections and common characteristics between the defenses against these two attacks.
no code implementations • 17 Jun 2020 • Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh V. Chawla, Meng Jiang
Noun phrases and relational phrases in Open Knowledge Bases are often not canonical, leading to redundant and ambiguous facts.
no code implementations • 12 Jun 2020 • Suncheng Xiang, Yuzhuo Fu, Guanjie You, Ting Liu
To address this problem, we first develop a large-scale synthetic data engine whose salient characteristic is controllability.
2 code implementations • ACL 2020 • Yutai Hou, Wanxiang Che, Yongkui Lai, Zhihan Zhou, Yijia Liu, Han Liu, Ting Liu
In this paper, we explore slot tagging with only a few labeled support sentences (a.k.a.
1 code implementation • ACL 2020 • Bo Zheng, Haoyang Wen, Yaobo Liang, Nan Duan, Wanxiang Che, Daxin Jiang, Ming Zhou, Ting Liu
Natural Questions is a new challenging machine reading comprehension benchmark with two-grained answers, which are a long answer (typically a paragraph) and a short answer (one or more entities inside the long answer).
2 code implementations • ACL 2020 • Zeming Liu, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che, Ting Liu
We propose a new task of conversational recommendation over multi-type dialogs, where the bots can proactively and naturally lead a conversation from a non-recommendation dialog (e.g., QA) to a recommendation dialog, taking into account the user's interests and feedback.
1 code implementation • ACL 2020 • Xinwei Geng, Long-Yue Wang, Xing Wang, Bing Qin, Ting Liu, Zhaopeng Tu
Self-attention networks (SANs) with a selective mechanism have produced substantial improvements in various NLP tasks by concentrating on a subset of input words.
no code implementations • 30 Apr 2020 • Libo Qin, Minheng Ni, Yue Zhang, Wanxiang Che, Yangming Li, Ting Liu
Spoken language understanding has been addressed as a supervised learning problem, where a set of training data is available for each domain.
1 code implementation • Findings (ACL) 2021 • Chenglei Si, Ziqing Yang, Yiming Cui, Wentao Ma, Ting Liu, Shijin Wang
To fill this important gap, we construct AdvRACE (Adversarial RACE), a new model-agnostic benchmark for evaluating the robustness of MRC models under four different types of adversarial attacks, including our novel distractor extraction and generation attacks.
no code implementations • 29 Apr 2020 • Qingfu Zhu, Wei-Nan Zhang, Ting Liu, William Yang Wang
Open-domain dialogue generation suffers from the data insufficiency problem due to the vast size of potential responses.
6 code implementations • Findings of the Association for Computational Linguistics 2020 • Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu
Bidirectional Encoder Representations from Transformers (BERT) has shown marvelous improvements across various NLP tasks, and consecutive variants have been proposed to further improve the performance of the pre-trained language models.
Ranked #13 on Stock Market Prediction on Astock
1 code implementation • ACL 2020 • Wentao Ma, Yiming Cui, Ting Liu, Dong Wang, Shijin Wang, Guoping Hu
Human conversations contain many types of information, e.g., knowledge, common sense, and language habits.
1 code implementation • EMNLP 2020 • Sanyuan Chen, Yutai Hou, Yiming Cui, Wanxiang Che, Ting Liu, Xiangzhan Yu
Deep pretrained language models have achieved great success by pretraining first and then fine-tuning.
1 code implementation • ACL 2020 • Libo Qin, Xiao Xu, Wanxiang Che, Yue Zhang, Ting Liu
However, there has been relatively little research on how to effectively use data from all domains to improve the performance of each domain and also unseen domains.
Ranked #1 on Task-Oriented Dialogue Systems on Kvret
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Libo Qin, Xiao Xu, Wanxiang Che, Ting Liu
Such an interaction layer is applied to each token adaptively, which has the advantage of automatically extracting the relevant intent information, enabling fine-grained intent information integration for token-level slot prediction.
no code implementations • 17 Apr 2020 • Longxuan Ma, Wei-Nan Zhang, Mingda Li, Ting Liu
We believe that extracting information from unstructured documents is the future trend of dialogue systems (DS), because a great amount of human knowledge lies in such documents.
no code implementations • ACL 2020 • Haoyu Song, Yan Wang, Wei-Nan Zhang, Xiaojiang Liu, Ting Liu
Maintaining a consistent personality in conversations is quite natural for human beings, but is still a non-trivial task for machines.
1 code implementation • COLING 2020 • Jiaqi Li, Ming Liu, Min-Yen Kan, Zihao Zheng, Zekun Wang, Wenqiang Lei, Ting Liu, Bing Qin
Research into the area of multiparty dialog has grown considerably over recent years.
Ranked #7 on Discourse Parsing on Molweni
no code implementations • EMNLP 2020 • Nan Shao, Yiming Cui, Ting Liu, Shijin Wang, Guoping Hu
We construct a strong baseline model to establish that, with the proper use of pre-trained models, graph structure may not be necessary for multi-hop question answering.
1 code implementation • COLING 2020 • Yiming Cui, Ting Liu, Ziqing Yang, Zhipeng Chen, Wentao Ma, Wanxiang Che, Shijin Wang, Guoping Hu
To add diversity in this area, in this paper, we propose a new task called Sentence Cloze-style Machine Reading Comprehension (SC-MRC).
1 code implementation • ACL 2020 • Ziqing Yang, Yiming Cui, Zhipeng Chen, Wanxiang Che, Ting Liu, Shijin Wang, Guoping Hu
In this paper, we introduce TextBrewer, an open-source knowledge distillation toolkit designed for natural language processing.
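TextBrewer bundles many distillation strategies; purely as a minimal sketch of the core soft-target idea (not TextBrewer's actual API, and the logits below are invented), knowledge distillation matches temperature-softened teacher and student output distributions:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-target KD loss: KL(teacher_T || student_T), scaled by T^2.

    A higher temperature T softens both distributions, so the student
    also learns the teacher's relative wrong-class probabilities
    ('dark knowledge'), not just the argmax label.
    """
    p = softmax(teacher_logits / T)
    q = softmax(student_logits / T)
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([4.0, 1.0, -1.0])
perfect = distillation_loss(teacher.copy(), teacher)       # identical outputs
worse = distillation_loss(np.array([1.0, 1.0, 1.0]), teacher)
print(perfect, worse)   # → 0.0 and a positive value
```

The loss is zero only when the student reproduces the teacher's distribution exactly; in practice this term is combined with the ordinary hard-label loss.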
1 code implementation • 24 Feb 2020 • Xiaocheng Feng, Yawei Sun, Bing Qin, Heng Gong, Yibo Sun, Wei Bi, Xiaojiang Liu, Ting Liu
In this paper, we focus on a new practical task, document-scale text content manipulation, which is the opposite of text style transfer and aims to preserve text styles while altering the content.
8 code implementations • Findings of the Association for Computational Linguistics 2020 • Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, Ming Zhou
Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation tasks.
Ranked #1 on Code Documentation Generation on CodeSearchNet - Go
1 code implementation • 15 Jan 2020 • Jennifer J. Sun, Ting Liu, Alan S. Cowen, Florian Schroff, Hartwig Adam, Gautam Prasad
The ability to predict evoked affect from a video, before viewers watch the video, can help in content creation and video recommendation.
no code implementations • 19 Dec 2019 • Xingyi Duan, Baoxin Wang, Ziyue Wang, Wentao Ma, Yiming Cui, Dayong Wu, Shijin Wang, Ting Liu, Tianxiang Huo, Zhen Hu, Heng Wang, Zhiyuan Liu
We present a Chinese judicial reading comprehension (CJRC) dataset which contains approximately 10K documents and almost 50K questions with answers.
no code implementations • 19 Dec 2019 • Yiming Cui, Wanxiang Che, Wei-Nan Zhang, Ting Liu, Shijin Wang, Guoping Hu
Story Ending Prediction is a task that needs to select an appropriate ending for the given story, which requires the machine to understand the story and sometimes needs commonsense knowledge.
2 code implementations • ECCV 2020 • Jennifer J. Sun, Jiaping Zhao, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Ting Liu
Depictions of similar human body configurations can vary with changing viewpoints.
Ranked #1 on Pose Retrieval on MPI-INF-3DHP
no code implementations • 27 Nov 2019 • Jennifer J. Sun, Ting Liu, Gautam Prasad
Towards a better understanding of viewer impact, we present our methods for the MediaEval 2018 Emotional Impact of Movies Task to predict the expected valence and arousal continuously in movies.
9 code implementations • CVPR 2020 • Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen
In this work, we introduce Panoptic-DeepLab, a simple, strong, and fast system for panoptic segmentation, aiming to establish a solid baseline for bottom-up methods that can achieve performance comparable to two-stage methods while yielding fast inference speed.
Ranked #6 on Panoptic Segmentation on Cityscapes test (using extra training data)
no code implementations • 14 Nov 2019 • Yiming Cui, Wei-Nan Zhang, Wanxiang Che, Ting Liu, Zhipeng Chen, Shijin Wang, Guoping Hu
Recurrent Neural Networks (RNNs) are known as powerful models for handling sequential data and are especially widely used in various natural language processing tasks.
1 code implementation • 14 Nov 2019 • Haoyu Song, Wei-Nan Zhang, Jingwen Hu, Ting Liu
Consistency is one of the major challenges faced by dialogue agents.
no code implementations • 9 Nov 2019 • Ziqing Yang, Yiming Cui, Wanxiang Che, Ting Liu, Shijin Wang, Guoping Hu
With virtual adversarial training (VAT), we explore the possibility of improving the RC models with semi-supervised learning and prove that examples from a different task are also beneficial.
no code implementations • 8 Nov 2019 • Haichao Zhu, Li Dong, Furu Wei, Bing Qin, Ting Liu
The limited size of existing query-focused summarization datasets renders training data-driven summarization models challenging.
no code implementations • 8 Nov 2019 • Jiaqi Li, Ming Liu, Bing Qin, Zihao Zheng, Ting Liu
In this paper, we propose the scheme for annotating large-scale multi-party chat dialogues for discourse parsing and machine comprehension.
no code implementations • IJCNLP 2019 • Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh Chawla, Meng Jiang
In this work, we propose a new sequence labeling framework (as well as a new tag schema) to jointly extract the fact and condition tuples from statement sentences.
no code implementations • IJCNLP 2019 • Ziyue Wang, Baoxin Wang, Xingyi Duan, Dayong Wu, Shijin Wang, Guoping Hu, Ting Liu
To our knowledge, IFlyLegal is the first Chinese legal system that employs up-to-date NLP techniques and caters to the needs of different user groups, such as lawyers, judges, procurators, and clients.
no code implementations • CONLL 2019 • Wanxiang Che, Longxu Dou, Yang Xu, Yuxuan Wang, Yijia Liu, Ting Liu
This paper describes our system (HIT-SCIR) for CoNLL 2019 shared task: Cross-Framework Meaning Representation Parsing.
Ranked #1 on UCCA Parsing on CoNLL 2019
2 code implementations • 10 Oct 2019 • Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen
The semantic segmentation branch is the same as the typical design of any semantic segmentation model (e.g., DeepLab), while the instance segmentation branch is class-agnostic, involving a simple instance center regression.
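The class-agnostic grouping step can be sketched in a few lines: each pixel votes for a location via its predicted offset and is assigned to the nearest predicted instance center (the centers, offsets, and coordinates below are toy values, and the real system works on dense 2D maps rather than flat arrays):

```python
import numpy as np

def group_pixels(centers, offsets, coords):
    """Assign each pixel to the instance whose center is nearest to
    (pixel + predicted offset) — a NumPy sketch of center-regression
    grouping.

    centers: (K, 2) instance centers; offsets, coords: (N, 2).
    Returns an (N,) array of instance ids in [0, K).
    """
    voted = coords + offsets                         # where each pixel points
    d2 = ((voted[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)                         # nearest center wins

centers = np.array([[0.0, 0.0], [10.0, 10.0]])
coords = np.array([[1.0, 1.0], [9.0, 8.0], [0.0, 2.0]])
offsets = np.array([[-1.0, -1.0], [1.0, 2.0], [0.0, -2.0]])
print(group_pixels(centers, offsets, coords))   # → [0 1 0]
```

Because the assignment needs no class label, the instance branch stays class-agnostic; semantics come from fusing these ids with the semantic segmentation branch.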
no code implementations • CONLL 2019 • Wentao Ma, Yiming Cui, Nan Shao, Su He, Wei-Nan Zhang, Ting Liu, Shijin Wang, Guoping Hu
The heart of TripleNet is a novel attention mechanism named triple attention, which models the relationships within the triple at four levels.
no code implementations • IJCNLP 2019 • Li Du, Xiao Ding, Ting Liu, Zhongyang Li
Understanding event and event-centered commonsense reasoning are crucial for natural language processing (NLP).
1 code implementation • IJCNLP 2019 • Yuxuan Wang, Wanxiang Che, Jiang Guo, Yijia Liu, Ting Liu
In this approach, a linear transformation is learned from contextual word alignments to align the contextualized embeddings independently trained in different languages.
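A standard closed-form choice for such a linear map learned from aligned word pairs (an assumption here for illustration, not necessarily the paper's exact estimator) is the orthogonal Procrustes solution, computed from one SVD:

```python
import numpy as np

def procrustes_align(X, Y):
    """Closed-form orthogonal map W minimizing ||X W - Y||_F.

    X, Y: (n, d) embeddings of aligned word pairs (source, target).
    W = U V^T from the SVD of X^T Y (orthogonal Procrustes solution),
    so W is a rotation/reflection and preserves distances in X-space.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(1)
d = 4
# Synthetic "target" space: the source rotated by a random orthogonal matrix.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
X = rng.normal(size=(50, d))
Y = X @ Q
W = procrustes_align(X, Y)
print(np.allclose(X @ W, Y, atol=1e-8))   # → True: the rotation is recovered
```

With real contextualized embeddings the fit is of course not exact, but the same closed-form map transfers one language's embedding space onto the other's without retraining either model.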
1 code implementation • IJCNLP 2019 • Libo Qin, Yijia Liu, Wanxiang Che, Haoyang Wen, Yangming Li, Ting Liu
Querying the knowledge base (KB) has long been a challenge in the end-to-end task-oriented dialogue system.
Ranked #6 on Task-Oriented Dialogue Systems on KVRET