1 code implementation • 20 Mar 2024 • Tongtian Yue, Jie Cheng, Longteng Guo, Xingyuan Dai, Zijia Zhao, Xingjian He, Gang Xiong, Yisheng Lv, Jing Liu
In this paper, we present and delve into the self-consistency capability of LVLMs, a crucial aspect that reflects the models' ability to both generate informative captions for specific objects and subsequently utilize these captions to accurately re-identify the objects in a closed-loop process.
1 code implementation • 27 Feb 2024 • Jie Cheng, Gang Xiong, Xingyuan Dai, Qinghai Miao, Yisheng Lv, Fei-Yue Wang
Our experiments on robotic manipulation and locomotion tasks demonstrate that RIME significantly enhances the robustness of the state-of-the-art PbRL method.
no code implementations • 22 Dec 2023 • Yizhen Jia, Hui Chen, Wen-Qin Wang, Jie Cheng
To overcome this problem and ensure robust beamforming for FCA, deviations in array control parameters (ACPs), array perturbations, and the effect of mutual coupling, in addition to looking-direction errors, should all be considered.
no code implementations • 19 Sep 2023 • Jie Cheng, Yingbing Chen, Xiaodong Mei, Bowen Yang, Bo Li, Ming Liu
In recent years, imitation-based driving planners have reported considerable success.
no code implementations • ICCV 2023 • Guiqin Wang, Peng Zhao, Cong Zhao, Shusen Yang, Jie Cheng, Luziwei Leng, Jianxing Liao, Qinghai Guo
To address this problem, we propose a novel attention-based hierarchically-structured latent model to learn the temporal variations of feature semantics.
2 code implementations • ICCV 2023 • Jie Cheng, Xiaodong Mei, Ming Liu
This study explores the application of self-supervised learning (SSL) to the task of motion forecasting, an area that has not yet been extensively investigated despite the widespread success of SSL in computer vision and natural language processing.
no code implementations • 24 Apr 2023 • Rui Zhang, Luziwei Leng, Kaiwei Che, Hu Zhang, Jie Cheng, Qinghai Guo, Jiangxing Liao, Ran Cheng
Leveraging the low-power, event-driven computation and the inherent temporal dynamics, spiking neural networks (SNNs) are potentially ideal solutions for processing dynamic and asynchronous signals from event-based sensors.
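The temporal dynamics mentioned above come from the spiking neuron model itself. As a minimal sketch (not the paper's architecture), a leaky integrate-and-fire (LIF) neuron integrates input current, decays toward rest, and emits a spike whenever its membrane potential crosses a threshold; all parameter names and values here are illustrative:

```python
import numpy as np

def lif_neuron(inputs, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron on an input-current train.

    Membrane potential v follows dv/dt = (-v + i_t) / tau (Euler steps);
    crossing v_thresh emits a spike and resets v to v_reset.
    """
    v = v_reset
    spikes = []
    for i_t in inputs:
        v += dt * (-v + i_t) / tau
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant suprathreshold current produces a regular spike train.
spike_train = lif_neuron(np.full(100, 2.0))
print(spike_train.sum())
```

Because computation happens only at spikes, layers of such neurons stay silent between events, which is the source of the low-power, event-driven behavior the abstract refers to.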
1 code implementation • 21 Mar 2023 • Saizhe Ding, Jinze Chen, Yang Wang, Yu Kang, Weiguo Song, Jie Cheng, Yang Cao
Event cameras, such as dynamic vision sensors (DVS), are biologically inspired vision sensors that have advanced over conventional cameras in high dynamic range, low latency and low power consumption, showing great application potential in many fields.
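Unlike a conventional camera, a DVS outputs an asynchronous stream of per-pixel brightness-change events rather than frames. A common preprocessing step (a generic sketch, not this paper's method) accumulates events into a signed count image:

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate DVS events into a signed event-count frame.

    `events` is an iterable of (x, y, t, polarity) tuples, with
    polarity +1 for a brightness increase and -1 for a decrease.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, t, p in events:
        frame[y, x] += p
    return frame

events = [(0, 0, 0.001, +1), (0, 0, 0.002, +1), (2, 1, 0.003, -1)]
frame = events_to_frame(events, height=3, width=3)
```

Because only changing pixels fire, the stream is sparse in static scenes and dense only around motion, which underlies the high-dynamic-range and low-latency advantages noted above.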
no code implementations • CVPR 2023 • Yushun Tang, Ce Zhang, Heng Xu, Shuoshuo Chen, Jie Cheng, Luziwei Leng, Qinghai Guo, Zhihai He
We observe that the performance of this feed-forward Hebbian learning for fully test-time adaptation can be significantly improved by incorporating a feedback neuro-modulation layer.
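Hebbian learning in its basic form strengthens weights between co-active units without any backpropagated error signal, which is what makes it attractive for test-time adaptation. A minimal sketch of one update (Oja-style normalization added to keep weights bounded; this is the generic rule, not the paper's specific layer or its feedback neuro-modulation):

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.01):
    """One Hebbian step: Δw = lr * (y xᵀ - y² ⊙ w).

    w: (out, in) weight matrix, x: (in,) input, y: (out,) output activity.
    The subtractive y² term (Oja's rule) prevents unbounded weight growth.
    """
    dw = lr * (np.outer(y, x) - (y ** 2)[:, None] * w)
    return w + dw

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(4, 8))
x = rng.normal(size=8)
y = w @ x                      # feed-forward activity
w_new = hebbian_update(w, x, y)
```

Since the update needs only local pre- and post-synaptic activity, it can run on unlabeled test batches, which is the setting of fully test-time adaptation.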
no code implementations • CVPR 2023 • Xinyuan Gao, Yuhang He, Songlin Dong, Jie Cheng, Xing Wei, Yihong Gong
Deep neural networks suffer from catastrophic forgetting in class incremental learning, where the classification accuracy of old classes drastically deteriorates when the networks learn the knowledge of new classes.
1 code implementation • 26 Apr 2022 • Peng Tao, Xiaohu Hao, Jie Cheng, Luonan Chen
Making an accurate prediction of an unknown system only from a short-term time series is difficult due to the lack of sufficient information, especially in a multi-step-ahead manner.
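The multi-step-ahead difficulty can be seen in the simplest iterative forecaster: fit a one-step model, then feed its own predictions back in, so errors compound with the horizon. A toy sketch with an AR(1) fit (purely illustrative, not the paper's method):

```python
import numpy as np

def iterative_multistep_forecast(series, steps):
    """Multi-step-ahead forecast by feeding predictions back as inputs.

    Fits y_{t+1} = a*y_t + b by least squares on the observed pairs,
    then rolls the model forward `steps` times from the last value.
    """
    x, y = series[:-1], series[1:]
    a, b = np.polyfit(x, y, 1)   # least-squares line y = a*x + b
    preds, last = [], series[-1]
    for _ in range(steps):
        last = a * last + b
        preds.append(last)
    return np.array(preds)

series = np.array([1.0, 2.0, 4.0, 8.0])  # short series, doubles each step
preds = iterative_multistep_forecast(series, 3)
```

With only four observations the fit is exact here, but on noisy real data each recursion amplifies the one-step error, which is precisely why short time series make multi-step prediction hard.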
1 code implementation • CVPR 2022 • Kaixuan Zhang, Kaiwei Che, JianGuo Zhang, Jie Cheng, Ziyang Zhang, Qinghai Guo, Luziwei Leng
Inspired by continuous dynamics of biological neuron models, we propose a novel encoding method for sparse events - continuous time convolution (CTC) - which learns to model the spatial feature of the data with intrinsic dynamics.
1 code implementation • CVPR 2021 • Kai Zhu, Yang Cao, Wei Zhai, Jie Cheng, Zheng-Jun Zha
Few-shot class-incremental learning aims to recognize new classes from only a few samples without forgetting the old classes.
no code implementations • 5 Apr 2021 • Dong He, Jie Cheng, Jong-Hwan Kim
This paper proposes the GSECnet - Ground Segmentation network for Edge Computing, an efficient ground segmentation framework of point clouds specifically designed to be deployable on a low-power edge computing unit.
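For orientation, the task itself is to label each LiDAR point as ground or non-ground. The crudest baseline is a height cut relative to the lowest point (a naive sketch only; GSECnet learns the segmentation and handles sloped or uneven terrain that this cut cannot):

```python
import numpy as np

def segment_ground(points, z_thresh=0.2):
    """Naive ground segmentation: label points near the minimum height.

    `points` is an (N, 3) array of x, y, z coordinates; returns a
    boolean mask that is True for points within z_thresh of the
    lowest observed height. Fails on slopes -- baseline only.
    """
    ground_z = points[:, 2].min()
    return points[:, 2] <= ground_z + z_thresh

pts = np.array([[0.0, 0.0, 0.0],   # road surface
                [1.0, 0.0, 0.05],  # road surface
                [0.0, 1.0, 1.5]])  # obstacle
mask = segment_ground(pts)
```

The gap between such a heuristic and a learned framework is what motivates a compact network that still fits on a low-power edge unit.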
no code implementations • 16 Apr 2020 • Tianyu Liu, Qinghai Liao, Lu Gan, Fulong Ma, Jie Cheng, Xupeng Xie, Zhe Wang, Yingbing Chen, Yilong Zhu, Shuyang Zhang, Zhengyong Chen, Yang Liu, Meng Xie, Yang Yu, Zitong Guo, Guang Li, Peidong Yuan, Dong Han, Yuying Chen, Haoyang Ye, Jianhao Jiao, Peng Yun, Zhenhua Xu, Hengli Wang, Huaiyang Huang, Sukai Wang, Peide Cai, Yuxiang Sun, Yandong Liu, Lujia Wang, Ming Liu
Moreover, many countries have imposed tough lockdown measures to reduce virus transmission (e.g., in retail and catering) during the pandemic, which causes inconveniences for human daily life.
no code implementations • 3 Jul 2015 • Zhao Kang, Chong Peng, Jie Cheng, Qiang Chen
Most of the recent studies use the nuclear norm as a convex surrogate of the rank operator.
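The surrogate is concrete: rank counts the nonzero singular values, while the nuclear norm replaces that non-convex count with their sum, giving a convex function that low-rank optimization can minimize. A quick numerical check:

```python
import numpy as np

def nuclear_norm(A):
    """Nuclear norm: the sum of the singular values of A.

    rank(A) = number of nonzero singular values (non-convex);
    the nuclear norm sums them instead, yielding the standard
    convex surrogate used in low-rank matrix recovery.
    """
    return np.linalg.svd(A, compute_uv=False).sum()

A = np.outer([1.0, 2.0], [3.0, 4.0])  # rank-1 matrix u vᵀ
print(np.linalg.matrix_rank(A), nuclear_norm(A))
```

For this rank-1 matrix the single singular value is ‖u‖·‖v‖ = √5 · 5, so the nuclear norm equals 5√5 while the rank is 1.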
no code implementations • 9 Apr 2013 • Jie Cheng, Tianxi Li, Elizaveta Levina, Ji Zhu
While graphical models for continuous data (Gaussian graphical models) and discrete data (Ising models) have been extensively studied, there is little work on graphical models linking both continuous and discrete variables (mixed data), which are common in many scientific applications.
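In the Gaussian case, the graph structure lives in the precision (inverse covariance) matrix: a zero entry (i, j) means X_i and X_j are conditionally independent given the rest. A sketch for a three-variable chain X1 - X2 - X3, where the (1, 3) entry should vanish (standard Gaussian graphical model fact, not this paper's mixed-data estimator):

```python
import numpy as np

# Simulate a Gaussian chain: X1 -> X2 -> X3, so X1 ⟂ X3 | X2.
rng = np.random.default_rng(0)
n = 200_000
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(size=n)      # X2 depends on X1
x3 = x2 + rng.normal(size=n)      # X3 depends on X2 only
data = np.stack([x1, x2, x3], axis=1)

# The estimated precision matrix is (nearly) tridiagonal: the
# (1, 3) entry is ~0, encoding the missing edge X1 - X3.
precision = np.linalg.inv(np.cov(data, rowvar=False))
print(np.round(precision, 2))
```

Extending this edge-reading from precision-matrix zeros to a joint model over Gaussian and Ising-type variables is exactly the mixed-data setting the abstract describes.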