no code implementations • 5 Jun 2024 • Dominik Scheuble, Chenyang Lei, Seung-Hwan Baek, Mario Bijelic, Felix Heide
We leverage polarimetric wavefronts to estimate normals, distance, and material properties in outdoor scenarios with a novel learned reconstruction method.
1 code implementation • 3 Jun 2024 • Xiao Chen, Xudong Jiang, Yunkang Tao, Zhen Lei, Qing Li, Chenyang Lei, Zhaoxiang Zhang
However, incorporating the raw user guidance naively into the existing reflection removal network does not result in performance gains.
no code implementations • 27 May 2024 • Linhan Wang, Kai Cheng, Shuo Lei, Shengkun Wang, Wei Yin, Chenyang Lei, Xiaoxiao Long, Chang-Tien Lu
We present DC-Gaussian, a new method for generating novel views from in-vehicle dash cam videos.
no code implementations • 8 Apr 2024 • Xiaoyan Cong, Yue Wu, Qifeng Chen, Chenyang Lei
Unlike most previous end-to-end automatic colorization algorithms, our framework allows for iterative and localized modifications of the colorization results because we explicitly model the coloring samples.
no code implementations • 5 Apr 2024 • Kei Ikemura, Yiming Huang, Felix Heide, Zhaoxiang Zhang, Qifeng Chen, Chenyang Lei
Existing depth sensors are imperfect and may provide inaccurate depth values in challenging scenarios, such as in the presence of transparent or reflective objects.
no code implementations • 21 Dec 2023 • Ilya Chugunov, David Shustin, Ruyu Yan, Chenyang Lei, Felix Heide
Each photo in an image burst can be considered a sample of a complex 3D scene: the product of parallax, diffuse and specular materials, scene motion, and illuminant variation.
no code implementations • 5 Aug 2023 • Praneeth Chakravarthula, Jipeng Sun, Xiao Li, Chenyang Lei, Gene Chou, Mario Bijelic, Johannes Froesch, Arka Majumdar, Felix Heide
The optical array is embedded on a metasurface that, at 700 nm height, is flat and sits on the sensor cover glass at 2.5 mm focal distance from the sensor.
1 code implementation • ICCV 2023 • Chenyang Qi, Xiaodong Cun, Yong Zhang, Chenyang Lei, Xintao Wang, Ying Shan, Qifeng Chen
We also have a better zero-shot shape-aware editing ability based on the text-to-video model.
1 code implementation • CVPR 2023 • Chenyang Lei, Xuanchi Ren, Zhaoxiang Zhang, Qifeng Chen
Prior work usually requires specific guidance such as the flickering frequency, manual annotations, or extra consistent videos to remove the flicker.
1 code implementation • ICCV 2023 • Liyi Chen, Chenyang Lei, Ruihuang Li, Shuai Li, Zhaoxiang Zhang, Lei Zhang
Without introducing any external supervision and human priors, the proposed FPR effectively suppresses wrong activations from the background objects.
Weakly-Supervised Semantic Segmentation
1 code implementation • ICCV 2023 • Huimin Wu, Chenyang Lei, Xiao Sun, Peng-Shuai Wang, Qifeng Chen, Kwang-Ting Cheng, Stephen Lin, Zhirong Wu
Self-supervised representation learning follows a paradigm of withholding some part of the data and tasking the network to predict it from the remaining part.
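The withhold-and-predict paradigm described above can be sketched in a few lines. This is a minimal illustrative example, not the paper's method: the function name, the random masking scheme, and the 50% ratio are all assumptions chosen for clarity.

```python
import numpy as np

def mask_and_target(x: np.ndarray, mask_ratio: float = 0.5, seed: int = 0):
    """Randomly withhold a fraction of the data items and return the visible
    part (the network's input) and the held-out part (its prediction target).
    Hypothetical helper for illustration only."""
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    idx = rng.permutation(n)
    n_masked = int(n * mask_ratio)
    masked_idx, visible_idx = idx[:n_masked], idx[n_masked:]
    visible = x[visible_idx]  # what the encoder sees
    target = x[masked_idx]    # what the model is trained to predict
    return visible, target, masked_idx
```

A training loop would then feed `visible` to the network and score its prediction of `target`; the masking itself is the only source of supervision.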
1 code implementation • CVPR 2023 • Jiaxin Xie, Hao Ouyang, Jingtan Piao, Chenyang Lei, Qifeng Chen
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views while preserving specific details of the input image.
1 code implementation • 5 Nov 2022 • Chenyang Lei, Xudong Jiang, Qifeng Chen
We propose a simple yet effective reflection-free cue for robust reflection removal from a pair of flash and ambient (no-flash) images.
1 code implementation • 27 Jan 2022 • Chenyang Lei, Yazhou Xing, Hao Ouyang, Qifeng Chen
A progressive propagation strategy with pseudo labels is also proposed to enhance DVP's performance on video propagation.
1 code implementation • CVPR 2022 • Chenyang Lei, Chenyang Qi, Jiaxin Xie, Na Fan, Vladlen Koltun, Qifeng Chen
We present a new data-driven approach with physics-based priors to scene-level normal estimation from a single polarization image.
no code implementations • 20 Aug 2021 • Chenyang Lei, Yue Wu, Qifeng Chen
We present a novel approach to automatic image colorization by imitating the imagination process of human experts.
no code implementations • 7 Aug 2021 • Chenyang Lei, Xuhua Huang, Chenyang Qi, Yankun Zhao, Wenxiu Sun, Qiong Yan, Qifeng Chen
Due to the lack of a large-scale reflection removal dataset with diverse real-world scenes, many existing reflection removal methods are trained on synthetic data plus a small amount of real-world data, which makes it difficult to evaluate the strengths or weaknesses of different reflection removal methods thoroughly.
1 code implementation • CVPR 2021 • Hao Ouyang, Zifan Shi, Chenyang Lei, Ka Lung Law, Qifeng Chen
To facilitate the learning of a simulator model, we collect a dataset of 10,000 raw images of 450 scenes with different exposure settings.
1 code implementation • CVPR 2021 • Chenyang Lei, Qifeng Chen
The flash-only image is equivalent to an image taken in a dark environment with only a flash on.
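The equivalence above suggests a simple construction: in linear raw space, a flash photo is approximately the sum of ambient light and flash light, so subtracting the ambient raw image isolates the flash contribution. The sketch below is a minimal illustration under that linearity assumption; the function name and interface are hypothetical, not taken from the paper's code.

```python
import numpy as np

def flash_only_image(flash_raw: np.ndarray, ambient_raw: np.ndarray) -> np.ndarray:
    """Approximate the flash-only image by pixel-wise subtraction of the
    ambient (no-flash) raw image from the flash raw image.

    Assumes both inputs are linear raw intensities captured from the same
    viewpoint with the same exposure settings."""
    diff = flash_raw.astype(np.float64) - ambient_raw.astype(np.float64)
    # Clip small negatives caused by noise; a flash cannot remove light.
    return np.clip(diff, 0.0, None)
```

Because reflections are lit by ambient light rather than the flash, they largely cancel in this difference image, which is what makes it a reflection-free cue.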
2 code implementations • NeurIPS 2020 • Chenyang Lei, Yazhou Xing, Qifeng Chen
Extensive quantitative and perceptual experiments show that our approach outperforms state-of-the-art methods on blind video temporal consistency.
1 code implementation • CVPR 2020 • Chenyang Lei, Xuhua Huang, Mengdi Zhang, Qiong Yan, Wenxiu Sun, Qifeng Chen
We present a novel formulation for removing reflections from polarized images in the wild.
1 code implementation • 30 Dec 2019 • Jiaxin Xie, Chenyang Lei, Zhuwen Li, Li Erran Li, Qifeng Chen
Our flow-to-depth layer is differentiable, and thus we can refine camera poses by maximizing the aggregated confidence in the camera pose refinement module.
4 code implementations • CVPR 2019 • Chenyang Lei, Qifeng Chen
We present a fully automatic approach to video colorization with self-regularization and diversity.