no code implementations • ECCV 2020 • Zhong Li, Yu Ji, Jingyi Yu, Jinwei Ye
In this paper, we present a PIV solution that uses a compact lenslet-based light field camera to track dense particles floating in the fluid and reconstruct the 3D fluid flow.
1 code implementation • 31 May 2024 • Sijin Chen, Xin Chen, Anqi Pang, Xianfang Zeng, Wei Cheng, Yijun Fu, Fukun Yin, Yanru Wang, Zhibin Wang, Chi Zhang, Jingyi Yu, Gang Yu, Bin Fu, Tao Chen
The polygon mesh representation of 3D data offers great flexibility, fast rendering speed, and storage efficiency, making it widely preferred in various applications.
no code implementations • 11 May 2024 • Qing Wu, Xu Guo, Lixuan Chen, Dongming He, Hongjiang Wei, Xudong Wang, S. Kevin Zhou, Yifeng Zhang, Jingyi Yu, Yuyao Zhang
Specifically, we decompose the energy-dependent LACs into energy-independent densities and energy-dependent mass attenuation coefficients (MACs) by fully considering the physical model of X-ray absorption.
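The decomposition above follows the standard polychromatic Beer-Lambert model, in which the energy-dependent linear attenuation coefficient factors as LAC(E) = density × MAC(E). As a rough, self-contained illustration of that physical model (all material values below are made up for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical two-material example: energies in keV, path lengths in cm.
energies = np.array([40.0, 60.0, 80.0])          # sampled X-ray energies
spectrum = np.array([0.2, 0.5, 0.3])             # normalized source spectrum S(E)

# Energy-independent densities rho (g/cm^3) and energy-dependent
# mass attenuation coefficients MAC(E) (cm^2/g) for each material,
# so that LAC(E) = rho * MAC(E).
rho = np.array([1.0, 4.5])                       # e.g. soft tissue, metal (illustrative)
mac = np.array([[0.25, 0.20, 0.18],              # MAC of material 0 at each energy
                [3.00, 1.20, 0.60]])             # MAC of material 1 at each energy
lengths = np.array([10.0, 0.5])                  # ray intersection length per material

# Polychromatic Beer-Lambert: I = sum_E S(E) * exp(-sum_m rho_m * MAC_m(E) * l_m)
lac = rho[:, None] * mac                         # (materials, energies)
line_integral = (lac * lengths[:, None]).sum(axis=0)
intensity = (spectrum * np.exp(-line_integral)).sum()
print(intensity)
```

The measured detector intensity thus mixes all energies, which is exactly why separating densities from MACs is useful for modeling beam-dependent effects.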
no code implementations • 27 Apr 2024 • Chenhe Du, Xiyue Lin, Qing Wu, Xuanyu Tian, Ying Su, Zhe Luo, Hongjiang Wei, S. Kevin Zhou, Jingyi Yu, Yuyao Zhang
However, the unsupervised nature of INR architecture imposes limited constraints on the solution space, particularly for the highly ill-posed reconstruction task posed by LACT and ultra-SVCT.
no code implementations • 15 Apr 2024 • Jiadi Cui, Junming Cao, Fuqiang Zhao, Zhipeng He, Yifan Chen, Yuhui Zhong, Lan Xu, Yujiao Shi, Yingliang Zhang, Jingyi Yu
Large garages are ubiquitous yet intricate scenes that present unique challenges due to their monotonous colors, repetitive patterns, reflective surfaces, and transparent vehicle glass.
1 code implementation • 7 Apr 2024 • Jiangnan Tang, Jingya Wang, Kaiyang Ji, Lan Xu, Jingyi Yu, Ye Shi
One of the biggest challenges in this task is the one-to-many mapping from sparse observations to dense full-body motions, which introduces inherent ambiguities.
no code implementations • 30 Mar 2024 • Juze Zhang, Jingyan Zhang, Zining Song, Zhanhe Shi, Chengfeng Zhao, Ye Shi, Jingyi Yu, Lan Xu, Jingya Wang
Humans naturally interact with both others and the surrounding multiple objects, engaging in various social activities.
no code implementations • 24 Mar 2024 • Jie Tian, Lingxiao Yang, Ran Ji, Yuexin Ma, Lan Xu, Jingyi Yu, Ye Shi, Jingya Wang
Here, the object motion diffusion model generates sequences of object motions based on gaze conditions, while the hand motion diffusion model produces hand motions based on the generated object motion.
no code implementations • 17 Mar 2024 • Qianyang Wu, Ye Shi, Xiaoshui Huang, Jingyi Yu, Lan Xu, Jingya Wang
This paper addresses new methodologies to deal with the challenging task of generating dynamic Human-Object Interactions from textual descriptions (Text2HOI).
no code implementations • 27 Feb 2024 • Yiming Ren, Xiao Han, Chengfeng Zhao, Jingya Wang, Lan Xu, Jingyi Yu, Yuexin Ma
For human-centric large-scale scenes, fine-grained modeling for 3D human global pose and shape is significant for scene understanding and can benefit many real-world applications.
no code implementations • 21 Feb 2024 • Yumeng Liu, Yaxun Yang, Youzhuo Wang, Xiaofei Wu, Jiamin Wang, Yichen Yao, Sören Schwertfeger, Sibei Yang, Wenping Wang, Jingyi Yu, Xuming He, Yuexin Ma
In this paper, we introduce RealDex, a pioneering dataset capturing authentic dexterous hand grasping motions infused with human behavioral patterns, enriched by multi-view and multimodal visual data.
no code implementations • 16 Feb 2024 • Haimin Luo, Min Ouyang, Zijun Zhao, Suyi Jiang, Longwen Zhang, Qixuan Zhang, Wei Yang, Lan Xu, Jingyi Yu
Hairstyle reflects culture and ethnicity at first glance.
1 code implementation • 5 Feb 2024 • Lingxiao Yang, Shutong Ding, Yifan Cai, Jingyi Yu, Jingya Wang, Ye Shi
We theoretically show the existence of manifold deviation by establishing a certain lower bound for the estimation error of the loss guidance.
no code implementations • 3 Feb 2024 • Youjia Wang, Yiwen Wu, Hengan Zhou, Hongyang Lin, Xingyue Peng, Yingwenqi Jiang, Yingsheng Zhu, Guanpeng Long, Yatu Zhang, Jingya Wang, Lan Xu, Jingyi Yu
In this paper, we propose IMUSIC to fill this gap: a novel path for facial expression capture using purely IMU signals, a marked departure from previous visual solutions. The key design of IMUSIC is a trilogy.
no code implementations • 29 Jan 2024 • Kai He, Kaixin Yao, Qixuan Zhang, Lingjie Liu, Jingyi Yu, Lan Xu
We first introduce SewingGPT, a GPT-based architecture integrating cross-attention with text-conditioned embedding to generate sewing patterns with text guidance.
no code implementations • 28 Jan 2024 • Qingcheng Zhao, Pengyu Long, Qixuan Zhang, Dafei Qin, Han Liang, Longwen Zhang, Yingliang Zhang, Jingyi Yu, Lan Xu
The synthesis of 3D facial animations from speech has garnered considerable attention.
no code implementations • 15 Dec 2023 • Suyi Jiang, Haimin Luo, Haoran Jiang, Ziyu Wang, Jingyi Yu, Lan Xu
Recent months have witnessed rapid progress in 3D generation based on diffusion models.
no code implementations • 14 Dec 2023 • Han Liang, Jiacheng Bao, Ruichi Zhang, Sihan Ren, Yuecheng Xu, Sibei Yang, Xin Chen, Jingyi Yu, Lan Xu
At the subsequent fine-tuning stage, we introduce motion ControlNet, which incorporates text prompts as conditioning information through a trainable copy of the pre-trained model and the proposed novel Mixture-of-Controllers (MoC) block.
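The "trainable copy" pattern referenced here is the generic ControlNet mechanism: freeze the pre-trained block, clone it for the conditioning branch, and gate that branch with a zero-initialized projection so training starts exactly from the pre-trained behavior. A minimal sketch of that mechanism with a toy linear layer (not the paper's motion ControlNet):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pre-trained "layer": a fixed linear map (stand-in for a model block).
W_frozen = rng.normal(size=(8, 8))

# Trainable copy initialized from the frozen weights, plus a zero-initialized
# output projection: at initialization the control branch contributes nothing.
W_copy = W_frozen.copy()
W_zero = np.zeros((8, 8))

def controlled_layer(x, condition):
    base = x @ W_frozen                  # frozen path, untouched by fine-tuning
    control = (x + condition) @ W_copy   # trainable copy sees the conditioning signal
    return base + control @ W_zero       # zero projection gates the control branch

x = rng.normal(size=(4, 8))
cond = rng.normal(size=(4, 8))
# At initialization, the output is identical to the frozen model's output.
assert np.allclose(controlled_layer(x, cond), x @ W_frozen)
```

During fine-tuning, only `W_copy` and `W_zero` would receive gradients, letting the condition gradually steer the frozen model.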
1 code implementation • 13 Dec 2023 • Wenqian Zhang, Molin Huang, Yuxuan Zhou, Juze Zhang, Jingyi Yu, Jingya Wang, Lan Xu
We further provide a strong baseline method, BOTH2Hands, for the novel task: generating vivid two-hand motions from both implicit body dynamics and explicit text prompts.
no code implementations • 10 Dec 2023 • Chengfeng Zhao, Juze Zhang, Jiashen Du, Ziwei Shan, Junye Wang, Jingyi Yu, Jingya Wang, Lan Xu
In this paper, we present I'm-HOI, a monocular scheme to faithfully capture the 3D motions of both the human and the object in a novel setting: using a minimal setup of an RGB camera and an object-mounted Inertial Measurement Unit (IMU).
no code implementations • 8 Dec 2023 • Pei Lin, Sihang Xu, Hongdi Yang, Yiran Liu, Xin Chen, Jingya Wang, Jingyi Yu, Lan Xu
We further present a strong baseline method HandDiffuse for the controllable motion generation of interacting hands using various controllers.
no code implementations • 6 Dec 2023 • Yuheng Jiang, Zhehao Shen, Penghao Wang, Zhuo Su, Yu Hong, Yingliang Zhang, Jingyi Yu, Lan Xu
Then, we utilize a 4D Gaussian optimization scheme with adaptive spatial-temporal regularizers to effectively balance the non-rigid prior and Gaussian updating.
no code implementations • 4 Dec 2023 • Jiakai Zhang, Qihe Chen, Yan Zeng, Wenyuan Gao, Xuming He, Zhijie Liu, Jingyi Yu
To address this, we introduce physics-informed generative cryo-electron microscopy (GenEM), which for the first time integrates physics-based cryo-EM simulation with generative unpaired noise translation to produce physically correct synthetic cryo-EM datasets with realistic noise.
no code implementations • 3 Dec 2023 • Liao Wang, Kaixin Yao, Chengcheng Guo, Zhirui Zhang, Qiang Hu, Jingyi Yu, Lan Xu, Minye Wu
In this paper, we introduce VideoRF, the first approach to enable real-time streaming and rendering of dynamic radiance fields on mobile platforms.
1 code implementation • 20 Oct 2023 • Jingyi Yu, Zizhao Zhang, Shengfu Xia, Jizhang Sang
We extract more accurate bird's eye view (BEV) features guided by their linear structure, and then propose a hierarchical sparse map representation to further leverage the scalability of vectorized map elements; based on this representation, we design a progressive decoding mechanism and a supervision strategy.
no code implementations • 9 Oct 2023 • Ruiyang Liu, Jinxu Xiang, Bowen Zhao, Ran Zhang, Jingyi Yu, Changxi Zheng
To tackle the problem of efficiently editing neural implicit fields, we introduce Neural Impostor, a hybrid representation incorporating an explicit tetrahedral mesh alongside a multigrid implicit field designated for each tetrahedron within the explicit mesh.
1 code implementation • ICCV 2023 • Zhang Chen, Zhong Li, Liangchen Song, Lele Chen, Jingyi Yu, Junsong Yuan, Yi Xu
The spatial positions of their neural features are fixed on grid nodes and cannot well adapt to target signals.
2 code implementations • NeurIPS 2023 • Hanzhuo Huang, Yufan Feng, Cheng Shi, Lan Xu, Jingyi Yu, Sibei Yang
Text-to-video is a rapidly growing research area that aims to generate a semantically accurate, identity-consistent, and temporally coherent sequence of frames that aligns with the input text prompt.
no code implementations • ICCV 2023 • Jiajin Tang, Ge Zheng, Jingyi Yu, Sibei Yang
Its challenge lies in the object categories available for the task being too diverse to be covered by the closed vocabulary of traditional object detection.
1 code implementation • ICCV 2023 • Yiteng Xu, Peishan Cong, Yichen Yao, Runnan Chen, Yuenan Hou, Xinge Zhu, Xuming He, Jingyi Yu, Yuexin Ma
Human-centric scene understanding is significant for real-world applications, but it is extremely challenging due to the existence of diverse human poses and actions, complex human-environment interactions, severe occlusions in crowds, etc.
1 code implementation • NeurIPS 2023 • Qing Wu, Lixuan Chen, Ce Wang, Hongjiang Wei, S. Kevin Zhou, Jingyi Yu, Yuyao Zhang
In this work, we present a novel Polychromatic neural representation (Polyner) to tackle the challenging problem of CT imaging when metallic implants exist within the human body.
2 code implementations • NeurIPS 2023 • Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, Tao Chen
Building upon this "motion vocabulary", we perform language modeling on both motion and text in a unified manner, treating human motion as a specific language.
Ranked #4 on Motion Captioning on HumanML3D
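The "motion vocabulary" idea above rests on vector quantization: continuous per-frame motion features are snapped to their nearest entries in a learned codebook, yielding discrete tokens a language model can consume alongside text. A generic sketch of that quantization step (codebook and features are random placeholders, not the paper's learned values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned codebook of K "motion words", each a D-dim feature.
K, D = 16, 4
codebook = rng.normal(size=(K, D))

# A short sequence of continuous motion features (6 frames after encoding).
motion_feats = rng.normal(size=(6, D))

# Vector quantization: map each frame feature to its nearest codebook entry,
# turning continuous motion into a discrete token sequence.
dists = ((motion_feats[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
tokens = dists.argmin(axis=1)        # shape (6,), integer ids in [0, K)
print(tokens)
```

Once motion is tokenized this way, "language modeling on both motion and text in a unified manner" amounts to training a single autoregressive model over the concatenated token streams.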
no code implementations • 21 Apr 2023 • Binbin Huang, Xingyue Peng, Siyuan Shen, Suan Xia, Ruiqian Li, Yanhua Yu, Yuehan Wang, Shenghua Gao, Wenzheng Chen, Shiying Li, Jingyi Yu
The core of our method is to place the object near diffuse walls and augment the LOS scan in the front view with NLOS scans from the surrounding walls, which serve as virtual "mirrors" that trap light toward the object.
1 code implementation • 12 Apr 2023 • Han Liang, Wenqian Zhang, Wenxuan Li, Jingyi Yu, Lan Xu
Then, we propose a novel representation for motion input in our interaction diffusion model, which explicitly formulates the global relations between the two performers in the world frame.
Ranked #3 on Motion Synthesis on InterHuman
no code implementations • CVPR 2023 • Liao Wang, Qiang Hu, Qihan He, Ziyu Wang, Jingyi Yu, Tinne Tuytelaars, Lan Xu, Minye Wu
The success of the Neural Radiance Fields (NeRFs) for modeling and free-view rendering static objects has inspired numerous attempts on dynamic scenes.
no code implementations • 4 Apr 2023 • Wuwei Ren, Siyuan Shen, Linlin Li, Shengyu Gao, Yuehan Wang, Liangtao Gu, Shiying Li, Xingjun Zhu, Jiahua Jiang, Jingyi Yu
Light scattering poses a major obstacle to imaging objects seated deep in turbid media, such as biological tissue and foggy air.
no code implementations • ICCV 2023 • Youjia Zhang, Teng Xu, Junqing Yu, Yuteng Ye, Junle Wang, Yanqing Jing, Jingyi Yu, Wei Yang
Recovering the physical attributes of an object's appearance from its images captured under an unknown illumination is challenging yet essential for photo-realistic rendering.
no code implementations • 28 Mar 2023 • Xinhang Liu, Yan Zeng, Yifan Qin, Hao Li, Jiakai Zhang, Lan Xu, Jingyi Yu
Cryo-electron microscopy (cryo-EM) allows for the high-resolution reconstruction of 3D structures of proteins and other biomolecules.
no code implementations • 7 Mar 2023 • Haimin Luo, Siyuan Zhang, Fuqiang Zhao, Haotian Jing, Penghao Wang, Zhenxiao Yu, Dongxue Yan, Junran Ding, Boyuan Zhang, Qiang Hu, Shu Yin, Lan Xu, Jingyi Yu
Using such a neural-rendering-compatible cloud platform, we further showcase the capabilities of our cloud radiance rendering through a series of applications, including cloud VR/AR rendering.
1 code implementation • 2 Feb 2023 • Juze Zhang, Ye Shi, Yuexin Ma, Lan Xu, Jingyi Yu, Jingya Wang
This paper presents an inverse kinematic optimization layer (IKOL) for 3D human pose and shape estimation that leverages the strength of both optimization- and regression-based methods within an end-to-end framework.
Ranked #30 on 3D Human Pose Estimation on 3DPW
no code implementations • CVPR 2023 • Juze Zhang, Haimin Luo, Hongdi Yang, Xinru Xu, Qianyang Wu, Ye Shi, Jingyi Yu, Lan Xu, Jingya Wang
We construct a dense multi-view dome to acquire a complex human object interaction dataset, named HODome, that consists of ~75M frames of 10 subjects interacting with 23 objects.
no code implementations • CVPR 2023 • Taotao Zhou, Kai He, Di Wu, Teng Xu, Qixuan Zhang, Kuixiang Shao, Wenzheng Chen, Lan Xu, Jingyi Yu
UltraStage will be publicly available to the community to stimulate significant future developments in various human modeling and rendering tasks.
1 code implementation • CVPR 2023 • Xin Chen, Biao Jiang, Wen Liu, Zilong Huang, Bin Fu, Tao Chen, Jingyi Yu, Gang Yu
We study a challenging task, conditional human motion generation, which produces plausible human motion sequences according to various conditional inputs, such as action classes or textual descriptors.
Ranked #2 on Motion Synthesis on HumanAct12
no code implementations • 30 Nov 2022 • Peishan Cong, Yiteng Xu, Yiming Ren, Juze Zhang, Lan Xu, Jingya Wang, Jingyi Yu, Yuexin Ma
Motivated by this, we propose a monocular camera and single LiDAR-based method for 3D multi-person pose estimation in large-scale scenes, which is easy to deploy and insensitive to light.
no code implementations • 23 Oct 2022 • Qing Wu, Xin Li, Hongjiang Wei, Jingyi Yu, Yuyao Zhang
NeRF-based SVCT methods represent the desired CT image as a continuous function of spatial coordinates and train a Multi-Layer Perceptron (MLP) to learn the function by minimizing loss on the SV sinogram.
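The coordinate-MLP formulation described here is generic: an MLP maps spatial coordinates to attenuation values, and training minimizes the discrepancy between simulated projections of that function and the measured sinogram. A toy sketch with random (untrained) weights and a single angle-0 parallel-beam projection, purely to show the data flow (not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny coordinate MLP f(x, y) -> attenuation value; random weights stand in
# for trained ones (the real method optimizes them against the sinogram).
W1, b1 = rng.normal(size=(2, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def mlp(coords):
    h = np.tanh(coords @ W1 + b1)
    return (h @ W2 + b2).squeeze(-1)

# Query the MLP on a regular grid, then form one parallel-beam projection
# at angle 0 by summing along rays (rows of the grid).
n = 32
xs, ys = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")
coords = np.stack([xs, ys], axis=-1).reshape(-1, 2)
image = mlp(coords).reshape(n, n)
projection = image.sum(axis=0)       # simulated detector measurement, shape (n,)

# Training would minimize the gap to the measured sparse-view sinogram row:
measured = rng.normal(size=n)        # placeholder measurement
loss = ((projection - measured) ** 2).mean()
print(loss)
```

Because the image is a continuous function of coordinates, it can afterwards be queried at any resolution, which is what makes the representation attractive for sparse-view CT.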
1 code implementation • 18 Sep 2022 • Fuqiang Zhao, Yuheng Jiang, Kaixin Yao, Jiakai Zhang, Liao Wang, Haizhao Dai, Yuhui Zhong, Yingliang Zhang, Minye Wu, Lan Xu, Jingyi Yu
In this paper, we present a comprehensive neural approach for high-quality reconstruction, compression, and rendering of human performances from dense multi-view videos.
no code implementations • 14 Sep 2022 • Zesong Qiu, Yuwei Li, Dongming He, Qixuan Zhang, Longwen Zhang, Yinghao Zhang, Jingya Wang, Lan Xu, Xudong Wang, Yuyao Zhang, Jingyi Yu
Named after the fossils of one of the oldest known human ancestors, our LUCY dataset contains high-quality Computed Tomography (CT) scans of the complete human head before and after orthognathic surgeries, critical for evaluating surgery results.
1 code implementation • 12 Sep 2022 • Qing Wu, Ruimin Feng, Hongjiang Wei, Jingyi Yu, Yuyao Zhang
Compared with recent related works that solve similar problems using implicit neural representations (INRs), our essential contribution is an effective and simple re-projection strategy that pushes tomographic image reconstruction quality beyond that of supervised deep-learning CT reconstruction methods.
no code implementations • 9 Sep 2022 • Ziyu Wang, Yu Deng, Jiaolong Yang, Jingyi Yu, Xin Tong
Experiments show that our method can successfully learn the generative model from unstructured monocular images and well disentangle the shape and appearance for objects (e.g., chairs) with large topological variance.
no code implementations • 16 Jul 2022 • Juze Zhang, Jingya Wang, Ye Shi, Fei Gao, Lan Xu, Jingyi Yu
This method first uses 2.5D pose and geometry information to infer camera-centric root depths in a forward pass, and then exploits the root depths to further improve representation learning of 2.5D pose estimation in a backward pass.
no code implementations • 3 Jul 2022 • Youjia Wang, Teng Xu, Yiwen Wu, Minzhang Li, Wenzheng Chen, Lan Xu, Jingyi Yu
We extend Total Relighting to fix this problem by unifying its multi-view input normal maps with the physical face model.
no code implementations • 30 May 2022 • Yiming Ren, Chengfeng Zhao, Yannan He, Peishan Cong, Han Liang, Jingyi Yu, Lan Xu, Yuexin Ma
We propose a multi-sensor fusion method for capturing challenging 3D human motions with accurate consecutive local poses and global trajectories in large-scale scenarios, using only a single LiDAR and four IMUs, which are set up conveniently and worn lightly.
1 code implementation • 26 May 2022 • Binbin Huang, Xinhao Yan, Anpei Chen, Shenghua Gao, Jingyi Yu
We present an efficient frequency-based neural representation termed PREF: a shallow MLP augmented with a phasor volume that covers a significantly broader spectrum than previous Fourier feature mappings or positional encodings.
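For reference, the Fourier feature mapping this entry compares against is the classic NeRF-style positional encoding: each scalar coordinate is lifted to sines and cosines at geometrically spaced frequencies. A minimal implementation of that baseline (the band count is an illustrative choice):

```python
import numpy as np

def fourier_features(x, num_bands=6):
    """NeRF-style positional encoding: map each coordinate to
    [sin(2^k * pi * x), cos(2^k * pi * x)] for k = 0..num_bands-1."""
    freqs = 2.0 ** np.arange(num_bands) * np.pi   # (num_bands,)
    angles = x[..., None] * freqs                 # (..., dims, num_bands)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)

pts = np.array([[0.1, 0.2, 0.3]])                 # one 3D point
enc = fourier_features(pts)
print(enc.shape)                                  # (1, 3 * 2 * 6) = (1, 36)
```

The encoding lets a shallow MLP fit high-frequency signals; frequency-domain volumes such as PREF's phasor volume aim at the same goal with a richer, learnable spectrum.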
no code implementations • CVPR 2022 • Jialian Li, Jingyi Zhang, Zhiyong Wang, Siqi Shen, Chenglu Wen, Yuexin Ma, Lan Xu, Jingyi Yu, Cheng Wang
Quantitative and qualitative experiments show that our method outperforms the techniques based only on RGB images.
Ranked #3 on 3D Human Pose Estimation on SLOPER4D (using extra training data)
no code implementations • 17 Mar 2022 • Han Liang, Yannan He, Chengfeng Zhao, Mutian Li, Jingya Wang, Jingyi Yu, Lan Xu
Monocular 3D motion capture (mocap) is beneficial to many applications.
Ranked #1 on Pose Estimation on 3DPW
2 code implementations • 17 Mar 2022 • Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, Hao Su
We demonstrate that applying traditional CP decomposition -- that factorizes tensors into rank-one components with compact vectors -- in our framework leads to improvements over vanilla NeRF.
Ranked #3 on Novel View Synthesis on X3D
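The CP decomposition mentioned above is standard multilinear algebra: a 3D tensor is approximated as a sum of R rank-one terms, each the outer product of three vectors, so storage scales with the vector sizes rather than the dense grid. A small self-contained sketch (shapes and rank are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-R CP factors for a 3D tensor of shape (I, J, K): three factor
# matrices whose rank-one outer products sum to the reconstruction.
I, J, K, R = 5, 6, 7, 3
A = rng.normal(size=(I, R))
B = rng.normal(size=(J, R))
C = rng.normal(size=(K, R))

# T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
T = np.einsum("ir,jr,kr->ijk", A, B, C)

# Storage: compact factor vectors vs. the dense tensor.
print((A.size + B.size + C.size, T.size))         # (54, 210)
```

The compression advantage grows quickly with resolution, which is why factorized feature grids can outperform dense voxel grids in radiance-field models.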
1 code implementation • CVPR 2022 • Yudi Dai, Yitai Lin, Chenglu Wen, Siqi Shen, Lan Xu, Jingyi Yu, Yuexin Ma, Cheng Wang
We propose Human-centered 4D Scene Capture (HSC4D) to accurately and efficiently create a dynamic digital world, containing large-scale indoor-outdoor scenes, diverse human motions, and rich interactions between humans and environments.
no code implementations • 8 Mar 2022 • Ziyu Wang, Wei Yang, Junming Cao, Lan Xu, Junqing Yu, Jingyi Yu
We present a novel neural refractive field (NeReF) to recover the wavefront of transparent fluids by simultaneously estimating the surface position and normal of the fluid front.
no code implementations • CVPR 2022 • Yuheng Jiang, Suyi Jiang, Guoxing Sun, Zhuo Su, Kaiwen Guo, Minye Wu, Jingyi Yu, Lan Xu
In this paper, we propose NeuralHOFusion, a neural approach for volumetric human-object capture and rendering using sparse consumer RGBD sensors.
no code implementations • 22 Feb 2022 • Yingqian Wang, Longguang Wang, Gaochang Wu, Jungang Yang, Wei An, Jingyi Yu, Yulan Guo
In this paper, we propose a generic mechanism to disentangle these coupled information for LF image processing.
no code implementations • CVPR 2022 • Liao Wang, Jiakai Zhang, Xinhang Liu, Fuqiang Zhao, Yanshun Zhang, Yingliang Zhang, Minye Wu, Lan Xu, Jingyi Yu
In this paper, we present a novel Fourier PlenOctree (FPO) technique to tackle efficient neural modeling and real-time rendering of dynamic scenes captured under the free-view video (FVV) setting.
no code implementations • 12 Feb 2022 • Jiakai Zhang, Liao Wang, Xinhang Liu, Fuqiang Zhao, Minzhang Li, Haizhao Dai, Boyuan Zhang, Wei Yang, Lan Xu, Jingyi Yu
We further develop a hybrid neural-rasterization rendering framework to support consumer-level VR headsets so that the aforementioned volumetric video viewing and editing, for the first time, can be conducted immersively in virtual 3D space.
1 code implementation • 11 Feb 2022 • Haimin Luo, Teng Xu, Yuheng Jiang, Chenglin Zhou, Qiwei Qiu, Yingliang Zhang, Wei Yang, Lan Xu, Jingyi Yu
Our ARTEMIS enables interactive motion control, real-time animation, and photo-realistic rendering of furry animals.
no code implementations • 11 Feb 2022 • Longwen Zhang, Chuxiao Zeng, Qixuan Zhang, Hongyang Lin, Ruixiang Cao, Wei Yang, Lan Xu, Jingyi Yu
In this paper, we present a new learning-based, video-driven approach for generating dynamic facial geometries with high-quality physically-based assets.
no code implementations • 9 Feb 2022 • Yuwei Li, Longwen Zhang, Zesong Qiu, Yingwenqi Jiang, Nianyi Li, Yuexin Ma, Yuyao Zhang, Lan Xu, Jingyi Yu
Emerging Metaverse applications demand reliable, accurate, and photorealistic reproductions of human hands to perform sophisticated operations as if in the physical world.
no code implementations • 29 Dec 2021 • Zhengqing Pan, Ruiqian Li, Tian Gao, Zi Wang, Ping Liu, Siyuan Shen, Tao Wu, Jingyi Yu, Shiying Li
There has been an increasing interest in deploying non-line-of-sight (NLOS) imaging systems for recovering objects behind an obstacle.
no code implementations • CVPR 2022 • Fuqiang Zhao, Wei Yang, Jiakai Zhang, Pei Lin, Yingliang Zhang, Jingyi Yu, Lan Xu
The raw HumanNeRF can already produce reasonable rendering on sparse video inputs of unseen subjects and camera settings.
1 code implementation • 27 Oct 2021 • Qing Wu, Yuwei Li, Yawen Sun, Yan Zhou, Hongjiang Wei, Jingyi Yu, Yuyao Zhang
In the ArSSR model, the reconstruction of HR images with different up-scaling rates is defined as learning a continuous implicit voxel function from the observed LR images.
no code implementations • 10 Sep 2021 • Tobias Jacobs, Jingyi Yu, Julia Gastinger, Timo Sztyler
We present a novel methodology to build powerful predictive process models.
no code implementations • 5 Sep 2021 • Yuqi Ding, Zhang Chen, Yu Ji, Jingyi Yu, Jinwei Ye
Recovering 3D geometry of underwater scenes is challenging because of non-linear refraction of light at the water-air interface caused by the camera housing.
no code implementations • 12 Aug 2021 • Liao Wang, Ziyu Wang, Pei Lin, Yuheng Jiang, Xin Suo, Minye Wu, Lan Xu, Jingyi Yu
To fill this gap, in this paper we propose a neural interactive bullet-time generator (iButter) for photo-realistic human free-viewpoint rendering from dense RGB streams, which enables flexible and interactive design for human bullet-time visual effects.
no code implementations • 1 Aug 2021 • Guoxing Sun, Xin Chen, Yizhang Chen, Anqi Pang, Pei Lin, Yuheng Jiang, Lan Xu, Jingya Wang, Jingyi Yu
In this paper, we propose a neural human performance capture and rendering system to generate both high-quality geometry and photo-realistic texture of both humans and objects under challenging interaction scenarios in arbitrary novel views, from only sparse RGB streams.
no code implementations • 30 Jul 2021 • Youjia Wang, Taotao Zhou, Minzhang Li, Teng Xu, Minye Wu, Lan Xu, Jingyi Yu
We present a neural relighting and expression transfer technique to transfer the facial expressions from a source performer to a portrait video of a target performer while enabling dynamic relighting.
no code implementations • 14 Jul 2021 • Anqi Pang, Xin Chen, Haimin Luo, Minye Wu, Jingyi Yu, Lan Xu
To fill this gap, in this paper we propose a few-shot neural human rendering approach (FNHR) from only sparse RGBD inputs, which exploits the temporal and spatial redundancy to generate photo-realistic free-view output of human activities.
no code implementations • 29 Jun 2021 • Qing Wu, Yuwei Li, Lan Xu, Ruiming Feng, Hongjiang Wei, Qing Yang, Boliang Yu, Xiaozhao Liu, Jingyi Yu, Yuyao Zhang
To collect high-quality high-resolution (HR) MR images, we propose a novel image reconstruction network named IREM, which is trained on multiple low-resolution (LR) MR images and achieves an arbitrary up-sampling rate for HR image reconstruction.
1 code implementation • 21 Jun 2021 • Yuwei Li, Minye Wu, Yuyao Zhang, Lan Xu, Jingyi Yu
Hand modeling is critical for immersive VR/AR, action understanding, or human healthcare.
1 code implementation • 30 Apr 2021 • Jiakai Zhang, Xinhang Liu, Xinyi Ye, Fuqiang Zhao, Yanshun Zhang, Minye Wu, Yingliang Zhang, Lan Xu, Jingyi Yu
Such a layered representation supports full perception and realistic manipulation of the dynamic scene while still supporting a free viewing experience over a wide range.
1 code implementation • 23 Apr 2021 • Xin Chen, Anqi Pang, Wei Yang, Yuexin Ma, Lan Xu, Jingyi Yu
In this paper, we propose SportsCap -- the first approach for simultaneously capturing 3D human motions and understanding fine-grained actions from monocular challenging sports video input.
no code implementations • 6 Apr 2021 • Ziyu Wang, Liao Wang, Fuqiang Zhao, Minye Wu, Lan Xu, Jingyi Yu
In this paper, we propose MirrorNeRF, a one-shot neural portrait free-viewpoint rendering approach that uses a catadioptric imaging system with multiple sphere mirrors and a single high-resolution digital camera. It is the first to combine neural radiance fields with catadioptric imaging, enabling one-shot photo-realistic human portrait reconstruction and rendering in a low-cost, casual capture setting.
1 code implementation • 5 Apr 2021 • Haimin Luo, Anpei Chen, Qixuan Zhang, Bai Pang, Minye Wu, Lan Xu, Jingyi Yu
In this paper, we propose a novel scheme to generate opacity radiance fields with a convolutional neural renderer for fuzzy objects. It is the first to combine explicit opacity supervision and a convolutional mechanism within the neural radiance field framework, enabling high-quality appearance and globally consistent alpha mattes in arbitrary novel views.
1 code implementation • ICCV 2021 • Longwen Zhang, Qixuan Zhang, Minye Wu, Jingyi Yu, Lan Xu
In this paper, we propose a neural approach for real-time, high-quality and coherent video portrait relighting, which jointly models the semantic, temporal and lighting consistency using a new dynamic OLAT dataset.
1 code implementation • ICCV 2021 • Quan Meng, Anpei Chen, Haimin Luo, Minye Wu, Hao Su, Lan Xu, Xuming He, Jingyi Yu
We introduce GNeRF, a framework that marries Generative Adversarial Networks (GANs) with Neural Radiance Field (NeRF) reconstruction for complex scenarios with unknown or even randomly initialized camera poses.
2 code implementations • ICCV 2021 • Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, Hao Su
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
1 code implementation • 2 Jan 2021 • Siyuan Shen, Zi Wang, Ping Liu, Zhengqing Pan, Ruiqian Li, Tian Gao, Shiying Li, Jingyi Yu
We present a neural modeling framework for Non-Line-of-Sight (NLOS) imaging.
no code implementations • 13 Aug 2020 • Quan Meng, Jiakai Zhang, Qiang Hu, Xuming He, Jingyi Yu
We present a novel real-time line segment detection scheme called Line Graph Neural Network (LGNN).
1 code implementation • 7 Jul 2020 • Anpei Chen, Ruiyang Liu, Ling Xie, Zhang Chen, Hao Su, Jingyi Yu
To address this issue, we propose a SofGAN image generator to decouple the latent space of portraits into two subspaces: a geometry space and a texture space.
1 code implementation • 27 May 2020 • Xin Chen, Yuwei Li, Xi Luo, Tianjia Shao, Jingyi Yu, Kun Zhou, Youyi Zheng
We base our work on the assumption that most human-made objects are constituted by parts and these parts can be well represented by generalized primitives.
1 code implementation • 20 Apr 2020 • Xiaoxu Li, Dongliang Chang, Zhanyu Ma, Zheng-Hua Tan, Jing-Hao Xue, Jie Cao, Jingyi Yu, Jun Guo
A deep neural network of multiple nonlinear layers forms a large function space, which can easily lead to overfitting when it encounters small-sample data.
1 code implementation • 17 Dec 2019 • Yingqian Wang, Longguang Wang, Jungang Yang, Wei An, Jingyi Yu, Yulan Guo
Specifically, spatial and angular features are first separately extracted from input LFs, and then repetitively interacted to progressively incorporate spatial and angular information.
2 code implementations • CVPR 2020 • Zhang Chen, Anpei Chen, Guli Zhang, Chengyuan Wang, Yu Ji, Kiriakos N. Kutulakos, Jingyi Yu
We present a novel Relightable Neural Renderer (RNR) for simultaneous view synthesis and relighting using multi-view image inputs.
1 code implementation • 31 Aug 2019 • Jing Jin, Junhui Hou, Jie Chen, Huanqiang Zeng, Sam Kwong, Jingyi Yu
Specifically, the coarse sub-aperture image (SAI) synthesis module first explores the scene geometry from an unstructured, sparsely-sampled LF and leverages it to independently synthesize novel SAIs; a confidence-based blending strategy then fuses the information from different input SAIs, yielding an intermediate densely-sampled LF.
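Confidence-based blending of this kind typically reduces to a per-pixel weighted average: each candidate synthesis carries a confidence map, the confidences are normalized across candidates, and the outputs are summed. A generic sketch with random placeholder data (not the paper's network outputs):

```python
import numpy as np

rng = np.random.default_rng(0)

# N candidate syntheses of the same novel SAI (H x W), each with a
# per-pixel confidence score predicted alongside it.
N, H, W = 4, 8, 8
candidates = rng.normal(size=(N, H, W))
confidences = rng.normal(size=(N, H, W))

# Confidence-based blending: softmax the confidences across candidates,
# then take the per-pixel weighted sum.
weights = np.exp(confidences)
weights /= weights.sum(axis=0, keepdims=True)
blended = (weights * candidates).sum(axis=0)
print(blended.shape)                  # (8, 8)
```

The softmax normalization guarantees the weights sum to one at every pixel, so the blend stays in the convex hull of the candidate values.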
1 code implementation • 23 Jul 2019 • Jing Jin, Junhui Hou, Jie Chen, Sam Kwong, Jingyi Yu
To the best of our knowledge, this is the first end-to-end deep learning method for reconstructing a high-resolution LF image with a hybrid input.
1 code implementation • 30 May 2019 • Ziheng Zhang, Anpei Chen, Ling Xie, Jingyi Yu, Shenghua Gao
Specifically, we first introduce a new representation, namely a semantics-aware distance map (sem-dist map), to serve as our target for amodal segmentation instead of the commonly used masks and heatmaps.
no code implementations • 15 Apr 2019 • Zhong Li, Jinwei Ye, Yu Ji, Hao Sheng, Jingyi Yu
Particle Image Velocimetry (PIV) estimates fluid flow by analyzing the motion of injected particles.
no code implementations • 9 Apr 2019 • Mingyuan Zhou, Yu Ji, Yuqi Ding, Jinwei Ye, S. Susan Young, Jingyi Yu
In this paper, we introduce a novel concentric multi-spectral light field (CMSLF) design that is able to recover the shape and reflectance of surfaces with arbitrary material in one shot.
1 code implementation • 4 Apr 2019 • Xin Chen, Anqi Pang, Yang Wei, Lan Xui, Jingyi Yu
In this paper, we present TightCap, a data-driven scheme to capture both the human shape and dressed garments accurately with only a single 3D human scan, which enables numerous applications such as virtual try-on, biometrics and body evaluation.
no code implementations • 4 Apr 2019 • Minye Wu, Haibin Ling, Ning Bi, Shenghua Gao, Hao Sheng, Jingyi Yu
A natural solution to these challenges is to use multiple cameras with multiview inputs, though existing systems are mostly limited to specific targets (e.g., human), static cameras, and/or camera calibration.
no code implementations • 4 Apr 2019 • Zhang Chen, Yu Ji, Mingyuan Zhou, Sing Bing Kang, Jingyi Yu
We avoid the need for spatial constancy of albedo; instead, we use a new measure for albedo similarity that is based on the albedo norm profile.
1 code implementation • ICCV 2019 • Anpei Chen, Zhang Chen, Guli Zhang, Ziheng Zhang, Kenny Mitchell, Jingyi Yu
Our technique employs expression analysis for proxy face geometry generation and combines supervised and unsupervised learning for facial detail synthesis.
no code implementations • 7 Mar 2019 • Yuanxi Ma, Cen Wang, Shiying Li, Jingyi Yu
Robust segmentation of hair from portrait images remains challenging: hair does not conform to a uniform shape, style or even color; dark hair in particular lacks features.
no code implementations • 15 Oct 2018 • Anpei Chen, Minye Wu, Yingliang Zhang, Nianyi Li, Jie Lu, Shenghua Gao, Jingyi Yu
A surface light field represents the radiance of rays originating from any point on the surface in any direction.
no code implementations • CVPR 2018 • Zhong Li, Minye Wu, Wangyiteng Zhou, Jingyi Yu
The availability of affordable 3D full body reconstruction systems has given rise to free-viewpoint video (FVV) of human shapes.
no code implementations • ECCV 2018 • Shi Jin, Ruiynag Liu, Yu Ji, Jinwei Ye, Jingyi Yu
The bullet-time effect, presented in the feature film "The Matrix", has been widely adopted in films and TV commercials to create a striking stopped-time illusion.
no code implementations • ECCV 2018 • Ziheng Zhang, Yanyu Xu, Jingyi Yu, Shenghua Gao
Considering that 360° videos are usually stored as equirectangular panoramas, we propose to implement the spherical convolution on the panorama by stretching and rotating the kernel based on the location of the patch to be convolved.
no code implementations • 7 Aug 2018 • Qi Zhang, Chunping Zhang, Jinbo Ling, Qing Wang, Jingyi Yu
Based on the MPC model and projective transformation, we propose a calibration algorithm to verify our light field camera model.
no code implementations • CVPR 2018 • Yang Yang, Shi Jin, Ruiyang Liu, Sing Bing Kang, Jingyi Yu
The recovered layout is then used to guide shape estimation of the remaining objects using their normal information.
no code implementations • CVPR 2018 • Yanyu Xu, Yanbing Dong, Junru Wu, Zhengzhong Sun, Zhiru Shi, Jingyi Yu, Shenghua Gao
This paper explores gaze prediction in dynamic $360^\circ$ immersive videos, i.e., based on the history scan path and VR content, we predict where a viewer will look at an upcoming moment.
no code implementations • CVPR 2018 • Can Chen, Scott McCloskey, Jingyi Yu
With the rise of misinformation spread via social media channels, enabled by the increasing automation and realism of image manipulation tools, image forensics is an increasingly relevant problem.
no code implementations • 26 Mar 2018 • Huangjie Yu, Guli Zhang, Yuanxi Ma, Yingliang Zhang, Jingyi Yu
We present a novel semantic light field (LF) refocusing technique that can achieve unprecedented see-through quality.
no code implementations • 31 Jan 2018 • Zhong Li, Yu Ji, Wei Yang, Jinwei Ye, Jingyi Yu
In multi-view human body capture systems, the recovered 3D geometry or even the acquired imagery data can be heavily corrupted due to occlusions, noise, limited field-of-view, etc.
no code implementations • CVPR 2018 • Xuan Cao, Zhang Chen, Anpei Chen, Xin Chen, Cen Wang, Jingyi Yu
We present a novel 3D face reconstruction technique that leverages sparse photometric stereo (PS) and the latest advances in face registration/modeling from a single image.
no code implementations • 29 Nov 2017 • Xinqing Guo, Zhang Chen, Siyuan Li, Yang Yang, Jingyi Yu
We then construct three individual networks: a Focus-Net to extract depth from a single focal stack, an EDoF-Net to obtain the extended depth of field (EDoF) image from the focal stack, and a Stereo-Net to conduct stereo matching.
1 code implementation • 17 Oct 2017 • Li Yi, Lin Shao, Manolis Savva, Haibin Huang, Yang Zhou, Qirui Wang, Benjamin Graham, Martin Engelcke, Roman Klokov, Victor Lempitsky, Yuan Gan, Pengyu Wang, Kun Liu, Fenggen Yu, Panpan Shui, Bingyang Hu, Yan Zhang, Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Minki Jeong, Jaehoon Choi, Changick Kim, Angom Geetchandra, Narasimha Murthy, Bhargava Ramu, Bharadwaj Manda, M. Ramanathan, Gautam Kumar, P Preetham, Siddharth Srivastava, Swati Bhugra, Brejesh lall, Christian Haene, Shubham Tulsiani, Jitendra Malik, Jared Lafer, Ramsey Jones, Siyuan Li, Jie Lu, Shi Jin, Jingyi Yu, Qi-Xing Huang, Evangelos Kalogerakis, Silvio Savarese, Pat Hanrahan, Thomas Funkhouser, Hao Su, Leonidas Guibas
We introduce a large-scale 3D shape understanding benchmark using data and annotation from ShapeNet 3D object database.
1 code implementation • 9 Oct 2017 • Yanyu Xu, Shenghua Gao, Junru Wu, Nianyi Li, Jingyi Yu
Specifically, we propose to decompose a personalized saliency map (referred to as PSM) into a universal saliency map (referred to as USM) predictable by existing saliency detection models and a new discrepancy map across users that characterizes personalized saliency.
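The decomposition stated in this entry, PSM = USM + per-user discrepancy, can be sketched in a few lines. Using the cross-user mean as the universal component is our own illustrative choice; the paper learns both components with dedicated models.

```python
import numpy as np

def decompose_saliency(psms):
    """Split per-user personalized saliency maps (PSMs) into a shared
    universal saliency map (USM) plus per-user discrepancy maps,
    following the PSM = USM + discrepancy formulation.
    The mean-as-USM choice is an illustrative simplification."""
    psms = np.asarray(psms, dtype=float)      # shape: (users, H, W)
    usm = psms.mean(axis=0)                   # shared component
    discrepancies = psms - usm                # per-user residuals
    return usm, discrepancies

rng = np.random.default_rng(0)
users = rng.random((5, 8, 8))
usm, disc = decompose_saliency(users)
# Each PSM is exactly reconstructed as USM + its discrepancy map,
# and the discrepancies average to zero across users by construction.
print(np.allclose(usm + disc[0], users[0]))
```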
no code implementations • ICCV 2017 • Yujia Xue, Kang Zhu, Qiang Fu, Xilin Chen, Jingyi Yu
In this paper, we present a single camera hyperspectral light field imaging solution that we call Snapshot Plenoptic Imager (SPI).
no code implementations • ICCV 2017 • Yingliang Zhang, Peihong Yu, Wei Yang, Yuanxi Ma, Jingyi Yu
In this paper, we explore using light fields captured by plenoptic cameras or camera arrays as inputs.
no code implementations • 4 Sep 2017 • Kang Zhu, Yujia Xue, Qiang Fu, Sing Bing Kang, Xilin Chen, Jingyi Yu
There are two parts to extracting scene depth.
no code implementations • 2 Aug 2017 • Zhang Chen, Xinqing Guo, Siyuan Li, Xuan Cao, Jingyi Yu
Depth from defocus (DfD) and stereo matching are the two most studied passive depth sensing schemes.
no code implementations • CVPR 2017 • Can Chen, Scott McCloskey, Jingyi Yu
Recent advances on image manipulation techniques have made image forgery detection increasingly more challenging.
no code implementations • 28 Mar 2017 • Wei Liu, Xiaogang Chen, Chunhua Shen, Jingyi Yu, Qiang Wu, Jie Yang
In this paper, we propose a general framework for Robust Guided Image Filtering (RGIF), which contains a data term and a smoothness term, to solve the two issues mentioned above.
no code implementations • 15 Aug 2016 • Hao Zhu, Qing Wang, Jingyi Yu
Occlusion is one of the most challenging problems in depth estimation.
no code implementations • CVPR 2016 • Nianyi Li, Haiting Lin, Bilin Sun, Mingyuan Zhou, Jingyi Yu
In this paper, we present a novel LF sampling scheme by exploiting a special non-centric camera called the crossed-slit or XSlit camera.
no code implementations • ICCV 2015 • Haiting Lin, Can Chen, Sing Bing Kang, Jingyi Yu
The other is a data consistency measure based on analysis-by-synthesis, i.e., the difference between the synthesized focal stack given the hypothesized depth map and that from the LF.
no code implementations • 19 Jun 2015 • Xuehui Wang, Jinli Suo, Jingyi Yu, Yongdong Zhang, Qionghai Dai
Firstly, we capture the scene with a pinhole and analyze the scene content to determine primary edge orientations.
no code implementations • 17 Jun 2015 • Wei Liu, Yijun Li, Xiaogang Chen, Jie Yang, Qiang Wu, Jingyi Yu
A popular solution is upsampling the obtained noisy low-resolution depth map with the guidance of the companion high-resolution color image.
no code implementations • 15 Jun 2015 • Qiaosong Wang, Haiting Lin, Yi Ma, Sing Bing Kang, Jingyi Yu
We propose a novel approach that jointly removes reflection or translucent layer from a scene and estimates scene depth.
no code implementations • ICCV 2015 • Wei Yang, Haiting Lin, Sing Bing Kang, Jingyi Yu
We first conduct a comprehensive analysis to characterize DDAR, infer object depth from its AR, and model recoverable depth range, sensitivity, and error.
no code implementations • CVPR 2015 • Wei Yang, Yu Ji, Haiting Lin, Yang Yang, Sing Bing Kang, Jingyi Yu
This enables a sparsity-prior based solution for iteratively recovering the surface normal, the surface albedo, and the visibility function from a small number of images.
no code implementations • CVPR 2015 • Nianyi Li, Bilin Sun, Jingyi Yu
In this paper, we present a unified saliency detection framework for handling heterogeneous types of input data.
no code implementations • CVPR 2014 • Zhaolin Xiao, Qing Wang, Guoqing Zhou, Jingyi Yu
When using plenoptic camera for digital refocusing, angular undersampling can cause severe (angular) aliasing artifacts.
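For context on the aliasing issue this entry raises: digital refocusing is commonly implemented as shift-and-add over the angular (sub-aperture) samples, and when only a few such samples exist, the shifted copies no longer blend smoothly, which is the angular aliasing in question. The sketch below shows the plain shift-and-add baseline (integer shifts and the `alpha` disparity scale are our simplifications, not the paper's anti-aliasing method).

```python
import numpy as np

def refocus(subapertures, positions, alpha):
    """Shift-and-add digital refocusing: each sub-aperture view is
    shifted in proportion to its aperture coordinate and the views
    are averaged. With sparse angular sampling (few views), this is
    exactly the setting where aliasing artifacts appear.
    Illustrative sketch with integer horizontal shifts only."""
    out = np.zeros_like(subapertures[0], dtype=float)
    for img, u in zip(subapertures, positions):
        shift = int(round(alpha * u))     # disparity for this view
        out += np.roll(img, shift, axis=1)
    return out / len(subapertures)

# Four sub-aperture views of a flat scene refocus to the same flat image.
views = np.ones((4, 8, 8))
print(refocus(views, [-2, -1, 1, 2], alpha=1.5).shape)
```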
no code implementations • CVPR 2014 • Yu Ji, Jinwei Ye, Sing Bing Kang, Jingyi Yu
In particular, we show that linear tone mapping eliminates ringing but incurs severe contrast loss, while non-linear tone mapping functions such as Gamma curves slightly enhance contrast but introduce ringing.
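The two tone-mapping choices contrasted in this entry are easy to state concretely: a linear map leaves intensities untouched, while a gamma curve lifts midtones and shadows (adding contrast there, at the cost of the ringing the abstract mentions). The function below is a generic textbook sketch, not the paper's pipeline.

```python
import numpy as np

def tone_map(x, mode="linear", gamma=2.2):
    """Map normalized intensities in [0, 1] to display values.
    'linear' preserves values as-is (no ringing, low contrast);
    'gamma' applies the non-linear curve x**(1/gamma), which
    brightens midtones. Illustrative sketch only."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    if mode == "linear":
        return x
    return x ** (1.0 / gamma)   # gamma curve lifts midtones/shadows

# A mid-gray value is lifted by the gamma curve but unchanged linearly.
print(tone_map([0.5], "linear"), tone_map([0.5], "gamma"))
```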
no code implementations • CVPR 2014 • Nianyi Li, Jinwei Ye, Yu Ji, Haibin Ling, Jingyi Yu
Existing saliency detection approaches use images as inputs and are sensitive to foreground/background similarities, complex background textures, and occlusions.
no code implementations • CVPR 2014 • Can Chen, Haiting Lin, Zhan Yu, Sing Bing Kang, Jingyi Yu
Our bilateral consistency metric is used to indicate the probability of occlusions by analyzing the SCams.
no code implementations • CVPR 2014 • Erkang Cheng, Yu Pang, Ying Zhu, Jingyi Yu, Haibin Ling
Robust tracking of deformable objects such as catheters or vascular structures in X-ray images is an important technique used in image-guided medical interventions for effective motion compensation and dynamic multi-modality image fusion.
no code implementations • CVPR 2013 • Yu Ji, Jinwei Ye, Jingyi Yu
By observing the LF-probe through the gas flow, we acquire a dense set of ray-ray correspondences and then reconstruct their light paths.
no code implementations • CVPR 2013 • Xiaogang Chen, Sing Bing Kang, Jie Yang, Jingyi Yu
PatchGPs treat image patches as nodes and patch differences as edge weights for computing the shortest (geodesic) paths.
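The graph construction this entry describes (patches as nodes, patch differences as edge weights, shortest geodesic paths) maps directly onto Dijkstra's algorithm over a 4-connected patch grid. The L2 patch difference and grid connectivity below are our illustrative choices, not necessarily the paper's exact weights.

```python
import heapq
import numpy as np

def patch_geodesic(patches, src, dst):
    """Shortest (geodesic) path over a grid of image patches with
    patch differences as edge weights -- a minimal Dijkstra sketch
    of the PatchGP idea. patches: array (rows, cols, ph, pw);
    src, dst: (row, col) node indices."""
    rows, cols = patches.shape[:2]
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue                         # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                # Edge weight: L2 difference between neighboring patches.
                w = float(np.linalg.norm(patches[r, c] - patches[nr, nc]))
                nd = d + w
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# On a uniform image every edge weight is zero, so any path is geodesic
# with total cost 0.
path, cost = patch_geodesic(np.zeros((3, 3, 4, 4)), (0, 0), (2, 2))
print(cost)
```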
no code implementations • CVPR 2013 • Jinwei Ye, Yu Ji, Jingyi Yu
Specifically, we prove that parallel 3D lines map to 2D curves in an XSlit image and they converge at an XSlit Vanishing Point (XVP).