Novel View Synthesis
344 papers with code • 17 benchmarks • 34 datasets
Synthesize a target image with an arbitrary target camera pose from given source images and their camera poses.
See the Wiki for a more detailed introduction.
Synthesis methods include NeRF, multi-plane images (MPI), and others.
(Image credit: Multi-view to Novel View: Synthesizing Novel Views with Self-Learned Confidence)
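As an illustration of the NeRF family of methods mentioned above: a pixel is rendered by alpha-compositing color samples along a camera ray according to their volume densities. Below is a minimal, self-contained sketch of that compositing step (all names and the toy values are illustrative, not taken from any specific implementation):

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite color samples along one camera ray.

    densities: (N,) non-negative volume densities sigma_i at each sample
    colors:    (N, 3) RGB color predicted at each sample
    deltas:    (N,) distances between adjacent samples along the ray
    """
    alphas = 1.0 - np.exp(-densities * deltas)       # per-segment opacity
    trans = np.cumprod(1.0 - alphas + 1e-10)         # transmittance after each sample
    trans = np.concatenate([[1.0], trans[:-1]])      # shift so T_i = prod_{j<i}(1 - alpha_j)
    weights = trans * alphas                         # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)   # final pixel RGB

# Toy example: one effectively opaque red sample between two empty ones.
dens = np.array([0.0, 50.0, 0.0])
cols = np.array([[0.0, 0.0, 1.0],   # blue (behind nothing, zero density)
                 [1.0, 0.0, 0.0],   # red  (opaque)
                 [0.0, 1.0, 0.0]])  # green (occluded)
dlt = np.ones(3)
rgb = composite_ray(dens, cols, dlt)  # close to pure red
```

The opaque middle sample absorbs essentially all remaining transmittance, so the occluded green sample contributes nothing; this occlusion-aware weighting is what lets such models learn geometry from 2D supervision alone.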
Libraries
Use these libraries to find Novel View Synthesis models and implementations.
Most implemented papers
LeftRefill: Filling Right Canvas based on Left Reference through Generalized Text-to-Image Diffusion Model
As an exemplar, we leverage LeftRefill to address two different challenges: reference-guided inpainting and novel view synthesis, based on the pre-trained StableDiffusion.
Transformation-Grounded Image Generation Network for Novel 3D View Synthesis
Instead of taking a 'blank slate' approach, we first explicitly infer the parts of the geometry visible both in the input and novel views and then re-cast the remaining synthesis problem as image completion.
Monocular Neural Image Based Rendering with Continuous View Control
The approach is self-supervised and only requires 2D images and associated view transforms for training.
Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis
In this paper, we propose to use a 3D body mesh recovery module to disentangle the pose and shape, which can not only model the joint location and rotation but also characterize the personalized body shape.
A Neural Rendering Framework for Free-Viewpoint Relighting
We present a novel Relightable Neural Renderer (RNR) for simultaneous view synthesis and relighting using multi-view image inputs.
Liquid Warping GAN with Attention: A Unified Framework for Human Image Synthesis
Also, we build a new dataset, namely iPER dataset, for the evaluation of human motion imitation, appearance transfer, and novel view synthesis.
pixelNeRF: Neural Radiance Fields from One or Few Images
This allows the network to be trained across multiple scenes to learn a scene prior, enabling it to perform novel view synthesis in a feed-forward manner from a sparse set of views (as few as one).
Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video
We show that a single handheld consumer-grade camera is sufficient to synthesize sophisticated renderings of a dynamic scene from novel virtual camera views, e.g., a 'bullet-time' video effect.
Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis
We present DietNeRF, a 3D neural scene representation estimated from a few images.
Neural RGB-D Surface Reconstruction
Obtaining high-quality 3D reconstructions of room-scale scenes is of paramount importance for upcoming applications in AR or VR.