3D Generation
77 papers with code • 0 benchmarks • 0 datasets
Benchmarks
These leaderboards are used to track progress in 3D Generation.
Libraries
Use these libraries to find 3D Generation models and implementations
Most implemented papers
Controllable Text-to-3D Generation via Surface-Aligned Gaussian Splatting
Building upon our MVControl architecture, we employ a unique hybrid diffusion guidance method to direct the optimization process.
DreamView: Injecting View-specific Text Guidance into Text-to-3D Generation
Text-to-3D generation, which synthesizes 3D assets according to an overall text description, has significantly progressed.
RealPoint3D: Point Cloud Generation from a Single Image with Complex Background
Then, the image together with the retrieved shape model is fed into the proposed network to generate the fine-grained 3D point cloud.
Generative PointNet: Deep Energy-Based Learning on Unordered Point Sets for 3D Generation, Reconstruction and Classification
We propose a generative model of unordered point sets, such as point clouds, in the form of an energy-based model, where the energy function is parameterized by an input-permutation-invariant bottom-up neural network.
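The key property of such an energy function is permutation invariance: a shared per-point network followed by a symmetric pooling operation cannot depend on the ordering of the points. A minimal sketch (the tiny MLP and its weights are illustrative assumptions, not the paper's released model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for a tiny shared per-point MLP (illustrative only).
W1 = rng.standard_normal((3, 16))
W2 = rng.standard_normal((16, 1))

def energy(points):
    """Permutation-invariant energy: a shared per-point MLP followed by
    sum pooling, so reordering the points cannot change the output."""
    h = np.maximum(points @ W1, 0.0)   # same MLP applied to every point
    pooled = h.sum(axis=0)             # symmetric (sum) pooling
    return float(pooled @ W2)

pts = rng.standard_normal((128, 3))   # an unordered point set
perm = rng.permutation(len(pts))
assert np.isclose(energy(pts), energy(pts[perm]))  # order does not matter
```

Sum pooling is one choice of symmetric aggregator; max pooling, as in the original PointNet, gives the same invariance guarantee.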
3D Pose Transfer with Correspondence Learning and Mesh Refinement
3D pose transfer aims to transfer the pose of a source mesh to a target mesh while preserving the identity (e.g., body shape) of the target mesh.
SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation
We further complement the evaluation metrics of 3D generative models with the shading-image-based Fréchet inception distance (FID) scores to better assess visual quality and shape distribution of the generated shapes.
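FID measures the Fréchet distance between two Gaussians fitted to feature embeddings of real and generated images (here, shading images of the shapes). A simplified sketch under a diagonal-covariance assumption (full FID uses the matrix square root of the covariance product; the random features stand in for Inception embeddings):

```python
import numpy as np

def fid_diagonal(feats_a, feats_b):
    """Fréchet distance between two feature sets, simplified to diagonal
    Gaussians: ||mu_a - mu_b||^2 + sum(var_a + var_b - 2*sqrt(var_a*var_b))."""
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    var_a, var_b = feats_a.var(0), feats_b.var(0)
    return float(((mu_a - mu_b) ** 2).sum()
                 + (var_a + var_b - 2.0 * np.sqrt(var_a * var_b)).sum())

rng = np.random.default_rng(2)
real = rng.normal(0.0, 1.0, (1000, 8))  # stand-in features of real shadings
fake = rng.normal(0.0, 1.0, (1000, 8))  # stand-in features of generated ones

d_self = fid_diagonal(real, real)   # a set against itself scores ~0
d_cross = fid_diagonal(real, fake)  # distinct samples score > 0
```

A lower score indicates that the generated shading images better match the real distribution in both mean and spread.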
Neural Wavelet-domain Diffusion for 3D Shape Generation
This paper presents a new approach for 3D shape generation, enabling direct generative modeling on a continuous implicit representation in the wavelet domain.
RenderDiffusion: Image Diffusion for 3D Reconstruction, Inpainting and Generation
In this paper, we present RenderDiffusion, the first diffusion model for 3D generation and inference, trained using only monocular 2D supervision.
Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation
We propose to apply chain rule on the learned gradients, and back-propagate the score of a diffusion model through the Jacobian of a differentiable renderer, which we instantiate to be a voxel radiance field.
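The chain rule step is the core mechanism: the 2D score evaluated on a rendering is pulled back onto the 3D parameters through the renderer's Jacobian. A toy sketch (the linear "renderer" and Gaussian score are stand-ins for a voxel radiance field and a pretrained diffusion model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins (illustrative, not the paper's code): a linear "renderer"
# mapping scene parameters theta to an image x, plus a score function from
# a "pretrained" 2D model, here the score of a standard Gaussian, -x.
J = rng.standard_normal((4, 6))  # Jacobian of the renderer, dx/dtheta

def render(theta):
    return J @ theta

def score_2d(x):
    return -x  # score (grad log-density) of a standard Gaussian prior

theta = rng.standard_normal(6)   # 3D scene parameters (e.g. voxel features)

# Score Jacobian Chaining: back-propagate the 2D score through the
# renderer's Jacobian to obtain a gradient on the 3D parameters.
grad_theta = J.T @ score_2d(render(theta))
theta = theta + 1e-2 * grad_theta  # one ascent step on the 3D parameters
```

In practice the Jacobian-vector product is computed by automatic differentiation through a differentiable renderer rather than with an explicit Jacobian matrix.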
NeRDi: Single-View NeRF Synthesis with Language-Guided Diffusion as General Image Priors
Formulating single-view reconstruction as an image-conditioned 3D generation problem, we optimize the NeRF representations by minimizing a diffusion loss on its arbitrary view renderings with a pretrained image diffusion model under the input-view constraint.
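The diffusion loss on a rendered view follows the standard denoising objective: noise the rendering, ask a frozen 2D model to predict the noise, and penalize the prediction error. A minimal sketch (the toy rendering and noise-prediction functions are assumptions for illustration, not NeRDi's actual networks):

```python
import numpy as np

rng = np.random.default_rng(3)

def render_nerf(params, view):
    return params * view          # toy stand-in for NeRF volume rendering

def predict_noise(x_t, t):
    return x_t * 0.1              # stand-in for the frozen 2D diffusion model

params = rng.standard_normal(64)  # NeRF parameters (toy vector)
view = rng.standard_normal(64)    # a randomly sampled camera view

# Noise the rendering at diffusion time t, then score the frozen model's
# denoising error; this loss is minimized with respect to the NeRF params.
x0 = render_nerf(params, view)
t = 0.5
eps = rng.standard_normal(64)
x_t = np.sqrt(1.0 - t) * x0 + np.sqrt(t) * eps  # noised rendering

loss = float(((predict_noise(x_t, t) - eps) ** 2).mean())
```

The input-view constraint enters as an additional reconstruction term on the known view, so the NeRF both matches the observed image and looks plausible to the diffusion prior from novel views.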