no code implementations • 18 Mar 2024 • Axel Sauer, Frederic Boesel, Tim Dockhorn, Andreas Blattmann, Patrick Esser, Robin Rombach
Distillation methods, like the recently introduced adversarial diffusion distillation (ADD), aim to shift the model from many-shot to single-step inference, albeit at the cost of expensive and difficult optimization due to its reliance on a fixed pretrained DINOv2 discriminator.
1 code implementation • 5 Mar 2024 • Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, Dustin Podell, Tim Dockhorn, Zion English, Kyle Lacey, Alex Goodwin, Yannik Marek, Robin Rombach
Rectified flow is a recent generative model formulation that connects data and noise in a straight line.
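Concretely, the formulation trains a network to regress the constant velocity along that straight path. The following is a minimal, hedged sketch of the training objective; `velocity_model` is an illustrative placeholder, not the paper's architecture:

```python
# Minimal sketch of the rectified-flow training objective (illustrative only;
# `velocity_model` is a placeholder, not the paper's transformer backbone).
import torch

def rectified_flow_loss(velocity_model, x0):
    # Straight-line interpolation between data x0 and Gaussian noise.
    noise = torch.randn_like(x0)
    t = torch.rand(x0.shape[0], *([1] * (x0.dim() - 1)))  # uniform timesteps
    x_t = (1.0 - t) * x0 + t * noise
    # The velocity of the straight path is constant: d(x_t)/dt = noise - x0.
    target = noise - x0
    return torch.mean((velocity_model(x_t, t) - target) ** 2)
```

Sampling then integrates dx/dt = v(x, t) from noise at t = 1 back to data at t = 0, e.g. with a few Euler steps (x ← x − Δt · v).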
4 code implementations • 28 Nov 2023 • Axel Sauer, Dominik Lorenz, Andreas Blattmann, Robin Rombach
We introduce Adversarial Diffusion Distillation (ADD), a novel training approach that efficiently samples large-scale foundational image diffusion models in just 1-4 steps while maintaining high image quality.
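As a rough illustration of the idea (not the authors' implementation), the student is trained with a weighted combination of an adversarial loss from a discriminator and a distillation loss matching the output of a frozen diffusion teacher; all module names and the noising step below are simplified placeholders:

```python
# Hedged sketch of an ADD-style combined objective. `student`, `teacher`, and
# `discriminator` are placeholder callables; the real method scores the
# student at sampled diffusion noise levels and uses a pretrained feature
# discriminator, which is omitted here for brevity.
import torch
import torch.nn.functional as F

def add_style_losses(student, teacher, discriminator, x0, distill_weight=1.0):
    noise = torch.randn_like(x0)
    x_noisy = x0 + noise                      # simplified forward noising
    x_student = student(x_noisy)              # single-step student prediction
    with torch.no_grad():
        x_teacher = teacher(x_noisy)          # frozen teacher's target
    adv = F.softplus(-discriminator(x_student)).mean()  # non-saturating GAN loss
    distill = F.mse_loss(x_student, x_teacher)
    return adv + distill_weight * distill
```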
2 code implementations • 2023 • Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, Varun Jampani, Robin Rombach
We then explore the impact of finetuning our base model on high-quality data and train a text-to-video model that is competitive with closed-source video generation models.
4 code implementations • 4 Jul 2023 • Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, Robin Rombach
We present SDXL, a latent diffusion model for text-to-image synthesis.
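The released SDXL base checkpoint can be run with the third-party Hugging Face diffusers library; this usage example is independent of the paper itself:

```python
# Running the public SDXL base checkpoint via the `diffusers` library.
# This is a usage example, not the paper's training code.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(prompt="a photo of an astronaut riding a horse").images[0]
image.save("sdxl_sample.png")
```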
3 code implementations • CVPR 2023 • Andreas Blattmann, Robin Rombach, Huan Ling, Tim Dockhorn, Seung Wook Kim, Sanja Fidler, Karsten Kreis
We first pre-train an LDM on images only; then, we turn the image generator into a video generator by introducing a temporal dimension to the latent space diffusion model and fine-tuning on encoded image sequences, i.e., videos (a minimal sketch of this temporal inflation appears below).
Ranked #5 on Text-to-Video Generation on MSR-VTT (CLIP-FID metric)
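One common way to realize such a temporal dimension, shown here only as an illustrative sketch rather than the paper's code, is to interleave attention layers that mix information across frames into an otherwise image-trained backbone:

```python
# Hedged sketch of a temporal layer for "inflating" an image model to video.
# Module structure and shapes are illustrative placeholders.
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Self-attention over the frame axis, applied per spatial location.

    `channels` must be divisible by `num_heads`.
    """
    def __init__(self, channels, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):  # x: (batch, frames, channels, height, width)
        b, f, c, h, w = x.shape
        # Fold space into the batch so attention mixes information across time.
        y = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, f, c)
        y, _ = self.attn(y, y, y)
        y = y.reshape(b, h, w, f, c).permute(0, 3, 4, 1, 2)
        return x + y  # residual connection; near-identity at initialization
```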
1 code implementation • 26 Jul 2022 • Robin Rombach, Andreas Blattmann, Björn Ommer
In RDMs, a set of nearest neighbors is retrieved from an external database during training for each training instance, and the diffusion model is conditioned on these informative samples.
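A minimal sketch of this retrieval step, with a placeholder embedding database standing in for the paper's actual retrieval setup, might look as follows:

```python
# Hedged sketch of nearest-neighbor retrieval for conditioning a diffusion
# model. The embedding database and query encoder are assumed placeholders.
import torch
import torch.nn.functional as F

def retrieve_neighbors(query_embedding, database_embeddings, k=4):
    # Cosine similarity between the query and a precomputed database.
    q = F.normalize(query_embedding, dim=-1)          # (dim,)
    db = F.normalize(database_embeddings, dim=-1)     # (num_entries, dim)
    scores = db @ q                                   # (num_entries,)
    topk = torch.topk(scores, k).indices
    return database_embeddings[topk]                  # k conditioning vectors

# During training, the k neighbor embeddings are fed to the diffusion model,
# e.g. via cross-attention, alongside the noisy latent of the training image.
```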
2 code implementations • 25 Apr 2022 • Andreas Blattmann, Robin Rombach, Kaan Oktay, Jonas Müller, Björn Ommer
Much of this success stems from the scalability of these architectures, and hence from a dramatic increase in model complexity and in the computational resources invested in training these models.
34 code implementations • CVPR 2022 • Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer
By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond.
Ranked #2 on Layout-to-Image Generation on COCO-Stuff 256x256
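For the entry above, a schematic latent diffusion sampler, with placeholder `denoiser` and `decoder` components and a standard DDPM schedule rather than the released configuration, could look like this:

```python
# Hedged sketch of sampling from a latent diffusion model: standard DDPM
# ancestral sampling run in the compressed latent space, then decoded to
# pixels. `denoiser` and `decoder` are placeholders for trained components.
import torch

@torch.no_grad()
def sample_latent_diffusion(denoiser, decoder, shape, timesteps=1000):
    betas = torch.linspace(1e-4, 0.02, timesteps)   # illustrative schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    z = torch.randn(shape)                          # start from latent noise
    for t in reversed(range(timesteps)):
        eps = denoiser(z, torch.tensor([t]))        # predict the added noise
        mean = (z - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) \
               / torch.sqrt(alphas[t])
        noise = torch.randn_like(z) if t > 0 else torch.zeros_like(z)
        z = mean + torch.sqrt(betas[t]) * noise
    return decoder(z)                               # latents back to pixels
```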
no code implementations • NeurIPS 2021 • Patrick Esser, Robin Rombach, Andreas Blattmann, Björn Ommer
Thus, in contrast to pure autoregressive models, it can solve free-form image inpainting and, in the case of conditional models, local, text-guided image modification without requiring mask-specific training.
Ranked #4 on Text-to-Image Generation on Conceptual Captions
2 code implementations • ICCV 2021 • Andreas Blattmann, Timo Milbich, Michael Dorkenwald, Björn Ommer
Poking a still image elicits distinctive movement, despite evident variations caused by the stochastic nature of our world.
1 code implementation • CVPR 2021 • Andreas Blattmann, Timo Milbich, Michael Dorkenwald, Björn Ommer
Given a static image of an object and a local poking of a pixel, the approach then predicts how the object would deform over time.
1 code implementation • CVPR 2021 • Michael Dorkenwald, Timo Milbich, Andreas Blattmann, Robin Rombach, Konstantinos G. Derpanis, Björn Ommer
Video understanding calls for a model to learn the characteristic interplay between static scene content and its dynamics: given an image, the model must be able to predict a future progression of the portrayed scene; conversely, a video should be explained in terms of its static image content and all the remaining characteristics not present in the initial frame.
1 code implementation • CVPR 2021 • Andreas Blattmann, Timo Milbich, Michael Dorkenwald, Björn Ommer
Using this representation, we are able to change the behavior of a person depicted in an arbitrary posture, or even to directly transfer behavior observed in a given video sequence.