Smooth Diffusion
This repository is the official PyTorch implementation of Smooth Diffusion.
Smooth Diffusion: Crafting Smooth Latent Spaces in Diffusion Models
Jiayi Guo*, Xingqian Xu*, Yifan Pu, Zanlin Ni, Chaofei Wang, Manushree Vasu, Shiji Song, Gao Huang, Humphrey Shi
(Demo video: smooth-diffusion.mp4)
Smooth Diffusion is a new category of diffusion models that is simultaneously high-performing and smooth.
Our method formally introduces latent space smoothness to diffusion models like Stable Diffusion. This smoothness dramatically aids in: 1) improving the continuity of transitions in image interpolation, 2) reducing approximation errors in image inversion, and 3) better preserving unedited content in image editing.
News
- [2023.12.08] Paper released!
ToDo
- Release code and model weights
- Gradio Demo
Overview
Smooth Diffusion (c) enforces that the ratio between the variation of the input latent and the variation of the output prediction remains constant. We propose Training-time Smooth Diffusion (d) to optimize a "single-step snapshot" of the variation constraint in (c). DM: Diffusion model. Please refer to our paper for additional details.
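The "single-step snapshot" idea above can be illustrated with a minimal sketch (not the official implementation): perturb the input latent, run one denoising step, and penalize any deviation of the output variation from a fixed multiple of the input variation. The names `model`, `latent`, and the ratio constant `C` are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def smoothness_loss(model, latent, t, delta_scale=0.1, C=1.0):
    """Sketch of a constant-ratio variation constraint on one denoising step.

    Perturbs the input latent by `delta` and penalizes the gap between
    ||Delta prediction|| and C * ||Delta latent||.
    """
    # Random perturbation injected into the input latent.
    delta = delta_scale * torch.randn_like(latent)
    pred = model(latent, t)                    # single-step prediction
    pred_perturbed = model(latent + delta, t)  # prediction for perturbed input
    # Per-sample magnitudes of output and input variation.
    out_var = (pred_perturbed - pred).flatten(1).norm(dim=1)
    in_var = delta.flatten(1).norm(dim=1)
    # Enforce ||Delta prediction|| ~= C * ||Delta latent||.
    return F.mse_loss(out_var, C * in_var)
```

In training this term would be added, with some weight, to the standard diffusion denoising loss; see the paper for the actual objective.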
Code
Coming soon.
Visualizations
Image Interpolation
Using the Smooth LoRA trained atop Stable Diffusion V1.5.
Integrating the above Smooth LoRA into other community models.
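Latent-space image interpolation of the kind shown above is typically driven by interpolating between two initial noise latents; spherical linear interpolation (slerp) is the common choice, since it keeps intermediate latents on a comparable norm. A minimal sketch (not the repository's code):

```python
import torch

def slerp(z0, z1, alpha, eps=1e-8):
    """Spherical linear interpolation between two latent tensors.

    alpha=0 returns z0, alpha=1 returns z1; intermediate values trace an
    arc between the two latents rather than a straight line.
    """
    z0_flat, z1_flat = z0.flatten(), z1.flatten()
    # Angle between the two latents, clamped for numerical safety.
    cos = torch.dot(z0_flat, z1_flat) / (z0_flat.norm() * z1_flat.norm() + eps)
    theta = torch.acos(cos.clamp(-1 + eps, 1 - eps))
    return (torch.sin((1 - alpha) * theta) * z0
            + torch.sin(alpha * theta) * z1) / torch.sin(theta)
```

Each interpolated latent would then be decoded by the diffusion model; the smoother the latent space, the more continuous the resulting image sequence.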
Image Inversion
Image Editing
Citation
If you find our work helpful, please star this repo and cite our paper. Thanks for your support!
@article{guo2023smooth,
title={Smooth Diffusion: Crafting Smooth Latent Spaces in Diffusion Models},
author={Jiayi Guo and Xingqian Xu and Yifan Pu and Zanlin Ni and Chaofei Wang and Manushree Vasu and Shiji Song and Gao Huang and Humphrey Shi},
journal={arXiv preprint arXiv:2312.04410},
year={2023}
}
Contact
guo-jy20 at mails dot tsinghua dot edu dot cn