# DreamArtist
This repo is the official PyTorch implementation of "DreamArtist: Towards Controllable One-Shot Text-to-Image Generation via Contrastive Prompt-Tuning" with Stable-Diffusion-webui.
Stable-Diffusion-webui extension version: DreamArtist-sd-webui-extension
Everyone is an artist. Rome wasn't built in a day, but your artist dreams can be!
With just one training image, DreamArtist learns both its content and style, generating diverse, high-quality images with high controllability. DreamArtist embeddings can be easily combined with additional descriptions, as well as with other learned embeddings.
## Setup and Running
Clone this repo:

```bash
git clone https://github.com/7eu7d7/DreamArtist-stable-diffusion
```

Then follow the webui instructions to complete the installation.
## Training and Usage
First, create the positive and negative embeddings in the DreamArtist Create Embedding tab.
### Preview Setting

After that, the names of the positive and negative embeddings (`{name}` and `{name}-neg`) should be entered into the txt2img tab along with some common descriptions. This ensures a correct preview image.
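For example, assuming a hypothetical embedding pair created with the name `cat1`, the txt2img fields might look like:

```
Prompt: cat1, masterpiece, best quality
Negative prompt: cat1-neg
```

(`cat1` is an illustrative name; use whatever name you chose in the Create Embedding tab.)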
### Train

Then, select the positive embedding and set the parameters and image folder path in the Train tab to start training. The corresponding negative embedding is loaded automatically.

If your VRAM is low or you want to save time, you can uncheck reconstruction. It is generally better to train without filewords.

Remember to check the option below, otherwise the preview will be wrong.
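Conceptually, training optimizes the positive and negative embeddings jointly through a guidance-style combination of two noise predictions. The sketch below illustrates this idea with a toy denoiser standing in for Stable Diffusion's UNet; all names and shapes here are illustrative assumptions, not the repo's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(noisy_latent, embedding):
    # Stand-in for the UNet's noise prediction conditioned on a prompt
    # embedding; a real model predicts the noise added to the latent.
    return 0.5 * noisy_latent + embedding.mean()

def training_step(latent, pos_emb, neg_emb, guidance_scale=5.0):
    # Toy forward diffusion: add Gaussian noise to the latent.
    noise = rng.standard_normal(latent.shape)
    noisy = latent + noise
    eps_pos = toy_denoiser(noisy, pos_emb)   # positive-conditioned prediction
    eps_neg = toy_denoiser(noisy, neg_emb)   # negative-conditioned prediction
    # Guidance-style combination: the jointly trained negative embedding
    # pushes the prediction away from unwanted content.
    eps = eps_neg + guidance_scale * (eps_pos - eps_neg)
    # Noise-reconstruction loss drives updates to both embeddings.
    return float(np.mean((eps - noise) ** 2))
```

In the actual extension both embeddings receive gradients from this loss, which is why the negative embedding is loaded automatically alongside the positive one.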
## Inference
Enter the trained positive and negative embeddings into txt2img to generate images with the DreamArtist prompt.
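At inference, the learned pseudo-word simply expands into its trained vectors, which sit alongside the embeddings of any additional description tokens. A minimal sketch of that expansion, with hypothetical shapes (the webui performs this internally when the embedding name appears in a prompt):

```python
import numpy as np

def build_prompt_embedding(learned_vectors, description_vectors):
    # The learned pseudo-word expands into its trained vectors, which
    # are concatenated with the embeddings of the other prompt tokens.
    return np.concatenate([learned_vectors, description_vectors], axis=0)
```

For example, a 3-vector learned embedding followed by 5 description-token embeddings yields an 8-token conditioning sequence.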
Tested models (EMA version required):
- Stable Diffusion v1.5
- animefull-latest
- Anything v3.0
Embeddings can be transferred between different models trained on the same dataset.