• Stars: 3,090
  • Rank: 13,989 (Top 0.3%)
  • Language: Python
  • License: Apache License 2.0
  • Created: about 1 year ago
  • Updated: 3 months ago


Repository Details

Edit anything in images, powered by Segment Anything, ControlNet, Stable Diffusion, etc.

Edit Anything by Segment-Anything

HuggingFace space

This is an ongoing project that aims to edit and generate anything in an image, powered by Segment Anything, ControlNet, BLIP2, Stable Diffusion, etc.

All forms of contribution and suggestions are very welcome!

News 🔥

2023/08/09 - Revised the UI and code and fixed multiple known issues.

2023/07/25 - EditAnything was accepted to the ACM MM demo track.

2023/06/09 - Support cross-image region drag and merge, unleash creative fusion!

2023/05/24 - Support multiple high-quality character edits: clothes, haircuts, colored contact lenses.

2023/05/22 - Support sketch-to-image by adjusting the mask alignment strength in sketch2image.py!

2023/05/13 - Support interactive segmentation with click operation!

2023/05/11 - Support tile model for detail refinement!

2023/05/04 - New demos of beauty/handsome edit/generation are released!

2023/05/04 - ControlNet-based inpainting on any LoRA model is supported now. EditAnything can operate on any base/LoRA model without requiring a dedicated inpainting model.

More update logs.

2023/05/01 - Models V0.4 based on Stable Diffusion 1.5/2.1 are released. The new models are trained with more data and iterations. See the Model Zoo.

2023/04/20 - We support customized editing with DreamBooth.

2023/04/17 - We support converting SAM masks to semantic segmentation masks.

2023/04/17 - We support different alignment degrees between the edited parts and the SAM mask; check it out in the DEMO!

2023/04/15 - Gradio demo on Huggingface is released!

2023/04/14 - A new model trained on the LAION dataset is released.

2023/04/13 - Support pretrained-model auto-downloading and a Gradio interface in sam2image.py.

2023/04/12 - An initial version of text-guided edit-anything is in sam2groundingdino_edit.py (object-level) and sam2vlpart_edit.py (part-level).

2023/04/10 - An initial version of edit-anything is in sam2edit.py.

2023/04/10 - We transferred the pretrained model into diffusers style; it is auto-loaded when using sam2image_diffuser.py. Now you can easily combine our pretrained model with different base models!

2023/04/09 - We released a pretrained model of Stable Diffusion-based ControlNet that generates images conditioned on SAM segmentation masks.

Features

Try our HuggingFace DEMO 🔥🔥🔥

Unleash creative fusion: cross-image region drag and merge! 🔥


Clothes editing! 🔥


Haircut editing! 🔥


Colored contact lenses! 🔥


Human replacement with tile refinement! 🔥


Draw your sketch and generate your image! 🔥

prompt: "a paint of a tree in the ground with a river."

More demos.

prompt: "a paint, river, mountain, sun, cloud, beautiful field."


prompt: "a man, midsplit center parting hair, HD."


prompt: "a woman, long hair, detailed facial details, photorealistic, HD, beautiful face, solo, candle, brown hair, blue eye."


You can also use the generated image together with the SAM model to further refine your sketch!

Generate/Edit your beauty!!! 🔥🔥🔥

Edit Your beauty and Generate Your beauty


Customized editing with layout alignment control.


EditAnything+DreamBooth: Train a customized DreamBooth Model with `tools/train_dreambooth_inpaint.py` and replace the base model in `sam2edit.py` with the trained model.
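
As a hedged sketch of that workflow: the flags below follow the diffusers DreamBooth-inpainting example script that `tools/train_dreambooth_inpaint.py` appears to mirror, so treat the argument names, data paths, and hyperparameters as assumptions rather than this repository's verified interface.

# Fine-tune an inpainting-capable base model on a few subject images (flags assumed).
python tools/train_dreambooth_inpaint.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-inpainting" \
  --instance_data_dir="./my_subject_images" \
  --instance_prompt="a photo of sks person" \
  --output_dir="./dreambooth_model" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=5e-6 \
  --max_train_steps=400
# Then replace the base model path in sam2edit.py with ./dreambooth_model.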

Image Editing with layout alignment control.


Keep the layout and Generate your season!

(panels: original paint, SAM mask)

Human Prompt: "A paint of spring/summer/autumn/winter field."

(panels: spring, summer, autumn, winter)

Edit Specific Thing by Text-Grounding and Segment-Anything

Editing by Text-guided Part Mask

Text Grounding: "dog head"

Human Prompt: "cute dog" p

More demos.

Text Grounding: "cat eye"

Human Prompt: "A cute small humanoid cat" p

Editing by Text-guided Object Mask

Text Grounding: "bench"

Human Prompt: "bench" p

Edit Anything by Segment-Anything

Human Prompt: "esplendent sunset sky, red brick wall" p

More demos.

Human Prompt: "chairs by the lake, sunny day, spring" p

Generate Anything by Segment-Anything

BLIP2 Prompt: "a large white and red ferry" p (1:input image; 2: segmentation mask; 3-8: generated images.)

More demos.

BLIP2 Prompt: "a cloudy sky" p

BLIP2 Prompt: "a black drone flying in the blue sky" p

  1. The human prompt and the BLIP2-generated prompt together build the text instruction.
  2. The SAM model segments the input image into category-free segmentation masks.
  3. The segmentation masks and the text instruction guide the image generation, as sketched below.
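
To make those three steps concrete, here is a minimal Python sketch assembled from the public segment-anything, transformers, and diffusers APIs. The checkpoint names, the random-color rendering of the SAM masks, and the generation parameters are illustrative assumptions; the repository's actual pipeline lives in app.py and the sam2*.py scripts.

import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry
from transformers import Blip2ForConditionalGeneration, Blip2Processor

device = "cuda" if torch.cuda.is_available() else "cpu"
image = Image.open("input.png").convert("RGB")

# Step 1: BLIP2 captions the image; the caption plus the human prompt form the instruction.
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
blip2 = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b").to(device)
ids = blip2.generate(**processor(images=image, return_tensors="pt").to(device))
instruction = processor.decode(ids[0], skip_special_tokens=True) + ", esplendent sunset sky"

# Step 2: SAM segments the image into category-free masks; paint each mask a random
# color to build the ControlNet condition image (this rendering scheme is an assumption).
sam = sam_model_registry["vit_h"](checkpoint="models/sam_vit_h_4b8939.pth").to(device)
masks = SamAutomaticMaskGenerator(sam).generate(np.array(image))
condition = np.zeros((image.height, image.width, 3), dtype=np.uint8)
for m in masks:
    condition[m["segmentation"]] = np.random.randint(0, 256, 3)

# Step 3: the SAM-mask-conditioned ControlNet guides Stable Diffusion generation.
controlnet = ControlNetModel.from_pretrained("shgao/edit-anything-v0-4-sd15")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet
).to(device)
result = pipe(instruction, image=Image.fromarray(condition), num_inference_steps=30)
result.images[0].save("output.png")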

Generate semantic labels for each SAM mask.


python sam2semantic.py

Highlight features:

  • A pretrained ControlNet with the SAM mask as condition enables image generation with fine-grained control.
  • Category-unrelated SAM masks enable more forms of editing and generation.
  • BLIP2 text generation enables control without manually written text prompts.

Setup

Create an environment

    conda env create -f environment.yaml
    conda activate control

Install BLIP2 and SAM

Put these models in the models folder.

# BLIP2 and SAM will be auto-installed by running app.py
pip install git+https://github.com/huggingface/transformers.git

pip install git+https://github.com/facebookresearch/segment-anything.git

# For text-guided editing
pip install git+https://github.com/openai/CLIP.git

pip install git+https://github.com/facebookresearch/detectron2.git

pip install git+https://github.com/IDEA-Research/GroundingDINO.git

Download pretrained model

# Segment-anything ViT-H SAM model will be auto downloaded. 

# BLIP2 model will be auto downloaded.

# Part Grounding Swin-Base Model.
wget https://github.com/Cheems-Seminar/segment-anything-and-name-it/releases/download/v1.0/swinbase_part_0a0000.pth

# Grounding DINO Model.
wget https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha2/groundingdino_swinb_cogcoor.pth

# The pretrained EditAnything model is fetched from Hugging Face automatically.
# No need to download it manually, but please install safetensors for reading the checkpoint.
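
For that last requirement, the standard install command is:

pip install safetensors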

Run Demo

python app.py
# or
python editany.py
# or
python sam2image.py
# or
python sam2vlpart_edit.py
# or
python sam2groundingdino_edit.py

Model Zoo

  • SAM Pretrained (v0-1): good nature sense. Download: shgao/edit-anything-v0-1-1
  • LAION Pretrained (v0-3): good face. Download: shgao/edit-anything-v0-3
  • LAION Pretrained (v0-4): supports Stable Diffusion 1.5/2.1, more training data and iterations, good face. Download: shgao/edit-anything-v0-4-sd15, shgao/edit-anything-v0-4-sd21

Training

  1. Generate the training dataset with dataset_build.py.
  2. Transfer the Stable Diffusion base model with tool_add_control_sd21.py.
  3. Train the model with sam_train_sd21.py (example commands below).
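
A hedged sketch of those three steps as commands. The checkpoint paths and the two-argument form of tool_add_control_sd21.py follow the upstream ControlNet tooling and are assumptions here, not verified against this repository:

# 1. Build the SAM-conditioned training dataset.
python dataset_build.py

# 2. Initialize ControlNet weights from a Stable Diffusion 2.1 checkpoint (input ckpt, output ckpt).
python tool_add_control_sd21.py ./models/v2-1_512-ema-pruned.ckpt ./models/control_sam_sd21_ini.ckpt

# 3. Train the SAM-conditioned ControlNet.
python sam_train_sd21.py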

Acknowledgement

@InProceedings{gao2023editanything,
  author = {Gao, Shanghua and Lin, Zhijie and Xie, Xingyu and Zhou, Pan and Cheng, Ming-Ming and Yan, Shuicheng},
  title = {EditAnything: Empowering Unparalleled Flexibility in Image Editing and Generation},
  booktitle = {Proceedings of the 31st ACM International Conference on Multimedia, Demo track},
  year = {2023},
}

This project is based on:

Segment Anything, ControlNet, BLIP2, MDT, Stable Diffusion, Large-scale Unsupervised Semantic Segmentation, Grounded Segment Anything: From Objects to Parts, Grounded-Segment-Anything

Thanks for these amazing projects!

More Repositories

  1. poolformer: PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral). Jupyter Notebook, 1,243 stars.
  2. envpool: C++-based high-performance parallel environment execution engine (vectorized env) for general RL environments. C++, 1,017 stars.
  3. volo: VOLO: Vision Outlooker for Visual Recognition. Jupyter Notebook, 911 stars.
  4. Adan: Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models. Python, 714 stars.
  5. MDT: Masked Diffusion Transformer is the SOTA for image synthesis (ICCV 2023). Python, 384 stars.
  6. lorahub: The official repository of the paper "LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition". Python, 380 stars.
  7. metaformer: MetaFormer Baselines for Vision (TPAMI 2024). Jupyter Notebook, 360 stars.
  8. mvp: Direct Multi-view Multi-person 3D Human Pose Estimation (NeurIPS 2021). Python, 307 stars.
  9. CLoT: Official codebase of the paper "Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language Models with Creative Humor Generation" (CVPR 2024). Python, 235 stars.
  10. iFormer: iFormer: Inception Transformer. Python, 226 stars.
  11. inceptionnext: InceptionNeXt: When Inception Meets ConvNeXt (CVPR 2024). Python, 193 stars.
  12. ptp: The code for "Position-guided Text Prompt for Vision-Language Pre-training" (CVPR 2023). Python, 142 stars.
  13. BindDiffusion: BindDiffusion: One Diffusion Model to Bind Them All. Python, 140 stars.
  14. FDM: The official PyTorch implementation of Fast Diffusion Model. Python, 83 stars.
  15. mugs: A PyTorch implementation of Mugs proposed by the paper "Mugs: A Multi-Granular Self-Supervised Learning Framework". Python, 78 stars.
  16. symbolic-instruction-tuning: The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning". Python, 58 stars.
  17. ScaleLong: The official repository of the paper "ScaleLong: Towards More Stable Training of Diffusion Model via Scaling Network Long Skip Connection" (NeurIPS 2023). Python, 46 stars.
  18. VGT: Video Graph Transformer for Video Question Answering (ECCV 2022). Python, 40 stars.
  19. Agent-Smith: Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast. Python, 40 stars.
  20. jax_xc: Exchange-correlation functionals translated from libxc to JAX. Python, 39 stars.
  21. ILD: Imitation Learning via Differentiable Physics. Python, 33 stars.
  22. Consistent3D: The official PyTorch implementation of Consistent3D (CVPR 2024). Python, 33 stars.
  23. edp: Efficient Diffusion Policy (NeurIPS 2023). Python, 32 stars.
  24. GP-Nerf: Official implementation for GP-NeRF (ECCV 2022). Python, 31 stars.
  25. rosmo: Code for "Efficient Offline Policy Optimization with a Learned Model" (ICLR 2023). Python, 26 stars.
  26. d4ft: A JAX library for Density Functional Theory. Python, 25 stars.
  27. hloenv: An environment based on XLA for deep learning compiler optimization research. C++, 23 stars.
  28. dualformer: Python, 23 stars.
  29. MMCBench: Python, 22 stars.
  30. optim4rl: Optim4RL is a JAX framework of learning to optimize for reinforcement learning. Python, 21 stars.
  31. DiffMemorize: On Memorization in Diffusion Models. Python, 19 stars.
  32. finetune-fair-diffusion: Code of the paper "Finetuning Text-to-Image Diffusion Models for Fairness". Python, 19 stars.
  33. GDPO: Graph Diffusion Policy Optimization. Python, 18 stars.
  34. TEC: Python, 15 stars.
  35. offbench: Python, 11 stars.
  36. OPER: Code for the paper "Offline Prioritized Experience Replay". Jupyter Notebook, 11 stars.
  37. PatchAIL: Implementation of PatchAIL in the ICLR 2023 paper "Visual Imitation with Patch Rewards". Python, 11 stars.
  38. numcc: NU-MCC: Multiview Compressive Coding with Neighborhood Decoder and Repulsive UDF. Python, 9 stars.
  39. sdft: The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". Python, 6 stars.
  40. win: Python, 4 stars.
  41. SLRLA-optimizer: Python, 2 stars.
  42. MISA: Mutual Information Regularized Offline Reinforcement Learning (NeurIPS 2023). Python, 1 star.