
Official implementation of SAM-Med2D

SAM-Med2D [Paper]


🌤️ Highlights

  • 🏆 Collected and curated the largest medical image segmentation dataset to date (4.6M images and 19.7M masks) for model training.
  • 🏆 The most comprehensive fine-tuning based on Segment Anything Model (SAM).
  • 🏆 Comprehensive evaluation of SAM-Med2D on large-scale datasets.

🔥 Updates

  • (2023.09.14) Train code release
  • (2023.09.02) Test code release
  • (2023.08.31) Pre-trained model release
  • (2023.08.31) Paper release
  • (2023.08.26) Online Demo release

👉 Dataset

SAM-Med2D is trained and tested on a dataset that includes 4.6M images and 19.7M masks. This dataset covers 10 medical data modalities, 4 anatomical structures + lesions, and 31 major human organs. To our knowledge, this is currently the largest and most diverse medical image segmentation dataset in terms of quantity and coverage of categories.

[Figure: overview of the SAM-Med2D dataset]

👉 Framework

The pipeline of SAM-Med2D. We freeze the image encoder and incorporate learnable adapter layers in each Transformer block to acquire domain-specific knowledge in the medical field. We fine-tune the prompt encoder using point, Bbox, and mask information, while updating the parameters of the mask decoder through interactive training.
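The adapter mechanism described above can be sketched as a small bottleneck module with a residual connection. The sketch below is a plain-NumPy schematic with hypothetical dimensions, not the repository's actual implementation; it only illustrates the shape of the computation that each frozen Transformer block gains:

```python
import numpy as np

def gelu(x):
    # Tanh approximation of GELU, the usual adapter nonlinearity
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

class Adapter:
    """Schematic bottleneck adapter: down-project, nonlinearity,
    up-project, then add back to the (frozen) block's output."""
    def __init__(self, dim, bottleneck, seed=0):
        rng = np.random.default_rng(seed)
        self.w_down = rng.standard_normal((dim, bottleneck)) * 0.02
        # Zero-init the up-projection so the adapter starts as an identity
        # and training can only move it away from the frozen behaviour.
        self.w_up = np.zeros((bottleneck, dim))

    def __call__(self, x):
        return x + gelu(x @ self.w_down) @ self.w_up
```

Only the adapter weights (and, per the text above, the prompt encoder and mask decoder) would receive gradients; the image encoder itself stays frozen.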

[Figure: the SAM-Med2D pipeline]

👉 Results

Quantitative comparison of different methods on the test set:

| Model | Resolution | Bbox (%) | 1 pt (%) | 3 pts (%) | 5 pts (%) | FPS | Checkpoint |
|---|---|---|---|---|---|---|---|
| SAM | $256\times256$ | 61.63 | 18.94 | 28.28 | 37.47 | 51 | Official |
| SAM | $1024\times1024$ | 74.49 | 36.88 | 42.00 | 47.57 | 8 | Official |
| FT-SAM | $256\times256$ | 73.56 | 60.11 | 70.95 | 75.51 | 51 | FT-SAM |
| SAM-Med2D | $256\times256$ | 79.30 | 70.01 | 76.35 | 78.68 | 35 | SAM-Med2D |
Generalization validation on 9 MICCAI 2023 datasets, where "*" denotes that the adapter layer of SAM-Med2D is dropped in the test phase:

| Datasets | SAM (Bbox, %) | SAM-Med2D (Bbox, %) | SAM-Med2D* (Bbox, %) | SAM (1 pt, %) | SAM-Med2D (1 pt, %) | SAM-Med2D* (1 pt, %) |
|---|---|---|---|---|---|---|
| CrossMoDA23 | 78.98 | 70.51 | 84.62 | 18.49 | 46.08 | 73.98 |
| KiTS23 | 84.80 | 76.32 | 87.93 | 38.93 | 48.81 | 79.87 |
| FLARE23 | 86.11 | 83.51 | 90.95 | 51.05 | 62.86 | 85.10 |
| ATLAS2023 | 82.98 | 73.70 | 86.56 | 46.89 | 34.72 | 70.42 |
| SEG2023 | 75.98 | 68.02 | 84.31 | 11.75 | 48.05 | 69.85 |
| LNQ2023 | 72.31 | 63.84 | 81.33 | 3.81 | 44.81 | 59.84 |
| CAS2023 | 52.34 | 46.11 | 60.38 | 0.45 | 28.79 | 15.19 |
| TDSC-ABUS2023 | 71.66 | 64.65 | 76.65 | 12.11 | 35.99 | 61.84 |
| ToothFairy2023 | 65.86 | 57.45 | 75.29 | 1.01 | 32.12 | 47.32 |
| Weighted sum | 85.35 | 81.93 | 90.12 | 48.08 | 60.31 | 83.41 |

👉 Visualization

[Figure: qualitative visualization results]

👉 Train

Prepare your own dataset following the samples in SAM-Med2D/data_demo, replacing them to match your scenario. You need to generate the image2label_train.json file before running train.py.
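A minimal sketch of generating that file, assuming image2label_train.json maps each image path to the list of its mask paths and that masks are named `<image-stem>_<k>.png` — both assumptions; check data_demo for the exact layout your data should follow:

```python
import json
from pathlib import Path

def build_image2label(image_dir, mask_dir, out_path):
    """Map every image to its mask files and dump the result as JSON.
    The naming convention (stem_<k>.png) is hypothetical."""
    mapping = {}
    for img in sorted(Path(image_dir).glob("*.png")):
        masks = sorted(str(p) for p in Path(mask_dir).glob(img.stem + "_*.png"))
        if masks:  # skip images without any annotated mask
            mapping[str(img)] = masks
    with open(out_path, "w") as f:
        json.dump(mapping, f, indent=2)
    return mapping
```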

If you want to use mixed-precision training, please install Apex. If you prefer not to install Apex, comment out the line `from apex import amp` and set `use_amp` to False.
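The fallback described above can also be expressed as a guarded import, so the same script runs with or without Apex installed (a sketch of the pattern, not the repository's exact code):

```python
# Guarded import: enable AMP only when Apex is actually available.
try:
    from apex import amp  # Apex mixed-precision utilities
    use_amp = True
except ImportError:
    amp = None
    use_amp = False  # equivalent to commenting out the import, as noted above

# Later, the usual Apex pattern would be:
#   model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
# guarded by `if use_amp:`.
```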

```bash
cd ./SAM-Med2D
python train.py
```
  • work_dir: Working directory for training. Default: workdir.
  • image_size: Input image size. Default: 256.
  • mask_num: Number of masks sampled per image. Default: 5.
  • data_path: Dataset directory, e.g. data_demo.
  • resume: Path to a pretrained weight file; if set, sam_checkpoint is ignored.
  • sam_checkpoint: Path to the SAM checkpoint to load.
  • iter_point: Number of iterative runs of the mask decoder.
  • multimask: Whether to output multiple masks. Default: True.
  • encoder_adapter: Whether to fine-tune the adapter layers; set to False to fine-tune only the decoder.
  • use_amp: Whether to use mixed-precision training.
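For illustration, a small hypothetical helper that assembles a train.py invocation from the options listed above (the flag names follow that list; the specific values shown are just the documented defaults):

```python
def build_cmd(script, **opts):
    """Turn keyword options into a '--flag value' command list,
    suitable for subprocess.run or copy-pasting into a shell."""
    cmd = ["python", script]
    for key, value in opts.items():
        cmd += [f"--{key}", str(value)]
    return cmd

train_cmd = build_cmd(
    "train.py",
    work_dir="workdir",
    image_size=256,
    mask_num=5,
    data_path="data_demo",
    encoder_adapter=True,
)
```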

👉 Test

Prepare your own dataset following the samples in SAM-Med2D/data_demo, replacing them to match your scenario. You need to generate the label2image_test.json file before running test.py.

```bash
cd ./SAM-Med2D
python test.py
```
  • work_dir: Working directory for testing. Default: workdir.
  • batch_size: Default: 1.
  • image_size: Input image size. Default: 256.
  • boxes_prompt: Use Bbox prompts to obtain segmentation results.
  • point_num: Number of point prompts. Default: 1.
  • iter_point: Number of iterations for point prompts.
  • sam_checkpoint: Path to the SAM or SAM-Med2D checkpoint to load.
  • encoder_adapter: Set to True when using SAM-Med2D's pretrained weights.
  • save_pred: Whether to save prediction results.
  • prompt_path: Path to a fixed prompt file; if None, prompts are generated automatically during prediction.

👉 Deploy

Export to ONNX

  • Export the encoder model:

```bash
python3 scripts/export_onnx_encoder_model.py --sam_checkpoint /path/to/sam-med2d_b.pth --output /path/to/sam-med2d_b.encoder.onnx --model-type vit_b --image_size 256 --encoder_adapter True
```

  • Export the decoder model:

```bash
python3 scripts/export_onnx_model.py --checkpoint /path/to/sam-med2d_b.pth --output /path/to/sam-med2d_b.decoder.onnx --model-type vit_b --return-single-mask
```

  • Inference with ONNX Runtime:

```bash
# cd examples/SAM-Med2D-onnxruntime
python3 main.py --encoder_model /path/to/sam-med2d_b.encoder.onnx --decoder_model /path/to/sam-med2d_b.decoder.onnx
```
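Before calling the exported encoder, the input image has to be resized and normalized into the 1×3×256×256 tensor layout the ONNX graph expects. A rough NumPy sketch (nearest-neighbour resize for brevity; the mean/std are the standard ImageNet values and are an assumption here — check the repository's own preprocessing):

```python
import numpy as np

def preprocess(image, size=256,
               mean=(123.675, 116.28, 103.53),
               std=(58.395, 57.12, 57.375)):
    """Resize an HxWx3 uint8 image (nearest-neighbour, for brevity)
    and normalize it into a 1x3xSxS float32 NCHW tensor."""
    h, w = image.shape[:2]
    ys = np.arange(size) * h // size   # nearest source row per output row
    xs = np.arange(size) * w // size   # nearest source column per output column
    resized = image[ys][:, xs].astype(np.float32)
    normed = (resized - np.array(mean, dtype=np.float32)) / np.array(std, dtype=np.float32)
    return normed.transpose(2, 0, 1)[None]  # HWC -> NCHW with batch dim
```

The resulting array can then be passed to the encoder session created with `onnxruntime.InferenceSession(...)`, and the returned embedding fed to the decoder model together with the prompt inputs.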

🚀 Try SAM-Med2D

  • 🏆 Gradio Online: the online demo is available on OpenXLab.
  • 🏆 Notebook Demo: run predictor_example.ipynb locally to view the predictions generated by different prompts.
  • 🏆 Gradio Local: deploy app.ipynb locally and upload your own test cases.
  • Notes: Feedback on good cases 👍 and bad cases 👎 is welcome in the issues.

🗓️ Ongoing

  • Dataset release
  • Train code release
  • Test code release
  • Pre-trained model release
  • Paper release
  • Online Demo release

🎫 License

This project is released under the Apache 2.0 license.

💬 Discussion Group

If you have any questions about SAM-Med2D, add the GV Assistant on WeChat to join the group discussion:

[Image: WeChat QR code]

🤝 Acknowledgement

  • We thank all medical workers and dataset owners for making public datasets available to the community.
  • Thanks to the following open-source projects: Segment Anything

👋 Hiring & Global Collaboration

  • Hiring: We are hiring researchers, engineers, and interns in General Vision Group, Shanghai AI Lab. If you are interested in Medical Foundation Models and General Medical AI, including designing benchmark datasets, general models, evaluation systems, and efficient tools, please contact us.
  • Global Collaboration: We're on a mission to redefine medical research, aiming for a more universally adaptable model. Our passionate team is delving into foundational healthcare models, promoting the development of the medical community. Collaborate with us to increase competitiveness, reduce risk, and expand markets.
  • Contact: Junjun He([email protected]), Jin Ye([email protected]), and Tianbin Li ([email protected]).

Reference

@misc{cheng2023sammed2d,
      title={SAM-Med2D}, 
      author={Junlong Cheng and Jin Ye and Zhongying Deng and Jianpin Chen and Tianbin Li and Haoyu Wang and Yanzhou Su and
              Ziyan Huang and Jilong Chen and Lei Jiang and Hui Sun and Junjun He and Shaoting Zhang and Min Zhu and Yu Qiao},
      year={2023},
      eprint={2308.16184},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
