SegPrompt: Boosting Open-world Segmentation via Category-level Prompt Learning
Muzhi Zhu1, Hengtao Li1, Hao Chen1, Chengxiang Fan1, Weian Mao2,1, Chenchen Jing1, Yifan Liu2, Chunhua Shen1
1Zhejiang University, 2The University of Adelaide
News
- [2023/07/14] Our work SegPrompt is accepted by the International Conference on Computer Vision (ICCV) 2023!
- [2023/08/30] We release our new benchmark LVIS-OW.
Installation
Please follow the installation instructions in Mask2Former.
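For reference, here is a minimal environment sketch. The torch 1.10.1 + cu113 build is inferred from the torch-scatter wheel listed under "Other requirements" below; the environment name and the CUDA-op build step follow Mask2Former's usual setup and are assumptions, not commands confirmed by this repo.

```bash
# Minimal setup sketch (versions inferred from the torch-scatter wheel below;
# adjust to match your CUDA toolkit).
conda create -n segprompt python=3.8 -y
conda activate segprompt
pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 \
  -f https://download.pytorch.org/whl/torch_stable.html
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
# Compile the deformable-attention CUDA ops used by the Mask2Former pixel decoder.
cd mask2former/modeling/pixel_decoder/ops && sh make.sh && cd -
```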
Other requirements
```bash
pip install torchshow
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.10.1+cu113.html
pip install lvis
pip install setuptools==59.5.0
pip install seaborn
```
LVIS-OW benchmark
Here we provide our proposed new benchmark LVIS-OW.
Dataset preparation
First, prepare the COCO and LVIS datasets and place them under $DETECTRON2_DATASETS, following the Detectron2 instructions.
The dataset structure is as follows:
```
datasets/
  coco/
    annotations/
      instances_{train,val}2017.json
    {train,val}2017/
  lvis/
    lvis_v1_{train,val}.json
```
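If Detectron2 cannot find the data, a quick way to verify the layout is the sketch below; the dataset root path is a placeholder (Detectron2 falls back to ./datasets when $DETECTRON2_DATASETS is unset).

```bash
# Point Detectron2 at the dataset root; ./datasets is the default when unset.
export DETECTRON2_DATASETS=/path/to/datasets
# Sanity-check that the expected annotation files are in place.
ls "$DETECTRON2_DATASETS/coco/annotations/instances_train2017.json"
ls "$DETECTRON2_DATASETS/lvis/lvis_v1_train.json"
```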
We reorganize the dataset and divide the categories into Known, Seen, and Unseen splits to better evaluate open-world models. The json files can be downloaded from here.
Alternatively, you can generate them directly from the COCO and LVIS json files with the following command:
```bash
bash tools/prepare_lvisow.sh
```
After you have successfully generated lvis_v1_train_ow.json and lvis_v1_val_resplit_r.json, you can refer to here to register the training and test sets; a registration sketch is also given below. Then you can use our benchmark for training and testing.
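As a rough illustration of what registration looks like with Detectron2's LVIS helper: the dataset names "lvis_ow_train"/"lvis_ow_val_r" below are placeholders, and the image root assumes LVIS images live in the COCO folders as in Detectron2's built-in LVIS splits. Check the repo's own registration code for the names its configs expect.

```python
# Hedged sketch: register the LVIS-OW splits with Detectron2.
from detectron2.data.datasets.lvis import register_lvis_instances

# Dataset names here are hypothetical; metadata dict left empty for brevity.
register_lvis_instances(
    "lvis_ow_train", {},
    "datasets/lvis/lvis_v1_train_ow.json", "datasets/coco/",
)
register_lvis_instances(
    "lvis_ow_val_r", {},
    "datasets/lvis/lvis_v1_val_resplit_r.json", "datasets/coco/",
)
```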
Evaluation on LVIS-OW
```bash
python tools/eval_lvis_ow.py --dt-json-file output/m2f_binary_lvis_ow/lvis_r/inference/lvis_instances_results.json
```
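The detection json is produced by an evaluation-only inference run beforehand. A rough sketch of the full pipeline follows; the training script and config path are assumptions carried over from Mask2Former-style repos, not confirmed names from this one.

```bash
# Hypothetical inference run to produce lvis_instances_results.json
# (script and config names are assumptions; use the repo's actual ones).
python train_net.py --config-file configs/m2f_binary_lvis_ow.yaml --eval-only \
  MODEL.WEIGHTS /path/to/checkpoint.pth OUTPUT_DIR output/m2f_binary_lvis_ow
# Score the resulting detections on the Known/Seen/Unseen splits.
python tools/eval_lvis_ow.py \
  --dt-json-file output/m2f_binary_lvis_ow/lvis_r/inference/lvis_instances_results.json
```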
Acknowledgement
We thank the following repos for their great work:
Cite our Paper
If you find this project useful for your research, please kindly cite our paper.
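A BibTeX entry assembled from the title, author list, and venue above; the citation key and field formatting are our own, so double-check against the official ICCV proceedings entry.

```bibtex
@inproceedings{zhu2023segprompt,
  title     = {SegPrompt: Boosting Open-world Segmentation via Category-level Prompt Learning},
  author    = {Zhu, Muzhi and Li, Hengtao and Chen, Hao and Fan, Chengxiang and
               Mao, Weian and Jing, Chenchen and Liu, Yifan and Shen, Chunhua},
  booktitle = {Proc. Int. Conf. Computer Vision (ICCV)},
  year      = {2023}
}
```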