Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching
Yang Liu1*, Muzhi Zhu1*, Hengtao Li1*, Hao Chen1, Xinlong Wang2, Chunhua Shen1
1Zhejiang University, 2Beijing Academy of Artificial Intelligence
Overview
Description
Powered by large-scale pre-training, vision foundation models exhibit significant potential in open-world image understanding. Even though individual models have limited capabilities, combining multiple such models properly can lead to positive synergies and unleash their full potential. In this work, we present Matcher, which segments anything with one shot by integrating an all-purpose feature extraction model and a class-agnostic segmentation model. Naively connecting the models results in unsatisfying performance, e.g., the models tend to generate matching outliers and false-positive mask fragments. To address these issues, we design a bidirectional matching strategy for accurate cross-image semantic dense matching and a robust prompt sampler for mask proposal generation. In addition, we propose a novel instance-level matching strategy for controllable mask merging. The proposed Matcher method delivers impressive generalization performance across various segmentation tasks, all without training. For example, it achieves 52.7% mIoU on COCO-20i for one-shot semantic segmentation, surpassing the state-of-the-art specialist model by 1.6%. In addition, our visualization results show open-world generality and flexibility on images in the wild.
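To make the bidirectional matching idea concrete, here is a minimal, illustrative sketch (not the actual Matcher implementation): mutual nearest-neighbor matching between patch features of the reference and target images, keeping only cycle-consistent matches that originate inside the one-shot reference mask. Feature sources (e.g., a DINOv2-style extractor) and all names are assumptions for illustration.

```python
import torch

def bidirectional_match(ref_feats, tgt_feats, ref_mask):
    """Illustrative mutual nearest-neighbor (bidirectional) matching sketch.

    ref_feats: (N, C) reference-image patch features (e.g., from an all-purpose extractor)
    tgt_feats: (M, C) target-image patch features
    ref_mask:  (N,) bool tensor marking reference patches inside the one-shot mask
    Returns indices of target patches that mutually match masked reference patches;
    these could then be sampled as point prompts for a class-agnostic segmenter.
    """
    ref = torch.nn.functional.normalize(ref_feats, dim=-1)
    tgt = torch.nn.functional.normalize(tgt_feats, dim=-1)
    sim = ref @ tgt.T                       # (N, M) cosine similarity

    fwd = sim.argmax(dim=1)                 # best target patch for each reference patch
    bwd = sim.argmax(dim=0)                 # best reference patch for each target patch

    # keep only cycle-consistent matches that start inside the reference mask,
    # which suppresses the matching outliers mentioned above
    mutual = bwd[fwd] == torch.arange(ref.shape[0])
    keep = mutual & ref_mask
    return fwd[keep]
```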
TODO
- Online Demo
- Release code and models
Demo
One-Shot Semantic Segmentation
One-Shot Object Part Segmentation
Cross-Style Object and Object Part Segmentation
Controllable Mask Output
Video Object Segmentation
vos_demo.mp4
License
The content of this project is licensed under the terms of the LICENSE file.
Citation
If you find this project useful in your research, please consider citing:
@article{liu2023matcher,
  title={Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching},
  author={Liu, Yang and Zhu, Muzhi and Li, Hengtao and Chen, Hao and Wang, Xinlong and Shen, Chunhua},
  journal={arXiv preprint arXiv:2305.13310},
  year={2023}
}