• Stars: 945
• Rank: 48,016 (top 1.0%)
• Language: Python
• License: Apache License 2.0
• Created: about 1 year ago
• Updated: 8 months ago


Repository Details

SAM-PT: Extending SAM to zero-shot video segmentation with point-based tracking.

Segment Anything Meets Point Tracking
Frano Rajič, Lei Ke, Yu-Wing Tai, Chi-Keung Tang, Martin Danelljan, Fisher Yu
ETH Zürich, HKUST, EPFL

[Figure: SAM-PT design]

We propose SAM-PT, an extension of the Segment Anything Model (SAM) for zero-shot video segmentation. Our work offers a simple yet effective point-based perspective in video object segmentation research. For more details, refer to our paper. Our code, models, and evaluation tools will be released soon. Stay tuned!
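The pipeline described above can be sketched in a few lines: track sparse query points through the video, then prompt a segmenter with the tracked point locations at each frame. The following is a minimal, runnable illustration rather than the released implementation: `track_points` is a hypothetical stub standing in for a point tracker such as PIPS, and `segment_with_points` is a hypothetical stub standing in for SAM's point-prompted mask prediction.

```python
import numpy as np

def track_points(frames, query_points):
    """Stub for a point tracker (e.g. PIPS): propagate each query point
    through every frame. Here we simply repeat the points (static-scene
    assumption) so the sketch runs without model weights."""
    return np.stack([query_points for _ in frames])  # (T, N, 2)

def segment_with_points(frame, points):
    """Stub for SAM prompted with positive point coordinates: mark a
    small disc around each point as foreground."""
    h, w = frame.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    yy, xx = np.ogrid[:h, :w]
    for x, y in points.astype(int):
        mask |= (xx - x) ** 2 + (yy - y) ** 2 <= 5 ** 2
    return mask

def sam_pt(frames, query_points):
    """SAM-PT-style loop: track sparse points across the video, then
    prompt the segmenter once per frame with the tracked points."""
    trajectories = track_points(frames, query_points)  # (T, N, 2)
    return [segment_with_points(f, pts) for f, pts in zip(frames, trajectories)]

# Toy usage: 4 blank frames, 2 query points on the first frame.
frames = [np.zeros((64, 64, 3), dtype=np.uint8) for _ in range(4)]
query = np.array([[32.0, 32.0], [20.0, 40.0]])
masks = sam_pt(frames, query)
```

The key design point this sketch reflects is that the video model carries only point trajectories between frames; dense masks are re-derived per frame by the promptable segmenter.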

Interactive Video Segmentation Demo

Annotators provide only a few points on the target object in the first video frame to obtain segmentation results for the entire video. Please visit our project page for more visualizations, including qualitative results on DAVIS 2017 videos and additional Avatar clips.

[Demo clips: street, bees, avatar, horsejump-high]

Documentation

Explore our step-by-step guides to get up and running:

  1. Getting Started: Learn how to set up your environment and run the demo.
  2. Prepare Datasets: Instructions on acquiring and prepping necessary datasets.
  3. Prepare Checkpoints: Steps to fetch model checkpoints.
  4. Running Experiments: Details on how to execute experiments.

Semi-supervised Video Object Segmentation (VOS)

SAM-PT is the first method to combine sparse point tracking with SAM for video segmentation. With this compact point-based mask representation, we achieve the highest $\mathcal{J\&F}$ scores on the DAVIS 2016 and 2017 validation subsets among methods that do not use any video segmentation data during training.
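For reference, the $\mathcal{J\&F}$ metric is the mean of region similarity $\mathcal{J}$ (mask IoU) and boundary accuracy $\mathcal{F}$ (an F-score over mask contours). Below is a simplified NumPy sketch, not the official DAVIS evaluation code: the official toolkit uses distance-tolerant contour matching, which is approximated here with a small binary dilation.

```python
import numpy as np

def region_similarity(pred, gt):
    """J: intersection-over-union between predicted and ground-truth masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def boundary_f_measure(pred, gt, tol=1):
    """F: F-score between mask boundaries, with a pixel tolerance `tol`.
    A minimal stand-in for the DAVIS contour metric."""
    def boundary(m):
        # 4-connected erosion; boundary = mask minus its erosion.
        p = np.pad(m.astype(bool), 1)
        er = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
              & p[1:-1, :-2] & p[1:-1, 2:])
        return m.astype(bool) & ~er

    def dilate(b, r):
        out = b.copy()
        for _ in range(r):
            p = np.pad(out, 1)
            out = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
                   | p[1:-1, :-2] | p[1:-1, 2:])
        return out

    bp, bg = boundary(pred), boundary(gt)
    if bp.sum() == 0 and bg.sum() == 0:
        return 1.0
    precision = (bp & dilate(bg, tol)).sum() / max(bp.sum(), 1)
    recall = (bg & dilate(bp, tol)).sum() / max(bg.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def j_and_f(pred, gt):
    """J&F: the mean of region similarity and boundary accuracy."""
    return (region_similarity(pred, gt) + boundary_f_measure(pred, gt)) / 2
```

A perfect prediction scores 1.0 on both components; a completely disjoint prediction scores 0.0.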

[Table 1: semi-supervised VOS results on the DAVIS 2016 and 2017 validation subsets]

Quantitative results in semi-supervised VOS on the validation subsets of DAVIS 2017, YouTube-VOS 2018, and MOSE 2023:

[Tables 3-5: results on the DAVIS 2017, YouTube-VOS 2018, and MOSE 2023 validation subsets]

Video Instance Segmentation (VIS)

On the validation split of UVO VideoDenseSet v1.0, SAM-PT outperforms TAM even though it was not trained on any video segmentation data. TAM is a concurrent approach combining SAM and XMem, where XMem was pre-trained on BL30K and trained on DAVIS and YouTube-VOS, but not on UVO. SAM-PT, in contrast, combines SAM with the PIPS point tracker, neither of which has been trained on any video segmentation task.

[Table 6: VIS results on the UVO VideoDenseSet v1.0 validation split]

Acknowledgments

We want to thank SAM, PIPS, HQ-SAM, MobileSAM, XMem, and Mask2Former for publicly releasing their code and pretrained models.

Citation

If you find SAM-PT useful in your research or if you refer to the results mentioned in our work, please star this repository and consider citing 📝:

@article{sam-pt,
  title   = {Segment Anything Meets Point Tracking},
  author  = {Rajič, Frano and Ke, Lei and Tai, Yu-Wing and Tang, Chi-Keung and Danelljan, Martin and Yu, Fisher},
  journal = {arXiv:2307.01197},
  year    = {2023}
}

More Repositories

  1. sam-hq: Segment Anything in High Quality [NeurIPS 2023] (Python, 3,569 stars)
  2. transfiner: Mask Transfiner for High-Quality Instance Segmentation, CVPR 2022 (Python, 525 stars)
  3. qd-3dt: Official implementation of Monocular Quasi-Dense 3D Object Tracking, TPAMI 2022 (Python, 511 stars)
  4. qdtrack: Quasi-Dense Similarity Learning for Multiple Object Tracking, CVPR 2021 (Oral) (Python, 380 stars)
  5. pcan: Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation, NeurIPS 2021 Spotlight (Python, 364 stars)
  6. MaskFreeVIS: Mask-Free Video Instance Segmentation [CVPR 2023] (Python, 356 stars)
  7. bdd100k-models: Model Zoo of the BDD100K Dataset (Python, 285 stars)
  8. idisc: iDisc: Internal Discretization for Monocular Depth Estimation [CVPR 2023] (Python, 279 stars)
  9. LiDAR_snow_sim: LiDAR snowfall simulation (Python, 172 stars)
  10. r3d3 (Python, 144 stars)
  11. P3Depth (Python, 123 stars)
  12. shift-dev: SHIFT Dataset DevKit, CVPR 2022 (Python, 103 stars)
  13. cascade-detr: [ICCV'23] Cascade-DETR: Delving into High-Quality Universal Object Detection (Python, 92 stars)
  14. tet: Implementation of Tracking Every Thing in the Wild, ECCV 2022 (Python, 69 stars)
  15. TrafficBots: Towards World Models for Autonomous Driving Simulation and Motion Prediction, ICRA 2023. Code is now available at https://github.com/zhejz/TrafficBots (51 stars)
  16. nutsh: A Platform for Visual Learning from Human Feedback (TypeScript, 42 stars)
  17. vmt: Video Mask Transfiner for High-Quality Video Instance Segmentation (ECCV 2022) (Jupyter Notebook, 29 stars)
  18. spc2: Instance-Aware Predictive Navigation in Multi-Agent Environments, ICRA 2021 (Python, 20 stars)
  19. CISS: Unsupervised condition-level adaptation for semantic segmentation (Python, 20 stars)
  20. shift-detection-tta: Continuous test-time adaptation algorithms for object detection on the SHIFT dataset (Python, 18 stars)
  21. vis4d: A modular library for visual 4D scene understanding (Python, 17 stars)
  22. dla-afa: Official implementation of Dense Prediction with Attentive Feature Aggregation, WACV 2023 (Python, 12 stars)
  23. project-template (Python, 4 stars)
  24. vis4d_cuda_ops (Cuda, 3 stars)
  25. vis4d-template: Vis4D Template (Shell, 3 stars)
  26. soccer-player (Python, 2 stars)