In this paper we present a new computer vision task, named video instance segmentation. The goal of this new task is simultaneous detection, segmentation and tracking of instances in videos. In other words, it is the first time that the image instance segmentation problem is extended to the video domain. To facilitate research on this new task, we propose a large-scale benchmark called YouTube-VIS, which consists of 2883 high-resolution YouTube videos, a 40-category label set and 131k high-quality instance masks. In addition, we propose a novel algorithm called MaskTrack R-CNN for this task. Our new method introduces a new tracking branch to Mask R-CNN to jointly perform the detection, segmentation and tracking tasks simultaneously. Finally, we evaluate the proposed method and several strong baselines on our new dataset. Experimental results clearly demonstrate the advantages of the proposed algorithm and reveal insight for future improvement. We believe the video instance segmentation task will motivate the community along the line of research for video understanding.
@inproceedings{yang2019video,
title={Video instance segmentation},
author={Yang, Linjie and Fan, Yuchen and Xu, Ning},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={5188--5197},
year={2019}
}
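For context on how the tracking branch described in the abstract is used at inference time, the sketch below illustrates the general idea: every detected instance carries an appearance embedding, and each new detection is matched to previously seen instances by combining embedding similarity with detection confidence, box overlap, and category consistency. This is a hedged NumPy illustration only; the function names, weights, and new-track threshold are assumptions, not the implementation used in this repository.

```python
import numpy as np

def box_iou(box, boxes):
    """IoU of one [x1, y1, x2, y2] box against an (M, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-12)

def assign_ids(embeds, scores, labels, boxes,
               memo_embeds, memo_labels, memo_boxes,
               w_det=1.0, w_iou=1.0, w_label=1.0, sim_thr=0.5):
    """For each detection, return the index of the matched stored instance,
    or -1 if it should start a new track (weights and threshold illustrative)."""
    if len(memo_embeds) == 0:
        return [-1] * len(embeds)
    # Softmax-normalized inner-product similarity between the new embeddings
    # (N, D) and the embeddings stored for previously seen instances (M, D).
    sim = embeds @ memo_embeds.T
    sim = np.exp(sim - sim.max(axis=1, keepdims=True))
    sim = sim / sim.sum(axis=1, keepdims=True)
    ids = []
    for i in range(len(embeds)):
        # Combine appearance similarity with detection confidence, spatial
        # overlap with the stored boxes, and category consistency.
        score = (np.log(sim[i] + 1e-12)
                 + w_det * np.log(scores[i] + 1e-12)
                 + w_iou * box_iou(boxes[i], memo_boxes)
                 + w_label * (memo_labels == labels[i]).astype(float))
        best = int(score.argmax())
        ids.append(best if sim[i, best] > sim_thr else -1)
    return ids
```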
As mentioned in Issue #6 of the MaskTrack R-CNN repository, the results are somewhat unstable across trials, ranging from 28 AP to 31 AP with the R-50-FPN backbone. The checkpoints provided below, evaluated on the YouTube-VIS 2019 validation set, are the best ones from two experiments.
| Method | Base detector | Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | AP | Config | Download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MaskTrack R-CNN | Mask R-CNN | R-50-FPN | pytorch | 12e | 1.61 | - | 30.2 | config | model \| log |
| MaskTrack R-CNN | Mask R-CNN | R-101-FPN | pytorch | 12e | 2.27 | - | 32.2 | config | model \| log |
| MaskTrack R-CNN | Mask R-CNN | X-101-FPN | pytorch | 12e | 3.69 | - | 34.7 | config | model \| log |
The checkpoints provided below, evaluated on the YouTube-VIS 2021 validation set, are the best ones from two experiments.
| Method | Base detector | Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | AP | Config | Download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MaskTrack R-CNN | Mask R-CNN | R-50-FPN | pytorch | 12e | 1.61 | - | 28.7 | config | model \| log |
| MaskTrack R-CNN | Mask R-CNN | R-101-FPN | pytorch | 12e | 2.27 | - | 31.3 | config | model \| log |
| MaskTrack R-CNN | Mask R-CNN | X-101-FPN | pytorch | 12e | 3.69 | - | 33.5 | config | model \| log |
For the tracking development environment setup, please refer to this document.
For tracking dataset preparation, please refer to this document.
Because parameters such as the learning rate in the default configuration file are set for 8 GPUs, we recommend training with 8 GPUs to reproduce the reported accuracy. You can start training with the following command; if you need to train with fewer GPUs, see the note after the command.
# Train MaskTrack R-CNN on the YouTube-VIS 2021 dataset with the following command.
# The number after the config file is the number of GPUs used. Here we use 8 GPUs.
bash tools/dist_train.sh configs/masktrack_rcnn/masktrack-rcnn_mask-rcnn_r50_fpn_8xb1-12e_youtubevis2021.py 8
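If you do train with fewer GPUs, the effective batch size shrinks and the learning rate usually needs to be scaled down accordingly (linear scaling rule). The snippet below is only a hedged sketch of such an override config: it assumes the base config follows the usual MMEngine layout with an `optim_wrapper` field, and the concrete value is a placeholder, not the repository's setting.

```python
# hypothetical_masktrack_rcnn_4gpu.py -- illustrative override config for 4 GPUs.
# Assumption: the base config defines an MMEngine-style `optim_wrapper` whose
# learning rate was tuned for a total batch size of 8 (8 GPUs x 1 image each).
_base_ = ['./masktrack-rcnn_mask-rcnn_r50_fpn_8xb1-12e_youtubevis2021.py']

# Linear scaling rule: halve the learning rate when the total batch size is halved.
# Replace 0.005 with half of whatever `lr` the base config actually uses.
optim_wrapper = dict(optimizer=dict(lr=0.005))
```

Such an override config would then be trained the same way, e.g. `bash tools/dist_train.sh hypothetical_masktrack_rcnn_4gpu.py 4`.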
If you want to know more about the detailed usage of train.py/dist_train.sh/slurm_train.sh, please refer to this document.
If you want to get the results on the YouTube-VIS val/test set, please use the following command to generate result files that can be used for submission. The results will be stored in ./youtube_vis_results.submission_file.zip; you can modify the save path in the test_evaluator of the config.
# The number after the config file is the number of GPUs used.
bash tools/dist_test_tracking.sh configs/masktrack_rcnn/masktrack-rcnn_mask-rcnn_r50_fpn_8xb1-12e_youtubevis2021.py 8 --checkpoint ${CHECKPOINT_PATH}
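The location of the generated submission file is controlled by the evaluator settings in the config. The snippet below is a minimal sketch of overriding that path; the evaluator type and field names are assumptions based on common MMEngine-style metrics, so verify them against the actual `test_evaluator` entry in the config you are using.

```python
# Sketch only: the exact evaluator type and argument names may differ in your
# config -- check its `test_evaluator` entry before relying on these fields.
test_evaluator = dict(
    type='YouTubeVISMetric',          # assumed metric class name
    metric='youtube_vis_ap',
    outfile_prefix='./work_dirs/my_youtube_vis_results',  # assumed save-path option
    format_only=True)                 # only format results for submission
```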
If you want to know more about the detailed usage of the testing scripts such as dist_test_tracking.sh, please refer to this document.
Use a single GPU to run inference on a video and save the output as a video.
python demo/mot_demo.py demo/demo_mot.mp4 configs/masktrack_rcnn/masktrack-rcnn_mask-rcnn_r50_fpn_8xb1-12e_youtubevis2021.py --checkpoint ${CHECKPOINT_PATH} --out vis.mp4
If you want to know more about the detailed usage of mot_demo.py, please refer to this document.