MaGGIe: Masked Guided Gradual Human Instance Matting
works [17, 32, 34, 45, 53] constrain the temporal consistency at feature maps between frames. Since alpha matte values are very sensitive, feature-level aggregation alone does not guarantee consistent outputs. Some methods [21, 50] in video segmentation and matting compute the incoherent regions to update values across frames. We propose a temporal consistency module that works in both feature and output spaces to produce consistent alpha mattes.

Instance matting [49] is an extension of the matting problem where there exist multiple α_i, i ∈ 0..N, each belonging to one foreground instance. This problem adds another constraint at each spatial location (x, y): Σ_i α_i(x, y) = 1. The main prior work InstMatt [49] handles multi-instance images by predicting each alpha matte separately from binary guided masks before an instance refinement at the end. Although this approach produces impressive results on both synthesized and natural image benchmarks, the efficiency and accuracy of this model are unexplored in video processing. The separated prediction for each instance yields inefficiency in the architecture, which makes it costly to adapt to video input. Another work [30] concurrent with ours extends InstMatt to process video input, but the complexity and efficiency of the network are unexplored. Fig. 1 illustrates the comparison between our MaGGIe and InstMatt when working with video. Our work improves not only the accuracy but also the consistency between frames when errors occur in the guidance.

Besides temporal consistency, when extending instance matting to videos containing a large number of frames and instances, careful network design to prevent an explosion in computational cost is also a key challenge. In this work, we propose several adjustments to the popular mask-guided progressive refinement architecture [56]. Firstly, by using the mask guidance embedding inspired by AOT [55], the input size reduces to a constant number of channels. Secondly, with the advancement of transformer attention in various vision tasks [40–42], we inherit query-based instance segmentation [7, 19, 23] to predict instance mattes in one forward pass instead of separate estimation. It also replaces the complex refinement in previous work with interaction between instances through an attention mechanism. To save the high cost of transformer attention, we only perform multi-instance prediction at the coarse level and adopt progressive refinement at multiple scales [18, 56]. However, using full convolution for the refinement as in previous works is inefficient, since less than 10% of values are updated at each scale, as also mentioned in [50]. Replacing it with sparse convolution [36] saves the inference cost significantly and keeps the complexity of the algorithm constant, since only the locations of interest are refined. Nevertheless, the lack of information at a larger scale when using sparse convolution can cause a dominance problem, which leads to the higher-scale prediction copying the lower outputs without adding fine-grained details. We propose an instance guidance method that lets the coarser prediction guide, but not contribute to, the finer alpha matte.

In addition to the framework design, we propose a new training video dataset and benchmarks for instance-aware matting. Besides the new large-scale, high-quality synthesized image instance matting set, an extension of the current instance image matting benchmark adds more robustness with different guidance quality. For video input, our synthesized training set and benchmark are constructed from various public instance-agnostic datasets with three levels of difficulty.

In summary, our contributions include:
• A highly efficient instance matting framework with mask guidance that has all instances interacting and processed in a single forward pass.
• A novel approach that considers feature-matte levels to maintain matte temporal consistency in videos.
• Diverse training datasets and robust benchmarks for image and video instance matting that bridge the gap between synthesized and natural cases.

2. Related Works

There are many ways to categorize matting methods; here we review previous works based on their primary input types. A brief comparison of others and our MaGGIe is shown in Table 1.

Image Matting. Traditional matting methods [4, 24, 25] rely on color sampling to estimate foreground and background, often resulting in noisy outcomes due to limited high-level object features. Advanced deep learning-based methods [9, 11, 31, 37, 46, 47, 54] have significantly improved results by integrating image and trimap inputs or focusing on high-level and detailed feature learning. However, these methods often struggle with trimap inaccuracies and assume single-object scenarios. Recent approaches [5, 6, 22] require only image inputs but face challenges with multiple salient objects. MGM [56] and its extension MGM-in-the-wild [39] introduce binary mask-based matting, addressing multi-salient object issues and reducing trimap dependency. InstMatt [49] further customizes this approach for multi-instance scenarios with a complex refinement algorithm. Our work extends these developments, focusing on efficient, end-to-end instance matting with binary mask guidance. Image matting also benefits from diverse datasets [22, 26, 27, 29, 33, 50, 54], supplemented by background augmentation from sources like BG20K [29] or COCO [35]. Our work also leverages currently available datasets to concretize a robust benchmark for human-masked guided instance matting.
Video Matting. Temporal consistency is a key challenge in video matting. Trimap-propagation methods [17, 45, 48] and background knowledge-based approaches like BGMv2 [33] aim to reduce trimap dependency. Recent techniques [28, 32, 34, 53, 57] incorporate Conv-GRU, attention memory matching, or transformer-based architectures for temporal feature aggregation. SparseMat [50] uniquely focuses on fusing outputs for consistency. Our approach builds on these foundations, combining feature and output fusion for enhanced temporal consistency in alpha mattes. There is a lack of video matting datasets due to the difficulty of data collection. VideoMatte240K [33] and VM108 [57] focus on composited videos, while CRGNN [52] is the only one offering natural videos for human matting. To address the gap in instance-aware video matting datasets, we propose adapting existing public datasets for training and evaluation, particularly for human subjects.

Table 1. Comparing MaGGIe with previous works in image and video matting. Our work is the first instance-aware framework producing alpha mattes from binary masks with both feature and output temporal consistency in constant processing time.

Method | Avenue | Guidance | Instance-awareness | Temp. aggre. (Feat.) | Temp. aggre. (Matte) | Time complexity
MGM [39, 56] | CVPR21+23 | Mask | | | | O(n)
InstMatt [49] | CVPR22 | Mask | ✓ | | | O(n)
TCVOM [57] | MM21 | - | - | ✓ | - | -
OTVM [45] | ECCV22 | 1st trimap | | ✓ | | O(n)
FTP-VM [17] | CVPR23 | 1st trimap | | ✓ | | O(n)
SparseMat [50] | CVPR23 | No | | | ✓ | O(n)
MaGGIe | - | Mask | ✓ | ✓ | ✓ | ≈ O(1)

3. MaGGIe

We introduce our efficient instance matting framework guided by instance binary masks, structured into two parts. The first, Sec. 3.1, details our novel architecture to maintain accuracy and efficiency. The second, Sec. 3.2, describes our approach for ensuring temporal consistency across frames in video processing.

3.1. Efficient Masked Guided Instance Matting

Our framework, depicted in Fig. 2, processes images or video frames I ∈ [0, 255]^{T×3×H×W} with corresponding binary instance guidance masks M ∈ {0, 1}^{T×N×H×W}, and then predicts alpha mattes A ∈ [0, 1]^{T×N×H×W} for each instance per frame. Here, T, N, H, W represent the number of frames, the number of instances, and the input resolution, respectively. Each spatial-temporal location (x, y, t) in M is a one-hot vector {0, 1}^N highlighting the instance it belongs to. The pipeline comprises five stages: (1) input construction; (2) image feature extraction; (3) coarse instance alpha matte prediction; (4) progressive detail refinement; and (5) coarse-to-fine fusion.

Input Construction. The input I′ ∈ R^{T×(3+Ce)×H×W} to our model is the concatenation of the input image I and the guidance embedding E ∈ R^{T×Ce×H×W} constructed from M by the ID Embedding layer [55]. More details about transforming M to E are in the supplementary material.

Image Feature Extraction. We extract feature maps F_s ∈ R^{T×Cs×H/s×W/s} from I′ with feature-pyramid networks. As shown in the left part of Fig. 2, there are four scales s = 1, 2, 4, 8 for our coarse-to-fine matting pipeline.

Coarse instance alpha matte prediction. Our MaGGIe adopts transformer-style attention to predict instance mattes at the coarsest features F_8. We revisit the scaled dot-product attention mechanism in Transformers [51]. Given queries Q ∈ R^{L×C}, keys K ∈ R^{S×C}, and values V ∈ R^{S×C}, the scaled dot-product attention is defined as:

\text{Attention}(\mathbf{Q}, \mathbf{K}, \mathbf{V}) = \text{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^\top}{\sqrt{C}}\right)\mathbf{V}. \qquad (1)

In cross-attention (CA), Q and (K, V) originate from different sources, whereas in self-attention (SA), they share similar information.

In our Instance Matte Decoder, the organization of CA and SA blocks, inspired by SAM [23], is depicted in the bottom right of Fig. 2. The downscaled guidance masks M_8 also participate as an additional embedding for image features in the attention procedures. The coarse alpha matte A_8 is computed as the dot product between instance tokens T = {T_i | 1 ≤ i ≤ N} ∈ R^{N×C8} and the enriched feature map F̄_8, with a sigmoid activation applied. These components are used in the following steps of matte detail refinement.

Progressive Detail Refinement. From the coarse instance alpha matte, we leverage the Progressive Refinement [56] to improve the details at uncertain locations U = {u_p = (x, y, t, i) | 0 < A_8(u_p) < 1} ∈ N^{P×4} with some highly efficient modifications. It is necessary to transform the enriched dense features F̄_8 into instance-specific features X_8 for the instance-wise refinement. However, to save memory and computational costs, only features at the uncertain locations U are transformed:

\mathbf{X}_8(x, y, t, i) = \text{MLP}(\mathbf{\bar{F}}_8(x, y, t) \times \mathbf{T}_i). \qquad (2)

To combine the coarser instance-specific sparse features X_8 with the finer image features F_4, we propose the Instance Guidance (IG) module. As described in the top right of Fig. 2, this module first increases the spatial scale of X_8 to obtain X′_4 by an inverse sparse convolution. For each entry p, we compute a guidance score G ∈ [0, 1]^{C4}, which is then channel-wise multiplied with F_4 to produce detailed sparse instance-specific features X_4:

\mathbf{X}_4(p) = \mathcal{G}\left(\left\{\mathbf{X}'_4(p); \mathbf{F}_4(p)\right\}\right) \times \mathbf{F}_4(p), \qquad (3)

where {;} denotes concatenation along the feature dimension, and G is a series of sparse convolutions with sigmoid activation.
Figure 2. Overall pipeline of MaGGIe. This framework processes frame sequences I and instance masks M to generate per-instance alpha mattes A′ for each frame. It employs progressive refinement and sparse convolutions for accurate mattes in multi-instance scenarios, optimizing computational efficiency. The subfigures on the right illustrate the Instance Matte Decoder and the Instance Guidance, where we use mask guidance to predict coarse instance mattes and guide detail refinement by deep features, respectively. (Best viewed in color and zoomed view.)

The sparse features X_4 are then aggregated with other dense features F_2, F_1 at the corresponding indices to obtain X_2, X_1. At each scale, we predict alpha mattes A_4, A_1 with gradual detail improvement. More aggregation and sparse matting head details are in the supplementary material.

Coarse-to-fine fusion. This stage combines alpha mattes of different scales in a progressive way (PRM): A_8 → A_4 → A_1 to obtain A. At each step, only values at uncertain locations belonging to unknown mask regions are refined.

Training Losses. In addition to standard losses (L1 for reconstruction, Laplacian L_lap for detail, Gradient L_grad for smoothness), we supervise the affinity score matrix Aff between instance tokens T (as Q) and image feature maps F (as K, V) by the attention loss L_att. Additionally, our network's progressive refinement process necessitates accurate coarse-level predictions to determine U accurately. We assign customized weights W_8 to the losses at scale s = 8 to prioritize uncertain locations. More details about L_att and W_8 are in the supplementary material.

3.2. Feature-Matte Temporal Consistency

Feature Temporal Consistency. We aggregate features across frames with a bi-directional Conv-GRU over a temporal window {t − k, ..., t + k}, as shown in Fig. 2. For simplicity, we set k = 1 with an overlap of 2 frames. The initial hidden state H_0 is zeroed, and H_{t−k−1} from the previous window aids the current one. This module fuses the feature map at time t with the two consecutive frames, averaging forward and backward aggregations. The resultant temporal features are used to predict the coarse alpha matte A_8.

Alpha Matte Temporal Consistency. We propose fusing frame mattes by predicting their temporal sparsity. Unlike the previous method [50] using image processing kernels, we leverage deep features for this prediction. A shallow convolutional network with a sigmoid activation processes the stacked feature maps F̄_8 at t − 1 and t, outputting the alpha matte discrepancy between the two frames, ∆(t) ∈ {0, 1}^{H×W}. For each frame t, with ∆(t) and ∆(t + 1), we compute the forward propagation A^f and backward propagation A^b to reject the propagation at misalignment regions and obtain the temporal-aware output A^temp. The supplementary material provides more details about the implementation.

Training Losses. Besides the dtSSD loss for temporal consistency, we introduce an L1 loss for the alpha matte discrepancy. The loss compares the predicted ∆(t) with the ground truth ∆_gt(t) = max_i(|A_gt(t − 1, i) − A_gt(t, i)| > β), where β = 0.001, to simplify the problem to binary pixel classification.
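Below is a small PyTorch sketch of the discrepancy supervision just described; the tensor shapes and helper name are illustrative assumptions rather than the paper's code.

```python
import torch

# Sketch of the matte-discrepancy loss: the binary ground truth Δ_gt(t) marks pixels where
# any instance's alpha changes by more than β between frames t-1 and t, and the predicted
# map Δ(t) is trained with an L1 loss against it.
def discrepancy_loss(pred_delta: torch.Tensor,     # predicted Δ(t), shape (H, W), in [0, 1]
                     alpha_gt_prev: torch.Tensor,  # A_gt(t-1, ·), shape (N, H, W)
                     alpha_gt_curr: torch.Tensor,  # A_gt(t, ·),   shape (N, H, W)
                     beta: float = 0.001) -> torch.Tensor:
    # Δ_gt(t) = max_i( |A_gt(t-1, i) - A_gt(t, i)| > β )
    delta_gt = ((alpha_gt_curr - alpha_gt_prev).abs() > beta).float().amax(dim=0)
    return torch.nn.functional.l1_loss(pred_delta, delta_gt)
```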
Figure 3. Variations of Masks for the Same Image in the M-HIM2K Dataset. Masks generated using R50-C4-3x, R50-FPN-3x, and R101-FPN-400e MaskRCNN models trained on COCO. (Best viewed in color.)

…synthetic and natural sets to assess the model's robustness and generalization.

4.1. Image Instance Matting

We derived the Image Human Instance Matting 50K (I-HIM50K) training dataset from HHM50K [50], featuring multiple human subjects. This dataset includes 49,737 synthesized images with 2-5 instances each, created by compositing human foregrounds with random backgrounds and modifying alpha mattes for guidance binary masks. For benchmarking, we used HIM2K [49] and created the Mask HIM2K (M-HIM2K) set to test robustness against varying mask qualities from available instance segmentation models (as shown in Fig. 3). Details on the generation process are available in the supplementary material.

4.2. Video Instance Matting

Our video instance matting dataset, synthesized from VM108 [57], VideoMatte240K [33], and CRGNN [52], includes the subsets V-HIM2K5 for training and V-HIM60 for testing. We categorized the dataset into three difficulty levels based on instance overlap. Table 2 shows some details of the synthesized datasets. Masks for training involved dilation and erosion on binarized alpha mattes. For testing, masks are generated using XMem [8]. Further details on dataset synthesis and difficulty levels are provided in the supplementary material.

Table 2. Details of Video Instance Matting Training and Testing Sets. V-HIM2K5 for training and V-HIM60 for model evaluation. Each video contains 30 frames.

5. Experiments

We developed our model using PyTorch [20] and the sparse convolution library Spconv [10]. Our codebase is built upon the publicly available implementations of MGM [56] and OTVM [45]. In Sec. 5.1, we discuss the results when pre-training on the image matting dataset. The performance on the video dataset is shown in Sec. 5.2. All training settings are reported in the supplementary material.

5.1. Pre-training on image data

Metrics. Our evaluation metrics include Mean Absolute Differences (MAD), Mean Squared Error (MSE), Gradient (Grad), and Connectivity (Conn). We also separately computed these metrics for the foreground and unknown regions, denoted as MADf and MADu, by estimating the trimap on the ground truth. Since our images contain multiple instances, metrics were calculated for each instance individually and then averaged. We did not use the IMQ from InstMatt, as our focus is not on instance detection.

Ablation studies. Each ablation study setting was trained for 10,000 iterations with a batch size of 96. We first assessed the performance of the embedding layer versus stacked masks and image inputs in Table 3. The mean results on M-HIM2K are reported, with full results in the supplementary material. The embedding layer showed improved performance, particularly effective with Ce = 3. We also evaluated the impact of using L_att and W_8 in training in Table 4. L_att significantly enhanced model performance, while W_8 provided a slight boost.

Table 3. Superiority of Mask Embedding Over Stacking on HIM2K+M-HIM2K. Our mask embedding technique demonstrates enhanced performance compared to traditional stacking methods.

Mask input | Composition: MAD, Grad, Conn | Natural: MAD, Grad, Conn
Stacked | 27.01, 16.80, 15.72 | 39.29, 16.44, 23.26
Embedded (Ce = 1) | 19.18, 13.00, 11.16 | 33.60, 13.44, 19.18
Embedded (Ce = 2) | 21.74, 14.39, 12.69 | 35.16, 14.51, 20.40
Embedded (Ce = 3) | 17.75, 12.52, 10.32 | 33.06, 13.11, 17.30
Embedded (Ce = 5) | 24.79, 16.19, 14.58 | 34.25, 15.66, 19.70

Table 4. Optimal Performance with L_att and W_8 on HIM2K+M-HIM2K. Utilizing both L_att and W_8 leads to superior results.
Table 5. Comparative Performance on HIM2K+M-HIM2K. Our method outperforms baselines, with average results (large numbers)
and standard deviations (small numbers) on the benchmark. The upper group represents methods predicting each instance separately, while
the lower models utilize instance information. Gray rows denote public weights trained on external data, not retrained on I-HIM50K.
MGM† denotes the MGM-in-the-wild. MGM⋆ refers to MGM with all masks stacked with the input image. Models are tested on images
with a short side of 576px. Bold and underline highlight the best and second-best models per metric, respectively.
Method | Composition set: MAD, MSE, Grad, Conn, MADf, MADu | Natural set: MAD, MSE, Grad, Conn, MADf, MADu
Instance-agnostic
MGM† [39] 23.15 (1.5) 14.76 (1.3) 12.75 (0.5) 13.30 (0.9) 64.39 (4.5) 309.38 (12.0) 32.52 (6.7) 18.80 (6.0) 12.52 (1.2) 18.51 (18.5) 65.20 (15.9) 179.76 (23.9)
MGM [56] 15.32 (0.6) 9.13 (0.5) 9.94 (0.2) 8.83 (0.3) 33.54 (1.9) 261.43 (4.0) 30.23 (3.6) 17.40 (3.3) 10.53 (0.5) 15.70 (1.9) 63.16 (13.0) 167.35 (12.1)
SparseMat [50] 21.05 (1.2) 14.55 (1.0) 14.64 (0.5) 12.26 (0.7) 45.19 (2.9) 352.95 (14.2) 35.03 (5.1) 21.79 (4.7) 15.85 (1.2) 18.50 (3.1) 67.82 (15.2) 212.63 (20.8)
Instance-aware
InstMatt [49] 12.85 (0.2) 5.71 (0.2) 9.41 (0.1) 7.19 (0.1) 22.24 (1.3) 255.61 (2.0) 26.76 (2.5) 12.52 (2.0) 10.20 (0.3) 13.81 (1.1) 48.63 (6.8) 161.52 (6.9)
InstMatt [49] 16.99 (0.7) 9.70 (0.5) 10.93 (0.3) 9.74 (0.5) 53.76 (3.0) 286.90 (7.0) 28.16 (4.5) 14.30 (3.7) 10.98 (0.7) 14.63 (2.0) 57.83 (12.1) 168.74 (15.5)
MGM⋆ 14.31 (0.4) 7.89 (0.4) 10.12 (0.2) 8.01 (0.2) 41.94 (3.1) 251.08 (3.6) 31.38 (3.3) 18.38 (3.1) 10.97 (0.4) 14.75 (1.4) 53.89 (9.6) 165.13 (10.6)
MaGGIe (ours) 12.93 (0.3) 7.26 (0.3) 8.91 (0.1) 7.37 (0.2) 19.54 (1.0) 235.95 (3.4) 27.17 (3.3) 16.09 (3.2) 9.94 (0.6) 13.42 (1.4) 49.52 (8.0) 146.71 (11.6)
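To make the per-instance protocol described under Metrics (Sec. 5.1) concrete, here is a minimal sketch of MAD computed instance-by-instance and then averaged; the ×1000 reporting scale is a common convention and an assumption here, not taken from the paper.

```python
import torch

# Sketch of the per-instance evaluation protocol: compute MAD per instance (and frame),
# then average; the 1e3 scale mirrors how matting errors are usually reported.
def mean_absolute_difference(pred: torch.Tensor, gt: torch.Tensor) -> float:
    """pred, gt: alpha mattes of shape (T, N, H, W) with values in [0, 1]."""
    per_instance = (pred - gt).abs().mean(dim=(2, 3))   # MAD per frame and instance: (T, N)
    return per_instance.mean().item() * 1e3             # average over instances/frames, scaled
```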
Quantitative results. We evaluated our model against previous baselines after retraining them on our I-HIM50K dataset. Besides the original works, we modified SparseMat's first layer to accept a single mask input. Additionally, we expanded MGM to handle up to 10 instances, denoted as MGM⋆. We also include the public weights of InstMatt [49] and MGM-in-the-wild [39]. The performance with the different masks from M-HIM2K is reported in Table 5. The public InstMatt showed the best performance, but this comparison may not be entirely fair as it was trained on private external data. Our model demonstrated comparable results on the composite and natural sets, achieving the lowest error in most metrics. MGM⋆ also performed well, suggesting that processing multiple masks simultaneously can facilitate instance interaction, although this approach slightly impacted the Grad metric, which reflects the output's detail.

We also measure the memory and speed of the models on the M-HIM2K natural set in Fig. 4. While the inference time of InstMatt, MGM, and SparseMat increases linearly with the number of instances, MGM⋆ and ours keep steady performance in both memory and speed.

Qualitative results. MaGGIe's ability to capture fine details and effectively separate instances is showcased in Fig. 5. At the same resolution, our model not only achieves highly detailed outcomes comparable to running MGM separately for each instance but also surpasses both the public and retrained versions of InstMatt. A key strength of our approach is its proficiency in distinguishing between different instances. This is particularly evident when compared to MGM, where we observed overlapping instances, and MGM⋆, which has noise issues caused by processing multiple masks simultaneously. Our model's refined instance separation capabilities highlight its effectiveness in handling complex matting scenarios.

5.2. Training on video data

Temporal consistency metrics. Following previous works [45, 48, 57], we extended our evaluation metrics to include dtSSD and MESSDdt to assess the temporal consistency of instance matting across frames.

Ablation studies. Our tests, detailed in Table 6, show that each temporal module significantly impacts performance. Omitting these modules increased errors in all subsets. Using a single-direction Conv-GRU improved outcomes, with further gains from adding backward-pass fusion. Forward fusion alone was less effective, possibly due to error propagation. The optimal setup combined backward propagation to reduce errors, yielding the best results.
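For reference, a minimal sketch of a dtSSD-style temporal metric, as used in the temporal consistency evaluation above, is shown below; the exact normalization and scaling in the benchmark may differ, so treat them as assumptions.

```python
import torch

# Sketch of a dtSSD-style measure: penalize differences between predicted and
# ground-truth frame-to-frame alpha changes for one instance.
def dtssd(pred: torch.Tensor, gt: torch.Tensor) -> float:
    """pred, gt: alpha mattes for one instance, shape (T, H, W), values in [0, 1]."""
    d_pred = pred[1:] - pred[:-1]        # temporal differences of the prediction
    d_gt = gt[1:] - gt[:-1]              # temporal differences of the ground truth
    per_frame = ((d_pred - d_gt) ** 2).mean(dim=(1, 2)).sqrt()  # RMS error per frame pair
    return per_frame.mean().item() * 1e2  # averaged over frame pairs; scale is an assumption
```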
Figure 4. Our model keeps steady memory and time complexity when the number of instances increases. InstMatt's complexity increases linearly with the number of instances. (Plots: GPU memory (GB) and time (ms) versus the number of instances, for InstMatt, SparseMat, MGM, MGM⋆, and ours.)

Table 6. Superiority of Temporal Consistency in Feature and Prediction Levels. Our MaGGIe, integrating temporal consistency at both feature and matte levels, outperforms non-temporal methods and those with only the feature level.

Conv-GRU | Fusion | Easy: MAD, dtSSD | Medium: MAD, dtSSD | Hard: MAD, dtSSD
– | – | 10.26, 16.57 | 13.88, 23.67 | 21.62, 30.50
Single | – | 10.15, 16.42 | 13.84, 23.66 | 21.26, 29.95
Bi | – | 10.14, 16.41 | 13.83, 23.66 | 21.25, 29.92
Bi | Â^f | 11.32, 16.51 | 15.33, 24.08 | 24.97, 30.66
Bi | Â^f + Â^b | 10.12, 16.40 | 13.85, 23.63 | 21.23, 29.90
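A minimal sketch of how the per-instance-count latency and peak GPU memory plotted in Fig. 4 could be gathered is given below; `model`, the resolution, and the random inputs are placeholders, not the benchmarking script used in the paper.

```python
import time
import torch

# Sketch of a Fig. 4-style benchmark: measure peak GPU memory and latency while the
# number of guidance masks grows.
@torch.no_grad()
def benchmark(model: torch.nn.Module, max_instances: int = 10, size: int = 576) -> None:
    device = torch.device("cuda")
    model = model.to(device).eval()
    for n in range(1, max_instances + 1):
        image = torch.rand(1, 3, size, size, device=device)
        masks = (torch.rand(1, n, size, size, device=device) > 0.5).float()
        torch.cuda.reset_peak_memory_stats(device)
        torch.cuda.synchronize(device)
        start = time.perf_counter()
        model(image, masks)                      # assumes an (image, masks) call signature
        torch.cuda.synchronize(device)
        elapsed_ms = (time.perf_counter() - start) * 1e3
        peak_gb = torch.cuda.max_memory_allocated(device) / 1e9
        print(f"{n} instances: {elapsed_ms:.1f} ms, {peak_gb:.2f} GB")
```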
Figure 5. Enhanced Detail and Instance Separation by MaGGIe. Columns: image, input mask, ground truth, InstMatt, InstMatt, MGM, MGM⋆, and ours. Our model excels in rendering detailed outputs and effectively separating instances, as highlighted by red squares (detail focus) and red arrows (errors in other methods).
Table 7. Comparative Analysis of Video Matting Methods on V-HIM60. This table categorizes methods into two groups: those utilizing first-frame trimaps (upper group) and mask-guided approaches (lower group). Gray rows denote models with public weights not retrained on I-HIM50K and V-HIM2K5. MGM⋆-TCVOM represents MGM with stacked guidance masks and the TCVOM temporal module. Bold and underline highlight the top and second-best performing models in each metric, respectively.

Method | Easy: MAD, Grad, Conn, dtSSD, MESSDdt | Medium: MAD, Grad, Conn, dtSSD, MESSDdt | Hard: MAD, Grad, Conn, dtSSD, MESSDdt
First-frame trimap
OTVM [45] 204.59 15.25 76.36 46.58 397.59 247.97 21.02 97.74 66.09 587.47 412.41 29.97 146.11 90.15 764.36
OTVM [45] 36.56 6.62 14.01 24.86 69.26 48.59 10.19 17.03 36.06 80.38 140.96 17.60 47.84 59.66 298.46
FTP-VM [17] 12.69 6.03 4.27 19.83 18.77 40.46 12.18 15.13 32.96 125.73 46.77 14.40 15.82 45.04 76.48
FTP-VM [17] 13.69 6.69 4.78 20.51 22.54 26.86 12.39 9.95 32.64 126.14 48.11 14.87 16.12 45.29 78.66
Frame-by-frame binary mask
MGM-TCVOM [45] 11.36 4.57 3.83 17.02 19.69 14.76 7.17 5.41 23.39 39.22 22.16 7.91 7.27 31.00 47.82
MGM⋆ -TCVOM [45] 10.97 4.19 3.70 16.86 15.63 13.76 6.47 5.02 23.99 42.71 22.59 7.86 7.32 32.75 37.83
InstMatt [49] 13.77 4.95 3.98 17.86 18.22 19.34 7.21 6.02 24.98 54.27 27.24 7.88 8.02 31.89 47.19
SparseMat [50] 12.02 4.49 4.11 19.86 24.75 18.20 8.03 6.87 30.19 85.79 24.83 8.47 8.19 36.92 55.98
MaGGIe (ours) 10.12 4.08 3.43 16.40 16.41 13.85 6.31 5.11 23.63 38.12 21.23 7.08 6.89 29.90 42.98
Performance evaluation. Our model was benchmarked against leading methods in trimap video matting, mask-guided matting, and instance matting. For trimap video matting, we chose OTVM [45] and FTP-VM [17], fine-tuning them on our V-HIM2K5 dataset. In mask-guided video matting, we compared our model with InstMatt [49], SparseMat [50], and MGM [56] combined with the TCVOM [57] module for temporal consistency. InstMatt, after being fine-tuned on I-HIM50K and subsequently on V-HIM2K5, processed each frame in the test set independently, without temporal awareness. SparseMat, featuring a temporal sparsity fusion module, was fine-tuned under the same conditions as our model. MGM and its variant, integrated with the TCVOM module, emerged as strong competitors in our experiments, demonstrating their robustness in maintaining temporal consistency across frames.

The comprehensive results of our model across the three test sets, using masks from XMem, are detailed in Table 7. All the trimap propagation methods underperform the mask-guided solutions. When benchmarked against other mask-guided matting methods, our approach consistently reduces the error across most metrics. Notably, it excels in temporal consistency, evidenced by its top performance in dtSSD for both the easy and hard test sets, and in MESSDdt for the medium set. Additionally, our model shows superior performance in capturing fine details, as indicated by its leading scores in the Grad metric across all test sets.
Figure 6. Detail and Consistency in Frame-to-Frame Predictions. (Columns: input frame and mask, followed by difference maps log ∆Â for InstMatt, SparseMat, MGM-TCVOM, MGM⋆-TCVOM, and MaGGIe (ours).) This figure demonstrates the precision and temporal consistency of our model's alpha matte predictions, highlighting robustness against noise from input masks. A color-coded map (min-max range) illustrates the differences between consecutive frames.
sults underscore our model’s effectiveness in video instance ment can pose challenges, particularly when integrating in-
matting, particularly in challenging scenarios requiring high stance masks from varied sources, potentially leading to
temporal consistency and detail preservation. misalignments in certain regions. Additionally, the use of
Temporal consistency and detail preservation. Our composite training datasets may constrain the model’s abil-
model’s effectiveness in video instance matting is evident ity to generalize effectively to natural, real-world scenar-
in Fig. 6 with natural videos. Key highlights include: ios. While the creation of a comprehensive natural dataset
remains a valuable goal, we propose an interim solution:
• Handling of Random Noises: Our method effectively han-
the utilization of segmentation datasets combined with self-
dles random noise in mask inputs, outperforming others
supervised or weakly-supervised learning techniques. This
that struggle with inconsistent input mask quality.
approach could enhance the model’s adaptability and per-
• Foreground/Background Region Consistency: We main-
formance in more diverse and realistic settings, paving the
tain consistent, accurate foreground predictions across
way for future advancements in the field.
frames, surpassing InstMatt and MGM⋆ -TCVOM.
• Detail Preservation: Our model retains intricate details, Conclusion. Our study contributes to the evolving field of
matching InstMatt’s quality and outperforming MGM instance matting, with a focus that extends beyond human
variants in video inputs. subjects. By integrating advanced techniques like trans-
These aspects underscore MaGGIe’s robustness and ef- former attention and sparse convolution, MaGGIe shows
fectiveness in video instance matting, particularly in main- promising improvements over previous methods in detailed
taining temporal consistency and preserving fine details accuracy, temporal consistency, and computational effi-
across frames. ciency for both image and video inputs. Additionally, our
approach in synthesizing training data and developing a
comprehensive benchmarking schema offers a new way to
6. Discussion
evaluate the robustness and effectiveness of models in in-
Limitation and Future work. Our MaGGIe demonstrates stance matting tasks. This work represents a step forward
effective performance in human video instance matting with in video instance matting and provides a foundation for fu-
binary mask guidance, yet it also presents opportunities for ture research in this area.
further research and development. One notable limitation Acknownledgement. We sincerely appreciate Markus
is the reliance on one-hot vector representation for each lo- Woodson for the invaluable initial discussions. Addition-
cation in the guidance mask, necessitating that each pixel ally, I am deeply thankful to my wife, Quynh Phung, for
is distinctly associated with a single instance. This require- her meticulous proofreading and feedback.
References

[1] Adobe. Adobe Premiere. https://www.adobe.com/products/premiere.html, 2023.
[2] Apple. Cutouts object iOS 16. https://support.apple.com/en-hk/102460, 2023.
[3] Nicolas Ballas, Li Yao, Chris Pal, and Aaron Courville. Delving deeper into convolutional networks for learning video representations. arXiv preprint arXiv:1511.06432, 2015.
[4] Arie Berman, Arpag Dadourian, and Paul Vlahos. Method for removing from an image the background surrounding a selected object, 2000. US Patent 6,134,346.
[5] Guowei Chen, Yi Liu, Jian Wang, Juncai Peng, Yuying Hao, Lutao Chu, Shiyu Tang, Zewu Wu, Zeyu Chen, Zhiliang Yu, et al. PP-Matting: High-accuracy natural image matting. arXiv preprint arXiv:2204.09433, 2022.
[6] Xiangguang Chen, Ye Zhu, Yu Li, Bingtao Fu, Lei Sun, Ying Shan, and Shan Liu. Robust human matting via semantic guidance. In ACCV, 2022.
[7] Bowen Cheng, Ishan Misra, Alexander G Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In CVPR, 2022.
[8] Ho Kei Cheng and Alexander G Schwing. XMem: Long-term video object segmentation with an Atkinson-Shiffrin memory model. In ECCV, 2022.
[9] Donghyeon Cho, Yu-Wing Tai, and Inso Kweon. Natural image matting using deep convolutional neural networks. In ECCV, 2016.
[10] Spconv Contributors. Spconv: Spatially sparse convolution library. https://github.com/traveller59/spconv, 2022.
[11] Marco Forte and François Pitié. F, B, alpha matting. arXiv preprint arXiv:2003.07711, 2020.
[12] Google. Magic Editor in Google Pixel 8. https://pixel.withgoogle.com/Pixel_8_Pro/use-magic-editor, 2023.
[13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
[14] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In ICCV, 2017.
[15] Anna Katharina Hebborn, Nils Höhner, and Stefan Müller. Occlusion matting: Realistic occlusion handling for augmented reality applications. In IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2017.
[16] Qiqi Hou and Feng Liu. Context-aware image matting for simultaneous foreground and alpha estimation. In ICCV, 2019.
[17] Wei-Lun Huang and Ming-Sui Lee. End-to-end video matting with trimap propagation. In CVPR, 2023.
[18] Chuong Huynh, Anh Tuan Tran, Khoa Luu, and Minh Hoai. Progressive semantic segmentation. In CVPR, 2021.
[19] Chuong Huynh, Yuqian Zhou, Zhe Lin, Connelly Barnes, Eli Shechtman, Sohrab Amirghodsi, and Abhinav Shrivastava. SimpSON: Simplifying photo cleanup with single-click distracting object segmentation network. In CVPR, 2023.
[20] Sagar Imambi, Kolla Bhanu Prakash, and GR Kanagachidambaresan. PyTorch. In Programming with TensorFlow: Solution for Edge Computing Applications, 2021.
[21] Lei Ke, Henghui Ding, Martin Danelljan, Yu-Wing Tai, Chi-Keung Tang, and Fisher Yu. Video mask transfiner for high-quality video instance segmentation. In ECCV, 2022.
[22] Zhanghan Ke, Jiayu Sun, Kaican Li, Qiong Yan, and Rynson W.H. Lau. MODNet: Real-time trimap-free portrait matting via objective decomposition. In AAAI, 2022.
[23] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick. Segment anything. In ICCV, 2023.
[24] Philip Lee and Ying Wu. Nonlocal matting. In CVPR, 2011.
[25] Anat Levin, Dani Lischinski, and Yair Weiss. A closed-form solution to natural image matting. IEEE TPAMI, 30(2), 2007.
[26] Jizhizi Li, Sihan Ma, Jing Zhang, and Dacheng Tao. Privacy-preserving portrait matting. In ACM MM, 2021.
[27] Jizhizi Li, Jing Zhang, and Dacheng Tao. Deep automatic natural image matting. In IJCAI, 2021.
[28] Jiachen Li, Vidit Goel, Marianna Ohanyan, Shant Navasardyan, Yunchao Wei, and Humphrey Shi. VMFormer: End-to-end video matting with transformer. arXiv preprint arXiv:2208.12801, 2022.
[29] Jizhizi Li, Jing Zhang, Stephen J Maybank, and Dacheng Tao. Bridging composite and real: Towards end-to-end deep image matting. IJCV, 2022.
[30] Jiachen Li, Roberto Henschel, Vidit Goel, Marianna Ohanyan, Shant Navasardyan, and Humphrey Shi. Video instance matting. In WACV, 2024.
[31] Yaoyi Li and Hongtao Lu. Natural image matting via guided contextual attention. In AAAI, 2020.
[32] Chung-Ching Lin, Jiang Wang, Kun Luo, Kevin Lin, Linjie Li, Lijuan Wang, and Zicheng Liu. Adaptive human matting for dynamic videos. In CVPR, 2023.
[33] Shanchuan Lin, Andrey Ryabtsev, Soumyadip Sengupta, Brian L Curless, Steven M Seitz, and Ira Kemelmacher-Shlizerman. Real-time high-resolution background matting. In CVPR, 2021.
[34] Shanchuan Lin, Linjie Yang, Imran Saleemi, and Soumyadip Sengupta. Robust high-resolution video matting with temporal guidance. In WACV, 2022.
[35] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
[36] Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse convolutional neural networks. In CVPR, 2015.
[37] Hao Lu, Yutong Dai, Chunhua Shen, and Songcen Xu. Indices matter: Learning to index for deep image matting. In CVPR, 2019.
[38] Seoung Wug Oh, Joon-Young Lee, Ning Xu, and Seon Joo Kim. Video object segmentation using space-time memory networks. In ICCV, 2019.
[39] Kwanyong Park, Sanghyun Woo, Seoung Wug Oh, In So Kweon, and Joon-Young Lee. Mask-guided matting in the wild. In CVPR, 2023.
[40] Khoi Pham, Kushal Kafle, Zhe Lin, Zhihong Ding, Scott Cohen, Quan Tran, and Abhinav Shrivastava. Improving closed and open-vocabulary attribute prediction using transformers. In ECCV, 2022.
[41] Khoi Pham, Chuong Huynh, and Abhinav Shrivastava. Composing object relations and attributes for image-text matching. In CVPR, 2024.
[42] Quynh Phung, Songwei Ge, and Jia-Bin Huang. Grounded text-to-image synthesis with attention refocusing. In CVPR, 2024.
[43] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. IJCV, 2015.
[44] Soumyadip Sengupta, Vivek Jayaram, Brian Curless, Steven M Seitz, and Ira Kemelmacher-Shlizerman. Background matting: The world is your green screen. In CVPR, 2020.
[45] Hongje Seong, Seoung Wug Oh, Brian Price, Euntai Kim, and Joon-Young Lee. One-trimap video matting. In ECCV, 2022.
[46] Xiaoyong Shen, Xin Tao, Hongyun Gao, Chao Zhou, and Jiaya Jia. Deep automatic portrait matting. In ECCV, 2016.
[47] Yanan Sun, Chi-Keung Tang, and Yu-Wing Tai. Semantic image matting. In CVPR, 2021.
[48] Yanan Sun, Guanzhi Wang, Qiao Gu, Chi-Keung Tang, and Yu-Wing Tai. Deep video matting via spatio-temporal alignment and aggregation. In CVPR, 2021.
[49] Yanan Sun, Chi-Keung Tang, and Yu-Wing Tai. Human instance matting via mutual guidance and multi-instance refinement. In CVPR, 2022.
[50] Yanan Sun, Chi-Keung Tang, and Yu-Wing Tai. Ultrahigh resolution image/video matting with spatio-temporal sparsity. In CVPR, 2023.
[51] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017.
[52] Tiantian Wang, Sifei Liu, Yapeng Tian, Kai Li, and Ming-Hsuan Yang. Video matting via consistency-regularized graph neural networks. In ICCV, 2021.
[53] Yumeng Wang, Bo Xu, Ziwen Li, Han Huang, Cheng Lu, and Yandong Guo. Video object matting via hierarchical space-time semantic guidance. In WACV, 2023.
[54] Ning Xu, Brian Price, Scott Cohen, and Thomas Huang. Deep image matting. In CVPR, 2017.
[55] Zongxin Yang, Yunchao Wei, and Yi Yang. Associating objects with transformers for video object segmentation. In NeurIPS, 2021.
[56] Qihang Yu, Jianming Zhang, He Zhang, Yilin Wang, Zhe Lin, Ning Xu, Yutong Bai, and Alan Yuille. Mask guided matting via progressive refinement network. In CVPR, 2021.
[57] Yunke Zhang, Chi Wang, Miaomiao Cui, Peiran Ren, Xuansong Xie, Xian-Sheng Hua, Hujun Bao, Qixing Huang, and Weiwei Xu. Attention-guided temporally coherent video object matting. In ACM MM, 2021.
MaGGIe: Masked Guided Gradual Human Instance Matting
Supplementary Material
Contents
7. Architecture details
  7.1. Mask guidance identity embedding
  7.2. Feature extractor
  7.3. Dense-image to sparse-instance features
  7.4. Detail aggregation
  7.5. Sparse matte head
  7.6. Sparse progressive refinement
  7.7. Attention loss and loss weight
8. Image matting
  8.1. Dataset generation and preparation
  8.2. Training details
  8.3. Quantitative details
  8.4. More qualitative results on natural images
9. Video matting
  9.1. Dataset generation
  9.2. Training details
  9.3. Quantitative details
  9.4. More qualitative results
7. Architecture details

This section delves into the architectural nuances of our framework, providing a more detailed exposition of components briefly mentioned in the main paper. These insights are crucial for a comprehensive understanding of the underlying mechanisms of our approach.

7.1. Mask guidance identity embedding

We embed the mask guidance into a learnable space before inputting it into our network. This approach, inspired by the ID assignment in AOT [55], generates a guidance embedding E ∈ R^{T×Ce×H×W} by mapping embedding vectors D ∈ R^{N×Ce} to pixels based on the guidance mask M:

\mathbf{E}(x, y) = \mathbf{M}(x, y)\,\mathbf{D}. \qquad (4)

Here, E(x, y) ∈ R^{T×Ce} and M(x, y) ∈ {0, 1}^{T×N} represent the values at row y and column x in E and M, respectively. In our experiments, we set N = 10, but it can be any larger number without affecting the architecture significantly.

7.2. Feature extractor

In our experiments, we employ ResNet-29 [13] as the feature extractor, consistent with other baselines [49, 56]. We have C8 = 128, C4 = 64, C1 = C2 = 32.

7.3. Dense-image to sparse-instance features

We visualize Eq. (2) in Fig. 7. It involves extracting feature vectors F̄(x, y, t) and instance token vectors T_i for each uncertainty index (x, y, t, i) ∈ U. These vectors undergo channel-wise multiplication, emphasizing channels relevant to each instance. A subsequent MLP layer then converts this product into sparse, instance-specific features.

7.4. Detail aggregation

This process, akin to a U-Net decoder, aggregates features from different scales, as detailed in Fig. 8. It involves upscaling sparse features and merging them with corresponding higher-scale features. However, this requires pre-computed downscale indices from dummy sparse convolutions on the full input image.

7.5. Sparse matte head

Our matte head design, inspired by MGM [56], comprises two sparse convolutions with intermediate normalization and activation (Leaky ReLU) layers. The final output undergoes sigmoid activation for the final prediction. Non-refined locations in the dense prediction are assigned a value of zero.

7.6. Sparse progressive refinement

The PRM module progressively refines A_8 → A_4 → A_1 to obtain A. We assume that all predictions are rescaled to the largest size and perform refinement between intermediate
predictions and uncertainty indices U:

\begin{aligned}
\mathbf{A} &= \mathbf{A}_8 \\
\mathbf{R}_4(j) &= \begin{cases} 1, & \text{if } j \in \mathcal{D}(\mathbf{A}) \text{ and } j \in \mathbf{U} \\ 0, & \text{otherwise} \end{cases} \\
\mathbf{A} &= \mathbf{A} \times (1 - \mathbf{R}_4) + \mathbf{R}_4 \times \mathbf{A}_4 \\
\mathbf{R}_1(j) &= \begin{cases} 1, & \text{if } j \in \mathcal{D}(\mathbf{A}) \text{ and } j \in \mathbf{U} \\ 0, & \text{otherwise} \end{cases} \\
\mathbf{A} &= \mathbf{A} \times (1 - \mathbf{R}_1) + \mathbf{R}_1 \times \mathbf{A}_1
\end{aligned} \qquad (5)–(9)

where j = (x, y, t, i) is an index in the output; R_1, R_4 have shape T × N × H × W; and D(A) = dilation(0 < A < 1) is the set of indices of all dilated uncertainty values on A. The dilation kernel is set to 30 and 15 for R_4 and R_1, respectively.

7.7. Attention loss and loss weight

With A^gt as the ground-truth alpha matte and its 1/8-downscaled version A^gt_8, we define a binarized Ã^gt_8 = (A^gt_8 > 0). The attention loss L_att supervises the affinity score matrix between instance tokens and image features against this binarized map.

Figure 8. Detail Aggregation Module merges sparse features across scales. This module equalizes the spatial scales of sparse features using inverse sparse convolution, facilitating their combination.

Figure 9. Temporal Sparsity Between Two Consecutive Frames. The top row displays a pair of successive frames. Below, the second row illustrates the predicted differences by two distinct frameworks (SparseMat [50] and ours), with areas of discrepancy emphasized in white. In contrast to SparseMat's output, which appears cluttered and noisy, our module generates a more refined sparsity map. This map effectively accentuates the foreground regions that undergo notable changes between the frames, providing a clearer and more focused representation of temporal sparsity. (Best viewed in color.)

Given the predicted discrepancy maps ∆, the forward and backward propagations are computed as:

\mathbf{A}^f(t, i) = \Delta(t) \times \mathbf{A}(t, i) + (1 - \Delta(t)) \times \mathbf{A}^f(t-1, i) \qquad (12)

\mathbf{A}^b(t, i) = \Delta(t+1) \times \mathbf{A}(t, i) + (1 - \Delta(t+1)) \times \mathbf{A}^b(t+1, i) \qquad (13)

Each entry j = (x, y, t, i) of the final output A^temp is:

\mathbf{A}^{\text{temp}}(j) = \begin{cases} \mathbf{A}(j), & \text{if } \mathbf{A}^f(j) \neq \mathbf{A}^b(j) \\ \mathbf{A}^f(j), & \text{otherwise} \end{cases} \qquad (14)
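A compact PyTorch sketch of the propagation and fusion in Eqs. (12)-(14) follows; the tensor layouts are assumptions for illustration, not the released code.

```python
import torch

# Sketch of the forward/backward matte propagation and fusion in Eqs. (12)-(14).
def temporal_fusion(alpha: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
    """alpha: (T, N, H, W) mattes A(t, i); delta: (T, H, W) binary discrepancy maps Δ(t)."""
    T = alpha.shape[0]
    fwd, bwd = alpha.clone(), alpha.clone()
    for t in range(1, T):                      # Eq. (12): forward propagation
        d = delta[t].unsqueeze(0)              # broadcast over instances
        fwd[t] = d * alpha[t] + (1 - d) * fwd[t - 1]
    for t in range(T - 2, -1, -1):             # Eq. (13): backward propagation
        d = delta[t + 1].unsqueeze(0)
        bwd[t] = d * alpha[t] + (1 - d) * bwd[t + 1]
    # Eq. (14): keep the raw prediction where the two passes disagree
    return torch.where(fwd != bwd, alpha, fwd)
```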
This fusion enhances temporal consistency and minimizes error propagation.

8. Image matting

This section expands on the image matting process, providing additional insights into dataset generation and comprehensive comparisons with existing methods. We delve into the creation of the I-HIM50K and M-HIM2K datasets, offer detailed quantitative analyses, and present further qualitative results to underscore the effectiveness of our approach.

8.1. Dataset generation and preparation

The I-HIM50K dataset was synthesized from the HHM50K [50] dataset, which is known for its extensive collection of human image mattes. We employed a MaskRCNN [14] ResNet-50 FPN 3x model, trained on the COCO dataset, to filter out single-person images, resulting in a subset of 35,053 images. Following the InstMatt [49] methodology, these images were composited against diverse backgrounds from the BG20K [29] dataset, creating multi-instance scenarios with 2-5 subjects per image.

The subjects were resized and positioned to maintain a realistic scale and avoid excessive overlap, as indicated by instance IoUs not exceeding 30%. This process yielded 49,737 images, averaging 2.28 instances per image. During training, guidance masks were generated by binarizing the alpha mattes and applying random dropout, dilation, and erosion operations. Sample images from I-HIM50K are displayed in Fig. 10.

The M-HIM2K dataset was designed to test model robustness against varying mask qualities. It comprises ten masks per instance, generated using various MaskRCNN models. More information about the models used for this generation process is shown in Table 8. The masks were matched to instances based on the highest IoU with the ground-truth alpha mattes, ensuring a minimum IoU threshold of 70%. Masks that did not meet this threshold were artificially generated from ground truth. This process resulted in a comprehensive set of 134,240 masks, with 117,660 for composite and 16,600 for natural images, providing a robust benchmark for evaluating masked guided instance matting. The full I-HIM50K and M-HIM2K datasets will be released after the acceptance of this work.

Table 8. Ten models with varying mask quality are used in M-HIM2K. The MaskRCNN models are from detectron2, trained on COCO with different settings.

Model | COCO mask AP (%)
r50_c4_3x | 34.4
r50_dc5_3x | 35.9
r101_c4_3x | 36.7
r50_fpn_3x | 37.2
r101_fpn_3x | 38.6
x101_fpn_3x | 39.5
r50_fpn_400e | 42.5
regnety_400e | 43.3
regnetx_400e | 43.5
r101_fpn_400e | 43.7
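The training-time guidance masks described in Sec. 8.1 (binarization followed by random dropout, dilation, or erosion) can be sketched as below; kernel sizes, probabilities, and the binarization threshold are illustrative assumptions.

```python
import cv2
import numpy as np

# Sketch of guidance-mask perturbation: binarize the ground-truth alpha, then randomly
# drop, dilate, or erode it to simulate imperfect segmentation masks.
def perturb_guidance_mask(alpha: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """alpha: ground-truth alpha matte in [0, 1], shape (H, W)."""
    mask = (alpha > 0.5).astype(np.uint8)
    if rng.random() < 0.1:                       # random dropout of the whole mask
        return np.zeros_like(mask)
    k = int(rng.integers(3, 30))                 # random kernel size (assumed range)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (k, k))
    if rng.random() < 0.5:
        mask = cv2.dilate(mask, kernel)          # random dilation
    else:
        mask = cv2.erode(mask, kernel)           # random erosion
    return mask
```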
Table 9. Full details of different input mask settings on HIM2K+M-HIM2K (extension of Table 3). Bold denotes the lowest average error.
Mask input | Composition: MAD, MADf, MADu, MSE, SAD, Grad, Conn | Natural: MAD, MADf, MADu, MSE, SAD, Grad, Conn (each setting lists a mean row and a std row)
27.01 68.83 381.27 18.82 16.35 16.80 15.72 39.29 61.39 213.27 25.10 25.52 16.44 23.26 mean
Stacked
0.83 5.93 7.06 0.76 0.50 0.31 0.51 4.21 13.37 14.10 4.01 2.00 0.70 2.02 std
19.18 68.04 330.06 12.40 11.64 13.00 11.16 33.60 60.35 188.44 20.63 21.40 13.44 19.18 mean
Embedded (Ce = 1)
0.87 8.07 6.96 0.80 0.52 0.27 0.52 4.07 12.60 12.28 3.86 1.81 0.57 1.83 std
21.74 84.64 355.95 14.46 13.23 14.39 12.69 35.16 59.55 193.95 21.93 22.59 14.51 20.40 mean
Embedded (Ce = 2)
0.92 7.33 7.68 0.85 0.55 0.27 0.55 4.23 13.79 13.45 4.03 2.31 0.61 2.32 std
17.75 53.23 315.43 11.19 10.79 12.52 10.32 33.06 56.69 189.59 20.22 19.43 13.11 17.30 mean
Embedded (Ce = 3)
0.66 5.04 6.31 0.60 0.39 0.24 0.39 3.74 11.90 12.49 3.58 1.92 0.51 1.95 std
24.79 73.22 384.14 17.07 15.09 16.19 14.58 34.25 65.57 216.56 20.39 21.89 15.66 19.70 mean
Embedded (Ce = 5)
0.88 4.99 7.24 0.79 0.52 0.30 0.52 4.16 13.59 13.09 3.96 2.31 0.58 2.32 std
Table 10. Full details of different training objective components on HIM2K+M-HIM2K. (Extension of Table 4). Bold denotes the
lowest average error.
Latt, W8 | Composition: MAD, MADf, MADu, MSE, SAD, Grad, Conn | Natural: MAD, MADf, MADu, MSE, SAD, Grad, Conn (each setting lists a mean row and a std row; ✓ marks the components used)
31.77 52.70 294.22 24.13 18.92 16.58 18.27 46.68 51.23 176.60 33.61 32.89 15.68 30.64 mean
0.90 4.92 5.24 0.85 0.54 0.26 0.54 3.64 10.27 9.58 3.47 1.85 0.50 1.85 std
25.41 104.24 342.19 18.36 15.29 14.53 14.75 46.30 87.18 210.72 32.93 31.40 15.84 29.26 mean
✓
0.72 6.15 5.53 0.67 0.43 0.23 0.43 3.71 11.68 10.62 3.55 1.85 0.50 1.86 std
17.56 53.51 302.07 11.24 10.65 12.34 10.22 32.95 51.11 183.13 20.41 19.23 13.29 17.06 mean
✓
0.75 6.32 6.32 0.70 0.45 0.27 0.45 3.34 10.25 10.99 3.19 2.04 0.60 2.06 std
17.55 47.81 301.96 11.23 10.68 12.34 10.19 32.03 53.15 183.42 19.42 19.60 13.16 17.43 mean
✓ ✓
0.68 5.21 5.73 0.63 0.41 0.25 0.41 3.48 10.77 11.18 3.32 1.92 0.55 1.94 std
8.2. Training details

We initialized our feature extractor with ImageNet [43] weights, following previous methods [49, 56]. Our models were retrained on the I-HIM50K dataset with a crop size of 512. All baselines underwent 100 training epochs, using the HIM2K composition set for validation. The training was conducted on 4 A100 GPUs with a batch size of 96. We employed AdamW for optimization, with a learning rate of 1.5 × 10^−4 and a cosine decay schedule after 1,500 warm-up iterations. The training also incorporated curriculum learning as in MGM and standard augmentation as in the other baselines. During training, mask orders were shuffled, and some masks were randomly omitted. In testing, images were resized to have a short side of 576 pixels.

8.3. Quantitative details

We extend the ablation study from the main paper, providing detailed statistics in Table 9 and Table 10. These tables offer insights into the average and standard deviation of performance metrics across the HIM2K [49] and M-HIM2K datasets. Our model not only achieves competitive average results but also maintains low variability in performance across different error metrics. Additionally, we include the Sum of Absolute Differences (SAD) metric, aligning with previous image matting benchmarks.

Comprehensive quantitative results comparing our model with baseline methods on HIM2K and M-HIM2K are presented in Table 12. This analysis highlights the impact of mask quality on the matting output, with our model demonstrating consistent performance even with varying mask inputs.

We also perform another experiment in which the MGM-style refinement replaces our proposed sparse guided progressive refinement. Table 11 shows the results, where our proposed method outperforms the previous approach in all metrics.

Table 11. Comparison between the previous dense progressive refinement (PR) in MGM and our proposed guided sparse progressive refinement. Numbers are means on HIM2K+M-HIM2K; small numbers indicate the std.

PR | MAD | MSE | Grad | Conn | MADf | MADu
Comp Set
MGM | 14.70 (0.4) | 8.87 (0.3) | 10.39 (0.2) | 8.44 (0.2) | 32.02 (1.5) | 252.34 (4.2)
Ours | 12.93 (0.3) | 7.26 (0.3) | 8.91 (0.1) | 7.37 (0.2) | 19.54 (1.0) | 235.95 (3.4)
Natural Set
MGM | 27.66 (4.1) | 16.94 (3.9) | 10.49 (0.7) | 13.95 (1.5) | 52.72 (12.1) | 150.71 (13.3)
Ours | 27.17 (3.3) | 16.09 (3.2) | 9.94 (0.6) | 13.42 (1.4) | 49.52 (8.0) | 146.71 (11.6)
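To illustrate the optimization schedule stated in Sec. 8.2 (AdamW, learning rate 1.5e-4, 1,500 warm-up iterations, cosine decay), here is a minimal PyTorch sketch; the total iteration count and the linear warm-up shape are assumptions.

```python
import math
import torch

# Sketch of the stated schedule: AdamW with linear warm-up followed by cosine decay.
def build_optimizer(model: torch.nn.Module, total_iters: int = 100_000):
    optimizer = torch.optim.AdamW(model.parameters(), lr=1.5e-4)

    def lr_lambda(step: int) -> float:
        warmup = 1_500
        if step < warmup:
            return (step + 1) / warmup                       # linear warm-up
        progress = (step - warmup) / max(1, total_iters - warmup)
        return 0.5 * (1.0 + math.cos(math.pi * progress))    # cosine decay to 0

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler
```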
8.4. More qualitative results on natural images

The robustness of our model to imperfect guidance masks is demonstrated in Fig. 16. Here, we highlight the challenges faced by MGM variants and SparseMat in predicting missing parts in mask inputs, which our model addresses. However, it is important to note that our model is not designed as a human instance segmentation network. As shown in Fig. 17, our framework adheres to the input guidance, ensuring precise alpha matte prediction even with multiple instances in the same mask.

Lastly, Fig. 12 and Fig. 11 emphasize our model's generalization capabilities. The model accurately extracts both human subjects and other objects from backgrounds, showcasing its versatility across various scenarios and object types.

All examples are Internet images without ground truth, and the masks from r101_fpn_400e are used as the guidance.
Figure 13. Our model produces highly detailed alpha mattes on natural images. Columns: image, mask, InstMatt [49] (public), InstMatt [49], SparseMat [50], MGM [56], MGM⋆, and ours. Our results show that it is accurate and comparable with previous instance-agnostic and instance-aware methods without expensive computational costs. Red squares zoom in on the detail regions for each instance. (Best viewed in color and digital zoom.)
Figure 14. Our framework precisely separates instances in an extreme case with many instances. Columns: image, mask, InstMatt [49] (public), InstMatt [49], SparseMat [50], MGM [56], MGM⋆, and ours. While MGM often causes overlapping between instances and MGM⋆ contains noise, ours produces on-par results with InstMatt trained on the external dataset. Red arrows indicate the errors. (Best viewed in color and digital zoom.)
Figure 15. Our framework precisely separates instances in a single pass. Columns: image, mask, InstMatt [49] (public), InstMatt [49], SparseMat [50], MGM [56], MGM⋆, and ours. The proposed solution shows comparable results with InstMatt and MGM without running the prediction/refinement five times. Red arrows indicate the errors. (Best viewed in color and digital zoom.)
Figure 16. Unlike MGM and SparseMat, our model is robust to the input guidance mask. Columns: image, mask, InstMatt [49] (public), InstMatt [49], SparseMat [50], MGM [56], MGM⋆, and ours. With the attention head, our model produces results that are more stable to mask inputs, without complex refinement between instances like InstMatt. Red arrows indicate the errors. (Best viewed in color and digital zoom.)
Figure 17. Our solution works correctly with multi-instance mask guidance. Columns: image, mask, InstMatt [49] (public), InstMatt [49], SparseMat [50], MGM [56], MGM⋆, and ours. When multiple instances exist in one guidance mask, we still produce the correct union alpha matte for those instances. Red arrows indicate the errors or the zoom-in regions in red boxes. (Best viewed in color and digital zoom.)
Table 12. Details of quantitative results on HIM2K+M-HIM2K (Extension of Table 5). Gray indicates the public weights without retraining. Each block of rows corresponds to one method, evaluated with the ten guidance-mask models plus the mean and std; columns follow Table 9 (Composition: MAD, MADf, MADu, MSE, SAD, Grad, Conn; Natural: the same), with the mask model named in the last column.
Instance-agnostic
25.79 69.67 331.73 17.00 15.65 13.64 14.91 48.05 103.81 233.85 32.66 27.44 14.72 25.07 r50 c4 3x
24.75 70.92 316.59 16.21 15.01 13.17 14.23 34.67 66.28 183.48 21.03 22.82 12.79 20.30 r50 dc5 3x
23.60 66.79 321.23 15.03 14.38 13.19 13.62 35.51 70.94 198.99 20.96 22.62 13.73 20.17 r101 c4 3x
24.55 67.27 316.29 15.97 14.91 13.14 14.12 33.66 67.41 184.99 19.93 21.99 13.06 19.43 r50 fpn 3x
23.42 66.37 310.99 14.94 14.21 12.84 13.42 35.14 72.30 183.87 21.02 21.87 12.82 19.34 r101 fpn 3x
22.71 63.35 305.67 14.36 13.81 12.64 13.03 31.06 61.76 175.33 17.60 20.98 12.61 18.44 x101 fpn 3x
MGM [39]
22.03 61.91 300.29 13.85 13.36 12.30 12.59 29.16 57.59 165.22 15.93 20.10 11.76 17.56 r50 fpn 400e
21.37 57.28 296.73 13.18 12.98 12.16 12.21 26.40 51.24 158.95 13.42 17.73 11.45 15.10 regnety 400e
21.78 60.31 297.14 13.62 13.22 12.25 12.46 27.09 49.26 160.05 13.82 17.48 11.20 14.87 regnetx 400e
21.52 60.07 297.14 13.44 13.14 12.20 12.38 24.41 51.46 152.90 11.62 17.43 11.09 14.84 r101 fpn 400e
23.15 64.39 309.38 14.76 14.07 12.75 13.30 32.52 65.20 179.76 18.80 21.05 12.52 18.51 mean
1.52 4.49 12.01 1.30 0.92 0.52 0.92 6.74 15.94 23.87 5.99 3.09 1.17 3.16 std
15.94 32.55 266.64 9.62 9.68 10.11 9.18 37.55 86.64 191.09 24.03 21.15 11.34 18.94 r50 c4 3x
16.05 36.36 264.96 9.81 9.75 10.10 9.26 32.58 68.52 172.83 19.58 20.17 10.92 17.80 r50 dc5 3x
15.40 30.89 264.28 9.17 9.37 10.01 8.90 31.24 69.59 175.67 18.15 18.57 10.83 16.26 r101 c4 3x
15.93 34.54 265.44 9.68 9.67 10.10 9.20 32.83 75.06 173.63 19.72 19.13 10.85 16.81 r50 fpn 3x
15.74 34.23 263.35 9.50 9.55 10.02 9.07 30.77 69.10 171.92 17.78 18.22 10.67 15.95 r101 fpn 3x
15.23 36.18 260.80 9.03 9.27 9.92 8.76 30.09 63.23 167.58 17.34 18.51 10.69 16.09 x101 fpn 3x
MGM [56]
14.96 34.13 259.17 8.81 9.08 9.83 8.61 28.28 50.35 158.02 15.71 17.71 10.24 15.25 r50 fpn 400e
14.53 31.71 256.33 8.41 8.83 9.73 8.35 26.95 49.55 155.63 14.43 15.69 9.98 13.34 regnety 400e
14.82 33.06 257.09 8.69 9.01 9.80 8.53 26.61 47.81 154.05 14.22 15.45 9.87 13.16 regnetx 400e
14.65 31.71 256.29 8.53 8.94 9.74 8.46 25.42 51.73 153.11 13.03 15.73 9.90 13.44 r101 fpn 400e
15.32 33.54 261.43 9.13 9.31 9.94 8.83 30.23 63.16 167.35 17.40 18.03 10.53 15.70 mean
0.57 1.88 4.00 0.51 0.34 0.15 0.34 3.62 12.97 12.14 3.26 1.93 0.50 1.94 std
23.14 47.59 378.89 16.37 13.97 15.56 13.54 46.28 101.48 255.98 31.99 26.81 17.97 24.82 r50 c4 3x
21.94 49.48 358.08 15.36 13.24 14.90 12.80 36.93 67.62 213.46 23.76 22.11 16.05 20.01 r50 dc5 3x
21.78 43.36 368.59 15.15 13.16 15.21 12.72 38.32 77.98 234.69 24.51 22.83 17.19 20.78 r101 c4 3x
21.94 47.00 361.30 15.33 13.24 14.99 12.80 37.16 74.18 218.62 23.95 21.95 16.39 19.86 r50 fpn 3x
21.43 46.51 356.43 14.88 12.93 14.81 12.48 35.95 72.78 218.46 22.62 20.67 16.11 18.58 r101 fpn 3x
20.63 47.73 349.81 14.12 12.48 14.58 12.02 34.32 64.51 209.64 21.10 20.44 16.03 18.33 x101 fpn 3x
SparseMat [50]
20.29 44.20 342.14 13.93 12.22 14.21 11.76 31.44 57.51 197.53 18.58 19.49 14.96 17.35 r50 fpn 400e
19.65 41.20 340.38 13.29 11.85 14.08 11.38 30.21 48.53 194.90 17.32 17.47 14.82 15.31 regnety 400e
19.90 41.40 336.40 13.56 12.02 14.03 11.56 29.85 52.17 191.09 16.99 17.19 14.52 15.03 regnetx 400e
19.81 43.43 337.43 13.50 12.01 14.05 11.55 29.83 61.40 191.89 17.07 17.13 14.48 14.96 r101 fpn 400e
21.05 45.19 352.95 14.55 12.71 14.64 12.26 35.03 67.82 212.63 21.79 20.61 15.85 18.50 mean
1.17 2.85 14.24 1.02 0.70 0.54 0.71 5.13 15.19 20.77 4.68 3.03 1.16 3.08 std
Instance-aware
12.98 23.71 257.74 5.76 7.94 9.47 7.27 31.15 60.03 174.10 15.91 18.12 10.64 15.73 r50 c4 3x
13.15 23.08 257.38 5.96 8.05 9.48 7.38 28.05 51.53 164.19 13.63 16.89 10.33 14.53 r50 dc5 3x
12.99 22.42 257.52 5.79 7.93 9.47 7.26 27.06 48.52 162.72 12.90 16.06 10.29 13.68 r101 c4 3x
13.13 20.60 256.70 5.90 8.03 9.47 7.36 28.31 49.87 164.16 13.97 16.86 10.37 14.49 r50 fpn 3x
13.04 23.98 257.51 5.85 7.96 9.45 7.28 28.92 59.32 168.72 14.37 16.98 10.40 14.64 r101 fpn 3x
12.77 22.16 255.33 5.63 7.83 9.40 7.16 27.02 46.39 162.89 12.82 16.49 10.27 14.08 x101 fpn 3x
InstMatt [49]
12.61 21.31 254.27 5.55 7.71 9.36 7.05 25.33 44.84 157.03 11.23 15.54 9.97 13.18 r50 fpn 400e
12.58 23.53 253.85 5.57 7.69 9.35 7.03 24.34 41.62 154.89 10.65 15.22 10.00 12.85 regnety 400e
12.59 20.48 252.68 5.53 7.71 9.35 7.04 24.18 40.96 154.69 10.09 14.68 9.82 12.28 regnetx 400e
12.67 21.14 253.13 5.60 7.75 9.35 7.09 23.22 43.23 151.78 9.67 15.00 9.88 12.60 r101 fpn 400e
12.85 22.24 255.61 5.71 7.86 9.41 7.19 26.76 48.63 161.52 12.52 16.18 10.20 13.81 mean
0.23 1.31 2.00 0.16 0.14 0.06 0.13 2.48 6.76 6.94 2.05 1.08 0.26 1.08 std
18.23 57.23 298.66 10.51 11.06 11.33 10.45 37.91 86.84 202.20 22.28 21.31 12.22 19.11 r50 c4 3x
17.85 58.98 291.50 10.38 10.87 11.13 10.27 30.10 63.83 173.94 15.90 18.01 11.25 15.82 r50 dc5 3x
17.25 51.21 292.66 9.80 10.50 11.13 9.90 30.22 59.65 178.94 15.62 17.49 11.55 15.23 r101 c4 3x
17.69 55.80 292.90 10.22 10.80 11.19 10.19 30.27 60.16 175.66 16.44 17.38 11.33 15.13 r50 fpn 3x
17.18 55.67 288.95 9.85 10.45 11.02 9.84 28.80 60.88 170.89 14.55 16.88 11.12 14.69 r101 fpn 3x
16.65 53.37 284.66 9.41 10.16 10.85 9.56 27.77 55.06 168.20 14.14 16.91 11.04 14.70 x101 fpn 3x
InstMatt [49]
16.29 52.00 281.15 9.21 9.88 10.69 9.29 25.51 52.89 156.40 12.15 15.90 10.47 13.70 r50 fpn 400e
15.99 50.92 279.15 8.97 9.71 10.65 9.12 24.82 45.83 156.46 11.83 15.14 10.43 12.94 regnety 400e
16.47 51.85 280.00 9.37 10.01 10.69 9.42 23.73 47.85 153.70 10.35 14.69 10.17 12.49 regnetx 400e
16.30 50.58 279.40 9.29 9.95 10.63 9.36 22.47 45.33 150.96 9.72 14.71 10.17 12.50 r101 fpn 400e
16.99 53.76 286.90 9.70 10.34 10.93 9.74 28.16 57.83 168.74 14.30 16.84 10.98 14.63 mean
0.76 2.96 6.95 0.53 0.47 0.26 0.46 4.45 12.15 15.45 3.65 1.97 0.66 1.97 std
14.87 46.70 256.01 8.32 8.99 10.31 8.32 37.36 65.40 181.68 23.97 20.50 11.66 17.45 r50 c4 3x
14.65 43.00 253.75 8.21 8.87 10.25 8.22 33.70 60.48 172.03 20.83 18.51 11.29 15.93 r50 dc5 3x
14.36 38.88 252.30 7.89 8.71 10.19 8.04 33.95 60.54 173.47 20.59 17.94 11.24 15.30 r101 c4 3x
14.68 44.85 254.50 8.21 8.88 10.24 8.22 33.29 54.82 170.89 20.21 18.28 11.27 15.55 r50 fpn 3x
14.70 44.68 254.29 8.24 8.89 10.21 8.25 32.07 68.47 171.41 18.80 17.44 11.07 14.84 r101 fpn 3x
14.27 43.56 251.19 7.83 8.68 10.13 8.00 30.96 50.90 166.14 18.02 17.53 11.07 14.91 x101 fpn 3x
MGM⋆
13.94 38.70 248.02 7.58 8.46 10.00 7.79 29.86 48.23 158.22 16.92 16.91 10.79 14.32 r50 fpn 400e
13.57 39.12 246.18 7.24 8.21 9.89 7.56 28.53 46.70 156.07 15.84 15.98 10.52 13.38 regnety 400e
14.11 41.69 247.92 7.75 8.57 10.00 7.91 27.17 41.88 150.59 14.42 15.35 10.36 12.75 regnetx 400e
13.95 38.26 246.60 7.60 8.48 9.95 7.83 26.89 41.53 150.85 14.23 15.74 10.42 13.12 r101 fpn 400e
14.31 41.94 251.08 7.89 8.67 10.12 8.01 31.38 53.89 165.13 18.38 17.42 10.97 14.75 mean
0.42 3.05 3.63 0.35 0.24 0.15 0.24 3.34 9.56 10.59 3.11 1.53 0.43 1.43 std
13.13 17.81 239.98 7.41 7.92 9.05 7.47 34.54 64.64 171.51 23.05 18.36 11.02 16.23 r50 c4 3x
13.28 21.29 238.15 7.61 8.03 9.03 7.58 27.66 52.90 149.52 16.56 16.05 10.15 13.90 r50 dc5 3x
13.20 19.24 240.33 7.49 7.98 9.07 7.53 29.04 54.52 154.34 17.75 16.72 10.53 14.58 r101 c4 3x
13.20 19.37 237.53 7.52 7.98 8.98 7.53 28.50 53.64 150.67 17.37 15.91 10.18 13.74 r50 fpn 3x
13.02 20.89 238.27 7.35 7.91 8.98 7.45 28.32 52.55 150.76 17.21 15.87 10.12 13.71 r101 fpn 3x
12.98 19.27 236.44 7.32 7.87 8.93 7.41 27.12 51.27 146.81 16.12 15.92 10.00 13.76 x101 fpn 3x
Ours
12.65 19.92 233.05 7.01 7.64 8.80 7.18 24.72 44.25 137.65 13.83 14.83 9.60 12.68 r50 fpn 400e
12.55 19.59 231.94 6.93 7.58 8.73 7.12 24.99 41.32 139.09 14.02 14.32 9.38 12.15 regnety 400e
12.60 19.04 231.50 6.96 7.65 8.78 7.19 23.64 39.60 134.20 12.69 14.12 9.27 11.94 regnetx 400e
12.69 19.01 232.26 7.05 7.69 8.78 7.23 23.16 40.47 132.55 12.25 13.67 9.17 11.49 r101 fpn 400e
12.93 19.54 235.95 7.26 7.82 8.91 7.37 27.17 49.52 146.71 16.09 15.58 9.94 13.42 mean
0.28 0.99 3.44 0.25 0.17 0.13 0.17 3.34 7.95 11.60 3.16 1.39 0.59 1.41 std
Table 13. The effectiveness of the proposed temporal consistency modules on V-HIM60 (Extension of Table 6). The combination of bi-directional Conv-GRU and forward-backward fusion achieves the best overall performance on the three test sets. Bold highlights the best for each level.
Temporal module MAD MADf MADu MSE SAD Grad Conn dtSSD MESSDdt
Easy level
None 10.26 13.64 192.97 4.08 3.73 4.12 3.47 16.57 16.55
Single Conv-GRU 10.15 12.83 192.69 4.03 3.71 4.09 3.44 16.42 16.44
Bi Conv-GRU 10.14 12.70 192.67 4.05 3.70 4.09 3.44 16.41 16.42
Bi Conv-GRU + Âf 11.32 20.13 194.27 5.01 4.10 4.67 3.85 16.51 17.85
Bi Conv-GRU + Âf + Âb 10.12 12.60 192.63 4.02 3.68 4.08 3.43 16.40 16.41
Medium level
None 13.88 4.78 202.20 5.27 5.56 6.30 5.11 23.67 38.90
Single Conv-GRU 13.84 4.56 202.13 5.44 5.70 6.35 5.14 23.66 38.25
Bi Conv-GRU 13.83 4.52 202.02 5.39 5.63 6.33 5.12 23.66 38.22
Bi Conv-GRU + Âf 15.33 9.02 207.61 6.45 6.09 7.56 5.64 24.08 39.82
Bi Conv-GRU + Âf + Âb 13.85 4.48 202.02 5.37 5.53 6.31 5.11 23.63 38.12
Hard level
None 21.62 30.06 253.94 11.69 7.38 7.07 7.01 30.50 43.54
Single Conv-GRU 21.26 28.60 253.42 11.46 7.25 7.12 6.95 29.95 43.03
Bi Conv-GRU 21.25 28.55 253.17 11.56 7.25 7.10 6.91 29.92 43.01
Bi Conv-GRU + Âf 24.97 45.62 260.08 14.62 8.55 9.92 8.17 30.66 48.03
Bi Conv-GRU + Âf + Âb 21.23 28.49 252.87 11.53 7.24 7.08 6.89 29.90 42.98
Table 14. Our framework outperforms the baselines in almost all metrics on V-HIM60 (Extension of Table 7). We extend the results in the main paper with more metrics, and our model is the best overall. Bold and underline indicate the best and second-best models in the same test set.
Model MAD MADf MADu MSE SAD Grad Conn dtSSD MESSDdt
Easy level
MGM-TCVOM 11.36 18.49 202.28 5.13 4.11 4.57 3.83 17.02 19.69
MGM⋆-TCVOM 10.97 20.33 187.59 5.04 3.98 4.19 3.70 16.86 15.63
InstMatt 13.77 38.17 219.00 5.32 4.96 4.95 3.98 17.86 18.22
SparseMat 12.02 21.00 205.41 6.31 4.37 4.49 4.11 19.86 24.75
Ours 10.12 12.60 192.63 4.02 3.68 4.08 3.43 16.40 16.41
Medium level
MGM-TCVOM 14.76 4.92 218.18 5.85 5.86 7.17 5.41 23.39 39.22
MGM⋆-TCVOM 13.76 4.61 201.58 5.50 5.49 6.47 5.02 23.99 42.71
InstMatt 19.34 35.05 223.39 7.50 7.55 7.21 6.02 24.98 54.27
SparseMat 18.20 10.59 250.89 10.06 7.30 8.03 6.87 30.19 85.79
Ours 13.85 4.48 202.02 5.37 5.53 6.31 5.11 23.63 38.12
Hard level
MGM-TCVOM 22.16 31.89 271.27 11.80 7.65 7.91 7.27 31.00 47.82
MGM⋆-TCVOM 22.59 36.01 264.31 13.03 7.75 7.86 7.32 32.75 37.83
InstMatt 27.24 58.23 275.07 14.40 9.23 7.88 8.02 31.89 47.19
SparseMat 24.83 32.26 312.22 15.87 8.53 8.47 8.19 36.92 55.98
Ours 21.23 28.49 252.87 11.53 7.24 7.08 6.89 29.90 42.98
9.3. Quantitative details
Our ablation study, detailed in Table 13, focuses on various temporal consistency components. The results demonstrate that our proposed combination of Bi-Conv-GRU and forward-backward fusion achieves the best overall performance among the configurations. Additionally, Table 14 compares our model's performance against previous baselines using various error metrics. Our model consistently achieves the lowest error rates in almost all metrics.
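For reference, the per-frame and temporal metrics reported in these tables can be computed along the lines of the minimal NumPy sketch below. The alpha mattes are assumed to be normalized to [0, 1], and the scaling factors are illustrative rather than the exact constants used in our evaluation scripts.

```python
import numpy as np

def mad(pred, gt):
    # Mean absolute difference between predicted and ground-truth alpha mattes.
    return np.abs(pred - gt).mean() * 1e3       # reported x1e3 (illustrative scale)

def mse(pred, gt):
    # Mean squared error between predicted and ground-truth alpha mattes.
    return ((pred - gt) ** 2).mean() * 1e3      # reported x1e3 (illustrative scale)

def dtssd(pred, gt):
    # Temporal coherence error: compares frame-to-frame changes of the prediction
    # with those of the ground truth. pred, gt: (T, H, W) arrays in [0, 1].
    d_pred = pred[1:] - pred[:-1]
    d_gt = gt[1:] - gt[:-1]
    per_frame = np.sqrt(((d_pred - d_gt) ** 2).mean(axis=(1, 2)))
    return per_frame.mean() * 1e2               # illustrative scale
```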
An illustrative comparison of the impact of different temporal modules is presented in Fig. 18. The addition of Conv-GRU significantly reduces noise, although some residual noise remains. Implementing the forward fusion Âf enhances temporal consistency but also propagates errors from previous frames. This issue is effectively addressed by integrating Âb, which balances and corrects these errors, improving overall performance.
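To make the role of the two fusion directions concrete, the following is a conceptual NumPy sketch of forward-backward matte fusion; it is not the exact module in our framework, and the stability threshold `thresh` is a hypothetical parameter. Each direction reuses the propagated value only where consecutive predictions are stable, and averaging the two directions prevents an error introduced early in the clip from propagating all the way to the end.

```python
import numpy as np

def fuse_forward_backward(alpha, thresh=0.05):
    # alpha: per-frame alpha predictions of shape (T, H, W), values in [0, 1].
    T = alpha.shape[0]

    # Forward pass: where frame t barely changes w.r.t. frame t-1,
    # reuse the already-fused value from t-1 to suppress flicker.
    fwd = alpha.copy()
    for t in range(1, T):
        stable = np.abs(alpha[t] - alpha[t - 1]) < thresh
        fwd[t][stable] = fwd[t - 1][stable]

    # Backward pass: the same idea applied from the opposite direction.
    bwd = alpha.copy()
    for t in range(T - 2, -1, -1):
        stable = np.abs(alpha[t] - alpha[t + 1]) < thresh
        bwd[t][stable] = bwd[t + 1][stable]

    # Averaging the two directions balances errors propagated from either end.
    return 0.5 * (fwd + bwd)
```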
In an additional experiment, we evaluated trimap-propagation matting models (OTVM [45], FTP-VM [17]), which typically receive a trimap for the first frame and propagate it through the remaining frames. To make a fair comparison with our approach, which utilizes instance masks for each frame, we integrated our model with these trimap-propagation models: their trimap predictions were binarized and used as the mask input for our model. The results, shown in Table 15, indicate a significant improvement in accuracy when our model is used in place of the original matte decoder of the trimap-propagation models. This experiment underscores the flexibility and robustness of our proposed framework, which is capable of handling various mask qualities and mask-generation methods.
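As an illustration of this integration, a propagated trimap can be converted into the binary guidance mask our model expects roughly as follows. The trimap encoding and the choice of merging the unknown band into the foreground are assumptions for this sketch, not the exact rule used by OTVM or FTP-VM.

```python
import numpy as np

def trimap_to_binary_mask(trimap):
    # trimap: uint8 array with 0 = background, 128 = unknown, 255 = foreground
    # (assumed encoding; the trimap-propagation models may use a different one).
    # Unknown + foreground are treated as the instance region so that thin
    # boundary details are not cut out of the guidance mask.
    return (trimap >= 128).astype(np.uint8)
```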
Table 15. Our framework also reduces the errors of trimap-propagation baselines. When those models' matte decoders are replaced with ours, the errors in all metrics drop by a large margin. Gray rows denote modules using public weights without retraining on our V-HIM2K5 dataset.
Trimap prediction Matte decoder MAD MADf MADu MSE SAD Grad Conn dtSSD MESSDdt
Easy level
OTVM OTVM 204.59 6.65 208.06 192.00 76.90 15.25 76.36 46.58 397.59
OTVM OTVM 36.56 299.66 382.45 29.08 14.16 6.62 14.01 24.86 69.26
OTVM Ours 31.00 260.25 326.53 24.58 12.15 5.76 11.94 22.43 55.19
FTP-VM FTP-VM 12.69 9.13 233.71 5.37 4.66 6.03 4.27 19.83 18.77
FTP-VM FTP-VM 13.69 24.54 269.88 6.12 5.07 6.69 4.78 20.51 22.54
FTP-VM Ours 9.03 4.77 194.14 3.07 3.31 3.94 3.08 16.41 15.01
Medium level
OTVM OTVM 247.97 14.20 345.86 230.91 98.51 21.02 97.74 66.09 587.47
OTVM OTVM 48.59 275.62 416.63 37.29 17.25 10.19 17.03 36.06 80.38
OTVM Ours 36.84 209.77 333.61 27.52 13.04 8.63 12.69 32.95 70.84
FTP-VM FTP-VM 40.46 32.59 287.53 28.14 15.80 12.18 15.13 32.96 125.73
FTP-VM FTP-VM 26.86 28.73 318.43 15.57 10.52 12.39 9.95 32.64 126.14
FTP-VM Ours 18.34 11.02 234.39 9.39 6.97 6.83 6.59 26.39 50.31
Hard level
OTVM OTVM 412.41 231.38 777.06 389.68 146.76 29.97 146.11 90.15 764.36
OTVM OTVM 140.96 1243.20 903.79 126.29 47.98 17.60 47.84 59.66 298.46
OTVM Ours 123.01 1083.71 746.38 111.16 41.52 16.41 41.24 55.78 257.28
FTP-VM FTP-VM 46.77 66.52 399.55 33.72 16.33 14.40 15.82 45.04 76.48
FTP-VM FTP-VM 48.11 95.17 459.16 35.56 16.51 14.87 16.12 45.29 78.66
FTP-VM Ours 30.12 62.55 326.61 19.13 10.37 8.61 10.07 36.81 66.49
(Figure 18 layout: each example shows the image + mask and the groundtruth, followed, for each variant, by the error maps log |A(t) − Â(t)| and log |A(t+1) − Â(t+1)| and the temporal difference log |Â(t) − Â(t+1)|. The prediction columns correspond, from left to right, to: no temporal module, Single Conv-GRU, Bidirectional Conv-GRU, Bidirectional Conv-GRU + Âf fusion, and Bidirectional Conv-GRU + Âf + Âb fusion.)
Figure 18. The effectiveness of different temporal components on the medium level of V-HIM60. Conv-GRU improves the result somewhat, but not perfectly. Our proposed fusion strategy improves the output in both foreground and background regions. The layout above denotes the temporal components used in each column. Red and blue arrows indicate the errors and improvements in comparison with the result without any temporal module. We also visualize the error to the groundtruth, log |A − Â|, and the difference between consecutive predictions, log |Â(t) − Â(t+1)|. A color-coded map (min-max range) illustrates the differences between consecutive frames. (Best viewed in color and digital zoom).
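The error and temporal-difference maps shown in Fig. 18 and Fig. 19 amount to a simple per-pixel computation, sketched below. The small epsilon added before the logarithm is our assumption for numerical stability, and the colormap applied afterwards is arbitrary.

```python
import numpy as np

def log_error_map(gt, pred, eps=1e-4):
    # Per-pixel log absolute error to the groundtruth: log |A - A_hat|.
    return np.log(np.abs(gt - pred) + eps)

def log_temporal_diff(pred_t, pred_t1, eps=1e-4):
    # Per-pixel log difference between consecutive predictions:
    # log |A_hat(t) - A_hat(t+1)|.
    return np.log(np.abs(pred_t1 - pred_t) + eps)
```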
(Figure 19 layout: for each frame, the input image with its mask is followed by the extracted foreground Â and the temporal difference log ∆Â for InstMatt, SparseMat, MGM-TCVOM, MGM⋆-TCVOM, and Ours.)
Figure 19. Highlighted detail and consistency on natural video outputs. To watch the full videos, please check our website. For each model, we present the extracted foreground and the difference to the previous frame's output. A color-coded map (min-max range) illustrates the differences between consecutive frames. Red arrows indicate the zoom-in region shown in the red square. (Best viewed in color and digital zoom).