01 - Semantic Segmentation


Computer Vision; Image Transformation;

Semantic Segmentation YouTube Playlist

Maziar Raissi

Assistant Professor

Department of Applied Mathematics

University of Colorado Boulder

[email protected]
Fully Convolutional Networks for Semantic Segmentation
YouTube Playlist

Upsampling with factor f is convolution with a fractional input stride of 1/f: backwards convolution (deconvolution) with an output stride of f. Reverse the forward and backward passes of convolution.

Global information resolves what; local information resolves where.

Evaluation Metrics

n_ij → number of pixels of class i predicted to belong to class j
n_cl → number of different classes
t_i = Σ_j n_ij → total number of pixels of class i
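These counts give the four metrics reported in the FCN paper (pixel accuracy, mean accuracy, mean IU, frequency weighted IU). A minimal sketch, with the confusion-matrix layout n[i, j] following the definitions above:

```python
import numpy as np

def segmentation_metrics(n):
    """Metrics from the FCN paper, given n[i, j] = number of pixels of
    class i predicted to belong to class j (an n_cl x n_cl matrix)."""
    n = np.asarray(n, dtype=float)
    t = n.sum(axis=1)                       # t_i = sum_j n_ij
    pixel_acc = np.diag(n).sum() / t.sum()  # sum_i n_ii / sum_i t_i
    mean_acc = np.mean(np.diag(n) / t)
    # intersection over union per class: n_ii / (t_i + sum_j n_ji - n_ii)
    iu = np.diag(n) / (t + n.sum(axis=0) - np.diag(n))
    mean_iu = np.mean(iu)
    freq_weighted_iu = (t * iu).sum() / t.sum()
    return pixel_acc, mean_acc, mean_iu, freq_weighted_iu
```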
The fully connected layers can also be
viewed as convolutions with kernels that
cover their entire input regions.
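FCN initializes its deconvolution layers with bilinear interpolation kernels, which realize exactly the factor-f upsampling described above. A sketch of such a kernel (the 2f × 2f size is one common choice, not the only one):

```python
import numpy as np

def bilinear_kernel(f):
    """2f x 2f bilinear interpolation kernel for upsampling by factor f,
    usable as the initialization of a deconvolution (backwards
    convolution) layer with output stride f."""
    size = 2 * f
    center = (size - 1) / 2.0
    og = np.arange(size)
    k1d = 1 - np.abs(og - center) / f      # 1-D triangular profile
    return np.outer(k1d, k1d)              # separable 2-D kernel
```

The kernel sums to f², so a constant image stays constant after factor-f upsampling.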

Long, Jonathan, Evan Shelhamer, and Trevor Darrell. "Fully convolutional networks for semantic segmentation." Proceedings of the IEEE conference on computer
vision and pattern recognition. 2015.
Learning Deconvolution Network
for Semantic Segmentation YouTube Video

Pre-defined fixed-size receptive field!

The detailed structures of an object are often lost or smoothed because the label map, input to the deconvolutional layer, is too coarse and the deconvolution procedure is overly simple.
Class conditional probability map (bicycle)

Instance-wise prediction!
g_i ∈ R^{W×H×C} → output score maps of the i-th proposal
G_i → zero padded outside g_i
Batch Normalization
Pixel-wise class score map (before softmax)
Two-stage Training: 1) ground-truth bounding boxes and 2) object proposals (≥ 0.5 in IoU)
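A minimal sketch of the instance-wise aggregation: each g_i is zero padded to the full image (G_i) and the maps are fused with a pixel-wise maximum. The (y0, x0) box layout and the function names are illustrative assumptions:

```python
import numpy as np

def aggregate_proposals(score_maps, boxes, H, W, C):
    """Fuse per-proposal score maps g_i into one pixel-wise class score
    map. boxes[i] = (y0, x0) is the assumed top-left corner of the
    i-th proposal inside the H x W image."""
    P = np.zeros((C, H, W))
    for g, (y0, x0) in zip(score_maps, boxes):
        c, h, w = g.shape
        G = np.zeros((C, H, W))                 # G_i: zero padded outside g_i
        G[:, y0:y0 + h, x0:x0 + w] = g
        P = np.maximum(P, G)                    # pixel-wise max aggregation
    return P
```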
Noh, Hyeonwoo, Seunghoon Hong, and Bohyung Han. "Learning deconvolution network for semantic segmentation." Proceedings of the IEEE international
conference on computer vision. 2015.
U-Net: Convolutional Networks for
Biomedical Image Segmentation YouTube Playlist
L = Σ_{(x,y)∈Ω} w(x,y) log p_{ℓ(x,y)}(x,y)

ℓ : Ω → {1, . . . , K}, (x,y) ↦ ℓ(x,y) → true label of each pixel

p_k(x,y) = exp(a_k(x,y)) / Σ_{k'=1}^{K} exp(a_{k'}(x,y))

w(x,y) = w_c(x,y) + w_0 · exp(−(d_1(x,y) + d_2(x,y))² / (2σ²))

w_c(x,y) → weight map to balance class frequencies
w_0 = 10 & σ ≈ 5 pixels
d_1(x,y) → distance to the border of the nearest cell
d_2(x,y) → distance to the border of the second nearest cell
Essential data augmentation: shift and rotation invariance as well as robustness to deformations and gray value variations.

3 × 3 conv: b_{x,y,ℓ} = ReLU( Σ_{i∈{−1,0,1}} Σ_{j∈{−1,0,1}} Σ_{k∈{1,...,K}} w_{i,j,k,ℓ} a_{x+i,y+j,k} + c_ℓ )

2 × 2 max-pooling (stride = 2): b_{x,y,k} = max_{i,j∈{0,1}} a_{2x+i,2y+j,k}

2 × 2 up-conv: b_{2x+i,2y+j,ℓ} = ReLU( Σ_{k∈{1,...,K}} w_{i,j,k,ℓ} a_{x,y,k} + c_ℓ ) for i, j ∈ {0, 1}
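The weight map w(x,y) can be sketched with brute-force distance computations; a real implementation would use scipy.ndimage.distance_transform_edt, and the helper below is illustrative (it assumes at least two labeled cells):

```python
import numpy as np

def unet_weight_map(labels, wc, w0=10.0, sigma=5.0):
    """w(x,y) = wc(x,y) + w0 * exp(-(d1 + d2)^2 / (2 sigma^2)) from the
    U-Net paper. labels holds instance ids (0 = background); wc is the
    class-balancing weight map."""
    H, W = labels.shape
    ys, xs = np.mgrid[0:H, 0:W]
    dists = []
    for i in np.unique(labels):
        if i == 0:
            continue
        cy, cx = np.nonzero(labels == i)
        # distance of every pixel to the nearest pixel of cell i
        # (equals the border distance for pixels outside the cell)
        d = np.sqrt((ys[..., None] - cy) ** 2 + (xs[..., None] - cx) ** 2).min(-1)
        dists.append(d)
    d = np.sort(np.stack(dists), axis=0)    # d[0] = d1, d[1] = d2
    return wc + w0 * np.exp(-((d[0] + d[1]) ** 2) / (2 * sigma ** 2))
```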
Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. "U-net: Convolutional networks for biomedical image segmentation." International Conference on Medical
image computing and computer-assisted intervention. Springer, Cham, 2015.
DeepLab: Semantic Image Segmentation with Deep Convolutional
Nets, Atrous Convolution, and Fully Connected CRFs YouTube Playlist

Three challenges in the application of DCNNs to semantic image segmentation: (1) reduced feature resolution, (2) existence of objects at multiple scales, and (3) reduced localization accuracy due to DCNN invariance.

Atrous Spatial Pyramid Pooling (ASPP); Cityscapes

Atrous Convolution
Reduce the degree of signal downsampling due to max-pooling and striding (from 32x down to 8x).
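A 1-D sketch of atrous convolution, using the "holes" view in which filter taps are applied to inputs spaced `rate` apart:

```python
import numpy as np

def atrous_conv1d(x, k, rate):
    """1-D atrous (dilated) convolution: taps of filter k are applied to
    inputs spaced `rate` apart, enlarging the field of view without extra
    parameters or downsampling ('valid' output, correlation form)."""
    n, m = len(x), len(k)
    span = (m - 1) * rate + 1              # effective filter size
    return np.array([
        sum(x[p + i * rate] * k[i] for i in range(m))
        for p in range(n - span + 1)
    ])
```

With rate = 1 this reduces to an ordinary convolution; larger rates widen the receptive field at the same cost.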

Fully-Connected Conditional Random Fields (CRF)

(Results on PASCAL VOC with VGG-16 and ResNet-101 backbones.)
x → label assignment for pixels
P(x_i) → label assignment probability at pixel i (DCNN)
p_i → pixel position
I_i → RGB color
μ(x_i, x_j) = 1 iff x_i ≠ x_j
rate parameter; energy function
Same number of parameters and amount of computation
Chen, Liang-Chieh, et al. "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs." IEEE transactions on
pattern analysis and machine intelligence 40.4 (2017): 834-848.
Conditional Random Fields as
Recurrent Neural Networks YouTube Playlist
E(x) = Σ_i ψ_u(x_i) + Σ_{i<j} ψ_p(x_i, x_j)

E(x) → energy of a label assignment x
ψ_u(x_i) → unary energy components: the inverse likelihood (i.e., cost) of pixel i taking label x_i (obtained from a CNN)
ψ_p(x_i, x_j) → pairwise energy components: the cost of assigning labels x_i, x_j to pixels i, j simultaneously; they encourage assigning similar labels to pixels with similar properties (Gaussian kernels applied on feature vectors derived from image features such as spatial location and RGB values)
Permutohedral lattice implementation (O(N) time).

Conditional Random Fields (CRFs) model pixel labels as random variables that form a Markov Random Field (MRF) when conditioned upon a global observation (i.e., the image).

X_i → random variable associated to pixel i (represents the label assigned to pixel i)
X = (X_1, . . . , X_N) → vector of random variables
N → number of pixels in the image
L = {l_1, . . . , l_L} → set of labels
I → global observation (image)
E(x) → energy of the configuration x ∈ L^N (conditioning on I is dropped for convenience)
μ → label compatibility
argmin_x E(x) → most probable label assignment

P(X = x | I) = (1/Z(I)) exp(−E(x | I))
Z(I) → partition function

Mean-field approximation to the CRF distribution (approximate maximum posterior marginal inference):
P(x) ≈ Q(x) = Π_i Q_i(x_i)

Fully Connected Pairwise CRF → CRF parameters
CRF-RNN: 5 iterations in training and 10 in testing!
Zheng, Shuai, et al. "Conditional random fields as recurrent neural networks." Proceedings of the IEEE international conference on computer vision. 2015.
Multi-scale Context Aggregation by Dilated Convolutions
YouTube Playlist

F : Z² → R → discrete function
Ω_r = [−r, r]² ∩ Z²
k : Ω_r → R → discrete filter of size (2r + 1)²

(F ∗ k)(p) = Σ_{t∈Ω_r} F(p − t) k(t) → discrete convolution operator

(F ∗_ℓ k)(p) = Σ_{t∈Ω_r} F(p − ℓt) k(t) → ℓ-dilated convolution

The size of the receptive field of each element in F_{i+1} is (2^{i+2} − 1) × (2^{i+2} − 1).

Context network architecture; ReLU

Algorithme à trous (an algorithm for wavelet decomposition) uses dilated convolutions.

Dilated convolutions support exponentially expanding receptive fields without losing resolution or coverage.

Identity initialization: k^b(t, a) = 1_{[t=0]} · 1_{[a=b]}
a → index of the input feature map
b → index of the output feature map

Identity initialization (Large): k^b(t, a) = 1_{[t=0]} · 1_{[⌊aC/c_i⌋ = ⌊bC/c_{i+1}⌋]}
C → divides both c_i & c_{i+1}

3 × 3 filters

The receptive field of an element p in F_{i+1} is the set of elements in F_0 that modify the value of F_{i+1}(p).

Front End: VGG-16
input: padded images (reflection padding)
output: 64 × 64 × 21 feature maps, C = 21
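The identity initialization k^b(t, a) = 1_{[t=0]} · 1_{[a=b]} amounts to a filter bank whose only nonzero tap is the center of each channel's own filter, so every layer initially passes its input through unchanged. A sketch (assuming equal input and output channel counts):

```python
import numpy as np

def identity_filter(c_out, c_in, size=3):
    """Build the identity initialization k^b(t, a) = 1_[t=0] * 1_[a=b]
    as a (c_out, c_in, size, size) filter bank: output channel b copies
    input channel b via a single center tap."""
    k = np.zeros((c_out, c_in, size, size))
    center = size // 2
    for b in range(c_out):
        k[b, b, center, center] = 1.0
    return k
```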
Yu, Fisher, and Vladlen Koltun. "Multi-scale context aggregation by dilated convolutions." arXiv preprint arXiv:1511.07122 (2015).
SegNet: A Deep Convolutional Encoder-Decoder
Architecture for Image Segmentation YouTube Playlist

CamVid road scenes dataset

SUN RGB-D indoor scenes dataset

Badrinarayanan, Vijay, Alex Kendall, and Roberto Cipolla. "Segnet: A deep convolutional encoder-decoder architecture for image segmentation." IEEE transactions on
pattern analysis and machine intelligence 39.12 (2017): 2481-2495.
Pyramid Scene Parsing Network
YouTube Playlist

Scene parsing issues on the ADE20K dataset:
– Mismatched Relationship
– Confusion Categories
– Inconspicuous Classes

Deep Supervision

PASCAL VOC 2012 data; Cityscapes dataset

The ADE20K dataset contains 150 stuff/object category labels (e.g., wall, sky, and tree) and 1,038 image-level scene descriptors (e.g., airport terminal, bedroom, and street).

Global context information
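Global context is injected through a pyramid pooling module; a minimal numpy sketch, where nearest-neighbour upsampling stands in for the paper's bilinear upsampling with 1 × 1 convolutions:

```python
import numpy as np

def pyramid_pooling(feat, bins=(1, 2, 3, 6)):
    """Average-pool the feature map over several coarse grids, upsample
    back, and concatenate with the input. feat: (C, H, W), with H and W
    assumed divisible by every bin size."""
    C, H, W = feat.shape
    out = [feat]
    for b in bins:
        h, w = H // b, W // b
        pooled = feat.reshape(C, b, h, b, w).mean(axis=(2, 4))   # (C, b, b)
        out.append(pooled.repeat(h, axis=1).repeat(w, axis=2))   # back to (C, H, W)
    return np.concatenate(out, axis=0)    # (C * (1 + len(bins)), H, W)
```

The bin = 1 branch is global average pooling, so every pixel sees a summary of the whole scene.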

Zhao, Hengshuang, et al. "Pyramid scene parsing network." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
Rethinking Atrous Convolution for
Semantic Image Segmentation YouTube Video

Two challenges in applying Deep Convolutional Neural Networks (DCNNs):


– reduced feature resolution
– objects at multiple scales

Chen, Liang-Chieh, et al. "Rethinking atrous convolution for semantic image segmentation." arXiv preprint arXiv:1706.05587 (2017).
What Uncertainties Do We Need in Bayesian
Deep Learning for Computer Vision? YouTube Video

→ marginal probability (cannot be evaluated analytically)
→ a simple distribution approximating the posterior
→ dropout distribution
p → dropout probability
θ → parameters of the simple distribution (weight matrices)
Heteroscedastic Aleatoric Uncertainty (Regression)

−log p(y_i | f^Ŵ(x_i)) ∝ ‖y_i − ŷ_i‖² / (2σ̂_i²) + (1/2) log σ̂_i² → learned loss attenuation
[ŷ_i, σ̂_i] = f^Ŵ(x_i)
σ̂_i² → predictive variance; ŷ_i → predictive mean

Aleatoric Uncertainty: noise inherent in the observations
– homoscedastic: constant for different inputs
– heteroscedastic: depends on the inputs to the model
Epistemic (Model) Uncertainty: can be explained away given enough data

Epistemic Uncertainty in Bayesian Deep Learning
→ prior distribution over the weights of the neural network
→ random output of a Bayesian Neural Network
→ model likelihood
→ dataset
→ posterior over the weights (Bayesian inference)

Heteroscedastic Aleatoric Uncertainty (Classification)
p(y_i | f^Ŵ(x_i)) = y_iᵀ softmax(ŷ_i + σ̂_i ε_i), ε_i ∼ N(0, I)
p = (1/T) Σ_{t=1}^{T} softmax(ŷ_t + σ̂_t ε_t), ε_t ∼ N(0, I) → uncertainty of probability vector p

Each datapoint and each pixel will have its own prediction and uncertainty!
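In practice the regression loss is trained with the network predicting s_i = log σ̂_i², which is numerically stable; a sketch:

```python
import numpy as np

def attenuated_loss(y, y_hat, log_var):
    """Learned loss attenuation for regression (Kendall & Gal): per-point
    loss ||y - y_hat||^2 / (2 sigma^2) + 0.5 * log sigma^2, with the
    network predicting log_var = log sigma^2 to avoid division by zero."""
    return np.mean(0.5 * np.exp(-log_var) * (y - y_hat) ** 2 + 0.5 * log_var)
```

Large residuals push log_var up (attenuating their loss), while the 0.5 * log_var term penalizes claiming high noise everywhere.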
Kendall, Alex, and Yarin Gal. "What uncertainties do we need in bayesian deep learning for computer vision?." arXiv preprint arXiv:1703.04977 (2017).
RefineNet: Multi-Path Refinement Networks
for High-Resolution Semantic Segmentation YouTube Video

object parsing (left) and semantic segmentation (right)

suffers from downscaling of the feature maps

computationally expensive to train and quickly reaches memory limits

Effectively combine high-level semantics and low-level features to produce high-resolution segmentation maps.
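The core fusion step, combining a coarse semantic map with a fine low-level map, can be sketched in NumPy. RefineNet's residual conv units and chained residual pooling are omitted; shapes and names are illustrative.

```python
import numpy as np

def fuse(low, high):
    """Multi-resolution fusion: upsample the coarse, high-level map to the
    resolution of the fine, low-level map and sum the two paths."""
    C, H, W = low.shape
    h, w = high.shape[1:]
    # nearest-neighbour upsample of the coarse path
    up = high[:, (np.arange(H) * h) // H][:, :, (np.arange(W) * w) // W]
    return low + up

low = np.zeros((2, 8, 8))    # fine features carrying boundary detail
high = np.ones((2, 4, 4))    # coarse features carrying semantics
print(fuse(low, high).shape)  # (2, 8, 8): output at the fine resolution
```

Chaining this fusion along the encoder's resolution ladder is what lets the decoder recover a high-resolution prediction without re-computing expensive high-resolution features.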
Lin, Guosheng, et al. "Refinenet: Multi-path refinement networks for high-resolution semantic segmentation." Proceedings of the IEEE conference on computer vision
and pattern recognition. 2017.
Encoder-Decoder with Atrous Separable
Convolution for Semantic Image Segmentation YouTube Playlist

PASCAL VOC 2012 test set

DeepLabv3+

Effect of decoder 1×1 convolution

Effect of decoder 3×3 convolution
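A shape-level NumPy sketch of such an encoder-decoder, under stated assumptions: random weights stand in for learned convolutions, a 1×1 projection stands in for the decoder's 3×3 refinement convs, and while the 48-channel reduction, 4× upsampling, and 21 classes follow the paper, everything else is illustrative.

```python
import numpy as np

def conv1x1(x, c_out):
    """A 1x1 convolution: per-pixel channel mixing only (random weights)."""
    C = x.shape[0]
    w = np.random.rand(c_out, C) / C
    return np.einsum('oc,chw->ohw', w, x)

def decoder(encoder_out, low_level):
    """DeepLabv3+-style decoder: project low-level features with a 1x1 conv
    (so they do not outweigh the encoder), upsample the encoder output 4x,
    concatenate, and map to per-pixel class scores."""
    low = conv1x1(low_level, 48)              # reduce low-level channels
    C, h, w = encoder_out.shape
    H, W = low.shape[1:]
    up = encoder_out[:, (np.arange(H) * h) // H][:, :, (np.arange(W) * w) // W]
    cat = np.concatenate([up, low], axis=0)   # fuse the two streams
    return conv1x1(cat, 21)                   # class scores per pixel

out = decoder(np.random.rand(256, 8, 8), np.random.rand(256, 32, 32))
print(out.shape)  # (21, 32, 32)
```

The ablations above ask exactly which of these decoder pieces (the 1×1 reduction, the 3×3 refinement) contribute to the final accuracy.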

Chen, Liang-Chieh, et al. "Encoder-decoder with atrous separable convolution for semantic image segmentation." Proceedings of the European conference on
computer vision (ECCV). 2018.
Dual Attention Network for Scene Segmentation
YouTube Video

Dual Attention Network (DANet)
Datasets: Cityscapes, PASCAL VOC 2012, PASCAL Context, COCO Stuff
Stuff: sky, road, grass, etc.
Objects: person, car, bicycle, etc.

Position Attention Module
A ∈ R^{C×H×W} → local feature
B, C ∈ R^{C×H×W} → after conv on A
B, C ∈ R^{C×N} → reshape (N = HW)
S ∈ R^{N×N} → spatial attention map
S = softmax(Bᵀ C),  s_ji = exp(Bi · Cj) / Σ_{i=1}^{N} exp(Bi · Cj),  Σ_{i=1}^{N} s_ji = 1
D ∈ R^{C×H×W} → after conv on A
D ∈ R^{C×N} → reshape
E ∈ R^{C×H×W},  Ej = α Σ_{i=1}^{N} s_ji Di + Aj

Channel Attention Module
A ∈ R^{C×H×W} → local feature
A ∈ R^{C×N} → reshape
X = softmax(A Aᵀ) ∈ R^{C×C},  x_ji = exp(Ai · Aj) / Σ_{i=1}^{C} exp(Ai · Aj),  Σ_{i=1}^{C} x_ji = 1
Ej = β Σ_{i=1}^{C} x_ji Ai + Aj
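The two modules can be sketched in NumPy. As a simplifying assumption, B, C, and D are taken equal to A here, whereas the paper derives them from A with separate convolutions; α and β are learnable scalars in the paper, fixed here.

```python
import numpy as np

def softmax(z, axis):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_attention(A, alpha=1.0):
    """Spatial attention: every position attends over all N = H*W positions."""
    C, H, W = A.shape
    X = A.reshape(C, H * W)           # C x N
    S = softmax(X.T @ X, axis=0)      # N x N map; column j holds s_ji
    E = alpha * (X @ S) + X           # E_j = alpha * sum_i s_ji D_i + A_j
    return E.reshape(C, H, W)

def channel_attention(A, beta=1.0):
    """Channel attention: every channel attends over all C channels."""
    C, H, W = A.shape
    Xf = A.reshape(C, H * W)
    M = softmax(Xf @ Xf.T, axis=1)    # C x C map; row j holds x_ji
    E = beta * (M @ Xf) + Xf          # E_j = beta * sum_i x_ji A_i + A_j
    return E.reshape(C, H, W)

feat = np.random.rand(4, 5, 5)
print(position_attention(feat).shape, channel_attention(feat).shape)
```

Both modules preserve the feature shape, so their outputs can simply be summed, which is how DANet fuses the two attention paths.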
Fu, Jun, et al. "Dual attention network for scene segmentation." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.
Rethinking Semantic Segmentation from a
Sequence-to-Sequence Perspective with Transformers YouTube Video

Fully-Convolutional Network (FCN) based architectures: limited receptive field! Benefits of adding more layers would diminish rapidly once reaching certain depths.

SEgmentation TRansformer (SETR): each Transformer layer has a global receptive field.

Progressive UPsampling (PUP)


Multi-Level feature Aggregation (MLA)
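The PUP idea, several small upsampling steps instead of one aggressive jump from the patch grid to full resolution, can be sketched in NumPy (the convolution after each step is left as identity here; names are illustrative).

```python
import numpy as np

def up2(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def progressive_upsample(x, steps=4):
    """PUP-style decoding: alternate 2x upsampling with a refinement step
    (a real decoder inserts a conv here) until full resolution is reached."""
    for _ in range(steps):
        x = up2(x)
    return x

feat = np.random.rand(4, 2, 2)           # transformer output on a 2x2 patch grid
print(progressive_upsample(feat).shape)  # (4, 32, 32): 16x total upsampling
```

MLA instead taps features from several transformer depths and aggregates them, analogous to how FCN-style decoders fuse multi-level CNN features.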

Zheng, Sixiao, et al. "Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers." Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition. 2021.
Questions?
YouTube Playlist
