Sliding Window Polar Codes
Valerio Bioglio, Carlo Condo
arXiv:2004.07767v1 [cs.IT] 16 Apr 2020
Mathematical and Algorithmic Sciences Lab
Huawei Technologies France SASU
Email: valerio.bioglio, [email protected]
Abstract—We propose a novel coupling technique for the design of polar codes of length N, making them decodable through a sliding window of size M < N. This feature allows the computational complexity of the decoder to be reduced, an important possibility in wireless communication downlink scenarios. Our approach is based on the design of an ad-hoc kernel to be inserted in a multi-kernel polar code framework; this structure enables the sliding window decoding of the code. Simulation results show that the proposed sliding window polar codes outperform independent block transmission in the proposed scenario, at the cost of a negligible decoding overhead.
Fig. 1: Wireless communication downlink scenario.
Index Terms—Polar Codes, window decoding, information
coupling
I. INTRODUCTION
Polar codes [1] are linear block codes relying on the channel polarization effect, which creates virtual bit-channels of
different reliability. In a polar code of length N and dimension
K, the information is transmitted through the K most reliable
bit-channels, while the remaining bit-channels are “frozen” to
a predetermined value, usually zero. Originally, polar codes were based on the polarization kernel
$$T_2 = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix},$$
which limited the code length N to be a power of 2. This limitation
has been overcome thanks to the discovery of the polarization properties of larger kernels, among which those providing the best polarization have been identified [2]. In
parallel, multi-kernel polar codes [3] have been proposed
to create polar codes of virtually any length with efficient
implementations and fast decoding [4], [5].
In this work, we focus on the communication between
a transmitter and a receiver having different computational
capabilities, namely when the receiver is less powerful than
the transmitter, e.g. in the wireless communication downlink
scenario depicted in Figure 1. In our model, the transmitter
is able to create polar codewords of length N , while the
receiver can handle only polar codewords of length M < N.
The straightforward solution for this problem is to divide the
information into S = N/M blocks and transmit each block separately on a different codeword of length M. However, independent transmissions increase the block error rate (BLER) of
the system, since transmitted information is recovered only if
all the S codewords are decoded correctly; a single error in
one of the transmissions results in an overall decoding failure.
In this paper, we propose a novel polar code design for
codes of length N which can be decoded using a sliding window framework, i.e. considering M < N received symbols
per decoding step, and using the result of one step to facilitate
the next. Recent works have already shown the benefits of
inserting memory in the polar code encoding and decoding
process [6], [7]. Moreover, windowed decoding has been
proposed for the decoding of spatially coupled codes [8], such as convolutional LDPC codes [9]; a similar approach has been
proposed in [10] to construct partially information coupled
polar codes. Our approach differs from the aforementioned
techniques, as it is based on the definition of an ad-hoc
kernel enabling the sliding window decoding of a particular
multi-kernel polar code. Accordingly, the frozen set needs
to be designed on the basis of the resulting transformation
matrix. Finally, we show through both theoretical analysis
and simulations that the proposed code design is able to
outperform independent block transmission in the proposed
scenario with negligible decoding overhead.
II. BACKGROUND
In this section, we review the basic concepts of polar code design and decoding that will be useful for the description and analysis of the proposed sliding window polar codes.
A. Basic channel transform
The basic combination of two independent binary discrete
memoryless channels (IB-DMCs) W_0 and W_1, defined over the same alphabets as W_i : X → Y with X = {0, 1}, is depicted in Figure 2 and performed by the channel transformation matrix T_2. The transition probabilities of the split channels W^- and W^+ are given by
$$W^-(y_0, y_1 | u_0) = \frac{1}{2} \sum_{u_1 \in \mathcal{X}} W_0(y_0 | u_0 \oplus u_1)\, W_1(y_1 | u_1), \quad (1)$$
$$W^+(y_0, y_1, u_0 | u_1) = \frac{1}{2} W_0(y_0 | u_0 \oplus u_1)\, W_1(y_1 | u_1). \quad (2)$$
Fig. 2: Basic combination of two IB-DMC channels.
The one-step transformation defined by T_2 can be rewritten using the operators g and f introduced in [11] to degrade and enhance the channel parameters as
$$W^- = W_0 \, g \, W_1, \quad (3)$$
$$W^+ = W_0 \, f \, W_1. \quad (4)$$
The Bhattacharyya parameter of channel W_i is defined as
$$Z(W_i) = \sum_{y \in \mathcal{Y}} \sqrt{W_i(y|0)\, W_i(y|1)} \quad (5)$$
and can be used to evaluate the channel reliability, since it provides an upper bound on the probability that an error occurs under ML decoding [1]. The Bhattacharyya parameters of the transformed channels can be calculated on the basis of those of the original channels as
$$Z(W_0 \, g \, W_1) \le 1 - (1 - Z(W_0)) \cdot (1 - Z(W_1)), \quad (6)$$
$$Z(W_0 \, f \, W_1) = Z(W_0)\, Z(W_1). \quad (7)$$
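As an illustration of how (6) and (7) can drive a practical construction, the following sketch (our own, not taken from the paper; names such as `bhattacharyya_bounds` are our choices) tracks the Bhattacharyya upper bounds of the bit-channels of a length-2^n code built from T_2 and derives a frozen set. Over a BEC the bound (6) holds with equality, so the example reproduces the classical (8, 4) frozen set.

```python
def bhattacharyya_bounds(n, z):
    """Upper bounds on Z for the 2^n bit-channels of T_2^{(x)n},
    obtained by applying (6) (degraded split) and (7) (enhanced split)
    at every polarization stage, starting from Z(W) = z."""
    zs = [z]
    for _ in range(n):
        nxt = []
        for zi in zs:
            nxt.append(1 - (1 - zi) ** 2)   # bound (6) with W_0 = W_1
            nxt.append(zi ** 2)             # equality (7) with W_0 = W_1
        zs = nxt
    return zs                                # zs[i] bounds Z of bit-channel i

def frozen_set(n, k, z):
    """Indices of the 2^n - k least reliable bit-channels."""
    zs = bhattacharyya_bounds(n, z)
    order = sorted(range(len(zs)), key=lambda i: zs[i], reverse=True)
    return sorted(order[: len(zs) - k])

# Example: an (8, 4) code designed over a BEC with erasure probability 0.5
print(frozen_set(3, 4, 0.5))   # [0, 1, 2, 4]
```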
B. Polar codes
Polar codes rely on the channel polarization induced by the n-fold Kronecker product of the basic channel transformation kernel T_2; for a polar code of length N = 2^n, the channel transformation matrix is defined as T_N = T_2^{⊗n}. As the code length goes to infinity, bit-channels become either completely noisy or completely noiseless, and the fraction of noiseless bit-channels approaches the channel capacity. For finite code lengths, however, the polarization of the bit-channels is incomplete, leaving bit-channels that are only partially noisy. The Bhattacharyya parameters of these intermediate bit-channels can be tracked throughout the polarization stages to estimate their reliabilities. For an (N, K) polar code, the indices of the K most reliable bit-channels are selected to form the information set I. The input vector u = [u_0, u_1, ..., u_{N-1}] is then created by assigning the K message bits to the entries of u whose indices are listed in I; the remaining entries of u, forming the frozen set F, are set to zero. The codeword x is finally calculated as x = u · T_N.
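For concreteness, the product x = u · T_N can be realized with the usual in-place butterfly; the sketch below is our own illustration (not from the paper) and is reused in later examples.

```python
def polar_encode(u):
    """In-place application of T_2^{(x)n} to a bit vector u (length 2^n).

    Each stage XORs pairs of positions that are `half` apart, which is
    exactly the Kronecker-power structure of T_N = T_2^{(x)n} over GF(2)."""
    x = list(u)
    n = len(x)
    half = 1
    while half < n:
        for i in range(0, n, 2 * half):
            for j in range(i, i + half):
                x[j] ^= x[j + half]   # upper branch accumulates the lower one
        half *= 2
    return x

# Example: the (8, 4) code of Fig. 3 with message bits on positions {3, 5, 6, 7}
u = [0, 0, 0, 1, 0, 1, 1, 0]
print(polar_encode(u))
```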
C. Successive-Cancellation Decoding
In [1], the native decoding algorithm for polar codes,
called successive cancellation (SC), was proposed as well.
The decoding process is portrayed in Figure 3 as a depth-first binary tree search, with priority given to the left branch.

Fig. 3: SC decoding tree of an (8, 4) polar code. White and black nodes are frozen and information bits, respectively.

Given a node at stage t, let us define as α the set of 2^t likelihoods received from its parent at stage t+1, initialized as the channel information at the root node. The node computes the 2^{t-1} soft values composing α^l, which are then sent to the left child at stage t−1. From the left child, the 2^{t-1} partial sums composing β^l are received, allowing for the calculation of the likelihood vector α^r to be sent to the right child, which returns the partial sum vector β^r. The 2^t-element partial sum vector β can finally be computed and sent to the parent node at stage t+1. At t = 0, the decoded bits û_i are estimated by taking hard decisions on the basis of the received likelihoods.
To improve the error-correction performance of SC at moderate code lengths, SC list (SCL) decoding has been proposed in [12]. It relies on L parallel SC decoders working on different paths. Every time an information bit is estimated, the paths are doubled, with L decoders considering it a 0 and L considering it a 1. A metric is associated to each path, such that the L paths with the largest path metrics are discarded, and the decoding proceeds until the last information bit has been decoded.
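The tree traversal described above condenses into a short recursive routine. The following LLR-based SC decoder is a minimal sketch of ours (the names sc_decode, f_minus and g_plus are not from the paper; the f update uses the min-sum approximation); it returns the estimated input bits together with their re-encoded partial sums.

```python
import numpy as np

def f_minus(a, b):
    # LLR update for the left child (min-sum approximation of the boxplus rule)
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g_plus(a, b, beta):
    # LLR update for the right child, using the partial sums beta of the left child
    return b + (1 - 2 * beta) * a

def sc_decode(llr, frozen):
    """Recursive LLR-based SC decoding (minimal sketch).
    llr    : channel LLRs of the current subtree (numpy array, length 2^t)
    frozen : boolean mask, True where the corresponding input bit is frozen
    Returns (u_hat, beta): estimated input bits and their partial sums u_hat * T."""
    if len(llr) == 1:
        u = 0 if frozen[0] else int(llr[0] < 0)   # hard decision at the leaf
        return np.array([u]), np.array([u])
    half = len(llr) // 2
    a, b = llr[:half], llr[half:]
    u_l, beta_l = sc_decode(f_minus(a, b), frozen[:half])          # left child first
    u_r, beta_r = sc_decode(g_plus(a, b, beta_l), frozen[half:])   # then right child
    beta = np.concatenate([beta_l ^ beta_r, beta_r])               # partial sums to the parent
    return np.concatenate([u_l, u_r]), beta
```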
III. SLIDING WINDOW POLAR CODES
Here we describe how to design, encode and decode a polar code of length N = 2^n and dimension K such that it is decodable through a sliding window of size M = 2^m.
A. Sliding window kernel
The sliding window kernel W_S is the cornerstone of the proposed construction. It is defined by the full binary lower triangular matrix of size S, i.e. the S × S square matrix having ones on and below the diagonal and zeros above the diagonal, as depicted in Figure 4a. The matrix W_S imposes a channel transformation that can be described as a cascade of basic channel transformations, as represented in Figure 4c. This structure allows input bit u_i to be decoded using only two channel likelihoods out of S, namely those of x̃_i and x̃_{i+1}, the former being modified by the hard decisions taken on bits 0 ≤ j < i. We will show how to leverage this property to construct a multi-kernel polar code mixing W_S with the classical polar kernel T_2^{⊗n}, enabling a sliding window decoding mechanism.
If the encoded bits are transmitted over a channel W, the virtual channel W_i experienced by input bit u_i transmitted through the channel transformation matrix W_S undergoes the transformation
$$W_i = W \, g \, (\underbrace{W \, f \, W \, f \, \cdots \, f \, W}_{i+1 \text{ times}}), \quad (8)$$
for 0 ≤ i < S − 1, while for the last channel
$$W_{S-1} = \underbrace{W \, f \, W \, f \, \cdots \, f \, W}_{S \text{ times}}. \quad (9)$$
The Bhattacharyya parameters of these virtual channels, depicted in Figure 4b, can hence be calculated as
$$Z(W_{i-1}) \le \begin{cases} 1 - (1 - Z(W)) \cdot (1 - Z(W)^i) & \text{if } i < S, \\ Z(W)^S & \text{if } i = S. \end{cases} \quad (10)$$

Fig. 4: Kernel W_S description. (a) Matrix representation: the S × S lower triangular matrix with ones on and below the diagonal. (b) Code design: Bhattacharyya parameters Z(W_0), Z(W_1), ..., Z(W_{S−1}) obtained from Z(W). (c) Channel combination: cascade of basic transformations mapping û_0, ..., û_{S−1} to x_0, ..., x_{S−1}.
The Bhattacharyya parameters can be used to evaluate the bit-channel reliabilities for various channel models. For binary erasure channels (BECs), the bit error probabilities can be calculated directly from them, while for additive white Gaussian noise (AWGN) channels the density evolution under Gaussian approximation (DE/GA) technique can be applied on their basis [13]; the latter algorithm estimates the likelihood distributions of the polarized channels by tracking their means at each stage of the SC decoding tree. Given the block representation of kernel W_S depicted in Figure 4c, the bit error probability δ_i of bit u_i under BEC transmission can be calculated as
$$\delta_{i-1} = \begin{cases} 1 - (1 - \delta) \cdot (1 - \delta^i) & \text{if } i < S, \\ \delta^S & \text{if } i = S, \end{cases} \quad (11)$$
where δ is the bit erasure probability of the original BEC.
For the AWGN channel, the equations tracking the likelihood mean μ_i of input bit u_i for W_S are given by
$$\mu_{i-1} = \begin{cases} \phi^{-1}\left(1 - (1 - \phi(\mu)) \cdot (1 - \phi(i\mu))\right) & \text{if } i < S, \\ S\mu & \text{if } i = S, \end{cases} \quad (12)$$
where φ(·) is defined as
$$\phi(x) = \begin{cases} 1 - \dfrac{1}{\sqrt{4\pi x}} \displaystyle\int_{-\infty}^{\infty} \tanh\!\left(\dfrac{z}{2}\right) e^{-\frac{(z-x)^2}{4x}} \, dz & \text{if } x > 0, \\ 1 & \text{if } x = 0, \end{cases} \quad (13)$$
and can be approximated as described in [13].
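As a worked illustration of (11) (our own sketch, assuming a BEC design channel; the function name is our choice), the erasure probabilities of the S virtual channels created by W_S can be computed as follows.

```python
def window_kernel_erasure(delta, S):
    """Erasure probabilities of the S virtual channels created by W_S
    over a BEC with erasure probability delta, following eq. (11).

    deltas[i-1] corresponds to input u_{i-1}: the last input sees the fully
    enhanced channel delta**S, the others a partially degraded channel."""
    deltas = []
    for i in range(1, S + 1):
        if i < S:
            deltas.append(1 - (1 - delta) * (1 - delta ** i))
        else:
            deltas.append(delta ** S)
    return deltas

# Example: S = 4 blocks over a BEC with delta = 0.5
print(window_kernel_erasure(0.5, 4))   # [0.75, 0.625, 0.5625, 0.0625]
```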
B. Code design
As for classical polar codes, the design of a sliding window polar code entails the selection of its transformation matrix T and of its frozen set F. Given S = N/M, the transformation matrix of the code is defined as T = W_S ⊗ T_M, where T_M = T_2^{⊗m} is the transformation matrix of a polar code of length M. A sliding window polar code can hence be described as a particular multi-kernel polar code [3] constructed by placing kernel W_S before the transformation matrix of a classical polar code of length M. Note that the position of W_S with respect to the other kernels is not interchangeable. The Tanner graph of the resulting transformation matrix T is depicted in Figure 5.
The frozen set can be designed according to the general multi-kernel polar code approach [3] using the equations described in Section III-A. Given the structure of T, however, the calculation of the bit-channel reliabilities can be simplified as follows. The Tanner graph depicted in Figure 5 shows that the channel transformation imposed by T can be seen as the juxtaposition of S polar code transformations of length M, each one altering a different channel whose reliability depends on W_S. Given the transmission channel W, it is thus possible to initially calculate the Bhattacharyya parameter of the virtual channel W_s experienced by the s-th polar code P_s defined by T_M using (10); then, classical polar code design equations can be used to evaluate the bit-channel reliabilities of input bits u_{(s−1)M}, u_{(s−1)M+1}, ..., u_{sM−1}. With reference to Figure 5, the Bhattacharyya parameters at the left of each T_M block would be independently calculated using the inputs at their right, which have already undergone one further polarization step through W_S. Finally, all the bit-channels of the input vector u are sorted in order of reliability; the indices of the N − K least reliable bit-channels form the frozen set F, and the indices of the remaining K bit-channels form the information set I of the code.
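Combining the two steps, the following sketch (our own illustration, again assuming a BEC design channel; it reuses window_kernel_erasure from the previous snippet) computes the frozen set of a sliding window polar code by polarizing each block's virtual channel through the m stages of T_M and freezing the N − K least reliable positions.

```python
def sliding_window_frozen_set(N, K, M, delta):
    """Frozen set of a sliding window polar code over a BEC (design sketch).

    The reliability of input u_{(s-1)M + j} is obtained by polarizing the
    virtual channel of block s (eq. (11)) through the m stages of T_M."""
    S, m = N // M, M.bit_length() - 1
    block_deltas = window_kernel_erasure(delta, S)   # from the previous sketch

    reliabilities = []
    for d in block_deltas:
        zs = [d]
        for _ in range(m):                            # standard BEC erasure recursion in T_M
            zs = [t for z in zs for t in (2 * z - z * z, z * z)]
        reliabilities.extend(zs)                      # erasure probability of each u_i

    order = sorted(range(N), key=lambda i: reliabilities[i], reverse=True)
    return sorted(order[: N - K])                     # N - K least reliable indices

# Example: N = 16, M = 4 (S = 4 blocks), K = 8, design erasure probability 0.5
print(sliding_window_frozen_set(16, 8, 4, 0.5))
```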
C. Encoding
The straightforward encoding process for the proposed sliding window polar codes follows the standard polar code technique. The K message bits are inserted into the input vector u according to the previously calculated information set, namely storing their values at the indices listed in I, while the remaining bits of u are set to zero. The codeword x can be calculated as x = u · T, where T is the transformation matrix of the code calculated as described above. Codeword x is then transmitted through the channel.
Fig. 5: Tanner graph of the transformation matrix T of a sliding window polar code.

However, the particular structure of W_S allows for an alternative encoding algorithm based on the previously described set of S polar codes P_1, ..., P_S of size M. The information
set I_s of polar code P_s can be extracted from the global information set I as the set of entries of I comprised between (s−1)·M and s·M−1, for s = 1, ..., S. Similarly, partial input vectors u^(s) for 1 ≤ s ≤ S are extracted from u, or can be created directly on the basis of the message bits. Each partial input vector can be encoded independently through a polar encoder of length M, obtaining S partial codewords t^(1), ..., t^(S). Finally, codeword x is obtained by backward accumulation of the partial codewords, starting from the last one, i.e. x = [t^(1) ⊕ ... ⊕ t^(S), t^(2) ⊕ ... ⊕ t^(S), ..., t^(S−1) ⊕ t^(S), t^(S)]. This encoding strategy, depicted in Figure 6, follows the structure of the sliding window kernel W_S and preserves the classical polar encoding structure, at the cost of a slight increase in encoding latency. In fact, each parallel encoder requires log_2 M steps to encode the length-M polar codes, followed by S steps to obtain x, for a total of log_2 M + S steps, i.e. slightly more than the log_2 N = log_2 M + log_2 S steps required for the encoding of a standard polar code of length N.
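A compact version of this alternative encoder is sketched below (our own illustration, reusing the polar_encode routine sketched in Section II-B); it produces the S partial codewords and performs the backward accumulation dictated by W_S.

```python
def sliding_window_encode(u, M):
    """Encode u (length N = S*M) as S length-M polar codewords followed by
    backward accumulation, i.e. block s of x equals t^(s) xor ... xor t^(S)."""
    S = len(u) // M
    t = [polar_encode(u[s * M:(s + 1) * M]) for s in range(S)]   # partial codewords
    x = []
    acc = [0] * M
    for s in reversed(range(S)):                                  # accumulate from the last block
        acc = [a ^ b for a, b in zip(acc, t[s])]
        x = acc + x                                               # prepend block s
    return x
```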
D. Decoding
The decoding of the proposed sliding window polar code design is performed in S SC decoding steps, each one employing M likelihoods. Each SC decoding step outputs M bits of the estimated input vector û, and takes as input M received symbols, conveniently modified by the M input bits estimated at the previous step. In the following, logarithmic likelihood ratios (LLRs) are used in SC decoding, as proposed in [14].
Algorithm 1 Sliding Window Successive Cancellation
 1: Load information set I
 2: Load channel LLRs y
 3: Initialize buffer l = y^(1)
 4: for s = 1 ... S − 1 do
 5:     I_s = {i − (s−1)·M | i ∈ I, (s−1)·M ≤ i < s·M}
 6:     û^(s) = Decode(l ⊕ y^(s+1), I_s)
 7:     x^(s) = û^(s) · T_M
 8:     l = (−1)^{x^(s)} · l + y^(s+1)
 9: I_S = {i − (S−1)·M | i ∈ I, (S−1)·M ≤ i < S·M}
10: û^(S) = Decode(l, I_S)
return û = [û^(1), ..., û^(S)]

The decoding procedure is detailed in Algorithm 1. Let us suppose the N channel LLRs are stored in vector y. This vector is initially split into S sub-vectors of size M as y = [y^(1) | y^(2) | ... | y^(S)]. The LLR buffer l is initialized with
the first M entries of y, namely fixing l = y^(1). At decoding step s, the decoder performs the SC decoding of the (M, K_s) polar code P_s defined as follows. The information set I_s is calculated as the set of entries of I comprised between (s−1)·M and s·M−1, while the transformation matrix is given by T_M. The estimated input vector û^(s) is obtained through SC decoding of code P_s, using vector v = l ⊕ y^(s+1) as channel LLRs, where y^(s) is the sub-vector of y containing the entries of y from (s−1)·M to s·M−1, i.e. y^(s)(i) = y((s−1)·M + i), and
$$A \oplus B = 2 \tanh^{-1}\!\left(\tanh(A/2) \cdot \tanh(B/2)\right) \quad (14)$$
$$\approx \mathrm{sgn}(A) \cdot \mathrm{sgn}(B) \cdot \min(|A|, |B|). \quad (15)$$
Fig. 6: Encoding scheme of sliding window polar codes.

Fig. 7: Decoding scheme of sliding window polar codes.

The vector x^(s) = û^(s) · T_M is then calculated to be used as a partial sum, namely to update the LLR buffer as l = (−1)^{x^(s)} · l + y^(s+1). The decoding of the last sub input vector û^(S) constitutes an exception, since it is obtained using directly
the content of the LLR buffer l, i.e. imposing v = l. Finally, the input vector estimate û is obtained by juxtaposing the sub input vectors as û = [û^(1), ..., û^(S)]. This decoding strategy can be run on-the-fly during the reception of the channel signal. This procedure is depicted in Figure 7. An SC list decoder [12] for the proposed sliding window polar codes can easily be implemented on the basis of the described decoding algorithm.
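The window loop of Algorithm 1 maps almost directly onto code. The sketch below is our own illustration (it reuses the sc_decode and polar_encode sketches given earlier, and uses the min-sum approximation (15) for the ⊕ of (14)).

```python
import numpy as np

def sliding_window_decode(y, M, frozen_global):
    """SC decoding of a sliding window polar code with window size M.
    y             : list of N channel LLRs
    frozen_global : boolean mask of length N (True = frozen position)."""
    S = len(y) // M
    l = np.array(y[:M], dtype=float)                  # LLR buffer, initialized with y^(1)
    u_hat = []
    for s in range(S):
        if s < S - 1:
            y_next = np.array(y[(s + 1) * M:(s + 2) * M], dtype=float)
            v = np.sign(l) * np.sign(y_next) * np.minimum(np.abs(l), np.abs(y_next))  # (15)
        else:
            v = l                                     # last block uses the buffer directly
        frozen_s = frozen_global[s * M:(s + 1) * M]
        u_s, _ = sc_decode(v, frozen_s)               # SC decoding of the (M, K_s) code
        u_hat.extend(int(b) for b in u_s)
        if s < S - 1:
            x_s = np.array(polar_encode(list(u_s)))   # partial sums x^(s) = u^(s) * T_M
            l = (-1.0) ** x_s * l + y_next            # buffer update, Algorithm 1 line 8
    return u_hat
```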
E. Performance analysis
In this section, we analyze the performance of the proposed scheme in terms of both latency and BLER.
Under AWGN channels, it is possible to exploit the results of the DE/GA algorithm to approximate the BLER of SC decoding of polar codes. Considering binary phase-shift keying (BPSK) modulation, the mean value of the channel LLRs is given by $4 \frac{K}{N} 10^{\gamma/10}$, where γ represents the channel E_b/N_0 in dB. Under the Gaussian assumption,
the bit error probability P_e(u_i) is related to the LLR mean value μ_i as $P_e(u_i) = Q(\sqrt{\mu_i/2})$, where Q(·) denotes the tail probability of the standard Gaussian distribution. The block error probability under SC decoding can hence be approximated by
$$P_e^{SC} \approx \sum_{i \in \mathcal{I}} Q\!\left(\sqrt{\frac{\mu_i}{2}}\right). \quad (16)$$
Equation (16) can be used directly for sliding window polar codes designed under DE/GA. An estimate of the BLER under SC in the case of the independent transmission of S uncorrelated polar codewords of the same length M is given by $P_e^{SC} \approx 1 - (1 - p_e^{SC})^S$, where $p_e^{SC}$ is the expected BLER of a single codeword transmission, calculated according to (16).
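As a small numerical companion to (16) (our own sketch; the argument names mu_info and mu_info_block are our own and are assumed to hold the DE/GA likelihood means of the information bit-channels), both approximations can be evaluated as follows.

```python
from math import erfc, sqrt

def q_func(x):
    # tail probability of the standard Gaussian distribution
    return 0.5 * erfc(x / sqrt(2.0))

def bler_sc(mu_info):
    """Approximate SC BLER of one codeword, eq. (16)."""
    return sum(q_func(sqrt(m / 2.0)) for m in mu_info)

def bler_independent(mu_info_block, S):
    """Approximate BLER of S independently decoded codewords of length M."""
    p = bler_sc(mu_info_block)
    return 1.0 - (1.0 - p) ** S
```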
Fig. 8: Minimum SNR required to reach the target BLER 10^{-3} for codes of length N = 1024 (curves: FULL, SW and IND, for M = 128 and M = 256).

Figure 8 shows the theoretical E_b/N_0 required by each code construction to reach the target BLER of 10^{-3} as a function of the dimension K for different window sizes M. Independent transmission (IND) can be seen as the state of the art in the envisaged scenario: the transmitter divides the K message bits into S = N/M messages of K' = K/S bits, which are encoded and transmitted independently using S polar codes of length M and dimension K'. The transmission is successful if all S blocks are decoded correctly. In the case of full polar code (FULL) transmission, the transmitter ignores the limitations at the receiver and transmits a codeword obtained using the full (N, K) polar code designed according to the best frozen set, which is decoded seamlessly at the receiver.
Finally, in the proposed sliding window decoding (SW) the
transmitter designs and encodes a polar codeword using the
described technique. The receiver uses the proposed sliding
window decoder to decode the received signal. We can see
that the proposed SW technique outperforms IND for all
rates at equal window dimension, with a substantial gain of 1
dB for some of the rates. Moreover, for M = 256 sliding window polar codes show a gap of less than 0.5 dB from
the optimal FULL. Unfortunately, theoretical bounds on the
BLER performance of polar codes under SCL are unknown, so
it is not possible to perform a similar analysis for list decoders.
Regarding the SC decoding latency, for the first S−1 phases the sliding window decoder requires 2M time steps to decode a length-M polar code and update the buffer l, while 2M − 2 time steps are sufficient for the last decoding, leading to a total latency of 2MS − 2 = 2N − 2 time steps. This is equal to the latency of an SC decoder for a non-systematic code of length N under the same assumptions. However, the implementation complexity of the proposed sliding window decoder is that of a polar decoder of length M, plus M memory elements to store the buffer l and the logic to update it. We can thus conservatively identify as a complexity upper bound that of a decoder for a code of length 2M: consequently, if N ≥ 2M, the proposed decoder always yields a lower implementation cost than a length-N decoder. A similar analysis can be done for SCL decoding, obtaining the same outcome.
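As an illustrative example (ours, using the parameters of Figure 9): for N = 1024 and M = 128 we have S = 8, so the sliding window decoder spends 2·128 = 256 time steps on each of the first 7 windows and 254 on the last, i.e. 2·1024 − 2 = 2046 steps in total, the same as a full length-1024 SC decoder, while its implementation cost is bounded by that of a length-256 decoder.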
IV. SIMULATION RESULTS
Fig. 9: N = 1024, M = 128. Fig. 10: N = 8192, M = 1024.
Fig. 11: BLER comparison of different polar codes for rate K/N = 1/4.

In this section we present the BLER performance of the proposed sliding window design and decoding of polar codes,
compared to state-of-the-art independent block transmissions
and optimal full polar code transmission. We study a scenario
where the transmitter sends K bits to the receiver at a
code rate R = K/N, but the receiver has limited decoding capabilities and can handle only M < N bits.
Figure 11 shows the performance of the IND, FULL and SW strategies for codes of rate R = K/N = 1/4. The FULL case is used as a benchmark of the best achievable BLER. The codes are decoded using either the SC or the SCL algorithm with list size L = 8. Black curves correspond to the
SC bound calculated according to (16), while black markers
depict simulation results obtained through SC decoding. We
can see that the bounds perfectly match the simulations, hence
they can be considered as reliable approximations of the SC
decoding curves for the proposed code lengths. Red curves
with red markers correspond to SCL decoding simulations.
The proposed solution outperforms IND under both SC and
SCL decoding, while approaching the optimal performance
represented by FULL. The gain provided by SW with respect to IND is significant, reaching 1.5 dB.
V. CONCLUSION
In this work, we presented a novel multi-kernel construction
that allows a sliding window decoding approach, thanks to
which only a fraction of the codeword bits need to be
received before bits can be decoded. This feature is extremely
useful in scenarios where the receiver has lower computational
capabilities than the transmitter, e.g. downlink in wireless
communications. For this reason, we envisage the application
of the proposed technique in future wireless 5G+ networks,
where multiple devices with different computational capabilities need to be served concurrently. The proposed construction and decoding outperform the state-of-the-art solution with little additional complexity, and can approach the best achievable performance with lower complexity and no cost in terms of latency.
REFERENCES
[1] E. Arıkan, “Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels,” IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051–3073, July 2009.
[2] A. Fazeli and A. Vardy, “On the scaling exponent of binary polarization
kernels,” in 2014 52nd Annual Allerton Conference on Communication,
Control, and Computing (Allerton), Sep. 2014, pp. 797–804.
[3] F. Gabry, V. Bioglio, I. Land, and J.-C. Belfiore, “Multi-kernel
construction of polar codes,” in IEEE International Conference on
Communications (ICC), Paris, France, May 2017.
[4] G. Coppolino, C. Condo, G. Masera, and W. J. Gross, “A multi-kernel
multi-code polar decoder architecture,” IEEE Transactions on Circuits
and Systems I: Regular Papers, vol. 65, no. 12, pp. 4413–4422, Dec
2018.
[5] Adam Cavatassi, Thibaud Tonnellier, and Warren J. Gross, “Fast
decoding of multi-kernel polar codes,” CoRR, vol. abs/1902.01922,
2019.
[6] Andrew James Ferris, Christoph Hirche, and David Poulin, “Convolutional polar codes,” CoRR, vol. abs/1704.00715, 2017.
[7] H. Zheng, S. A. Hashemi, B. Chen, Z. Cao, and A. M. J. Koonen, “Interframe polar coding with dynamic frozen bits,” IEEE Communications
Letters, vol. 23, no. 9, pp. 1462–1465, Sep. 2019.
[8] A. R. Iyengar, P. H. Siegel, R. L. Urbanke, and J. K. Wolf, “Windowed
decoding of spatially coupled codes,” IEEE transactions on Information
Theory, vol. 59, no. 4, pp. 2277–2292, 2012.
[9] A. J. Felstrom and K. S. Zigangirov, “Time-varying periodic convolutional codes with low-density parity-check matrix,” IEEE Transactions
on Information Theory, vol. 45, no. 6, pp. 2181–2191, 1999.
[10] X. Wu, L. Yang, Y. Xie, and J. Yuan, “Partially information coupled polar codes,” IEEE Access, vol. 6, pp. 63689–63702, Sept. 2018.
[11] S. B. Korada, Polar codes for channel and source coding, PhD Thesis,
2009.
[12] I. Tal and A. Vardy, “List decoding of polar codes,” IEEE Transactions
on Information Theory, vol. 61, no. 5, pp. 2213–2226, May 2015.
[13] H. Vangala, E. Viterbo, and Y. Hong, “A comparative study of polar code constructions for the AWGN channel,” arXiv preprint arXiv:1501.02473, 2015.
[14] A. Balatsoukas-Stimming, M. Bastani Parizi, and A. Burg, “LLR-based
successive cancellation list decoding of polar codes,” IEEE Transactions
on Signal Processing, vol. 63, no. 19, pp. 5165–5179, October 2015.