Error Correction for Discrete Tomography
Matthew Ceko¹, Lajos Hajdu², and Rob Tijdeman³

¹ School of Physics and Astronomy, Monash University, Melbourne, Australia
² Institute of Mathematics, University of Debrecen, H-4002 Debrecen, P.O. Box 400, Hungary
³ Mathematical Institute, Leiden University, Postbus 9512, 2300 RA Leiden, The Netherlands

arXiv:2204.12349v1 [math.CO] 26 Apr 2022
Dedicated to the memory of Carla Peri.
Abstract
Discrete tomography focuses on the reconstruction of functions f : A → R from their line
sums in a finite number d of directions, where A is a finite subset of Z2 . Consequently, the
techniques of discrete tomography often find application in areas where only a small number
of projections are available. In 1978 M.B. Katz gave a necessary and sufficient condition
for the uniqueness of the solution. Since then, several reconstruction methods have been
introduced. Recently Pagani and Tijdeman developed a fast method to reconstruct f if it
is uniquely determined. Subsequently Ceko, Pagani and Tijdeman extended the method to
the reconstruction of a function with the same line sums as f in the general case. Up to this
point these methods assume that the line sums are exact. In this paper we investigate the case where a
small number of line sums are incorrect as may happen when discrete tomography is applied
for data storage or transmission. We show how fewer than d/2 errors can be corrected and
that this bound is the best possible.
Keywords: discrete tomography, error correction, Vandermonde determinants, polynomial
algorithm
Mathematics Subject Classification: Primary 94A08 Secondary 15A06
1 Introduction
We consider functions f : A → R where A = {(i, j) ∈ Z2 : 0 ≤ i < m, 0 ≤ j < n} for
given positive integers m, n and R an integral domain, e.g. Z, R or C. We assume that f is
unknown, but that its line sums in a positive number d of directions are given. The line sums
are often referred to as X-rays throughout the literature to highlight the link between discrete
tomography and computed tomography scans. This type of discrete tomography problem has
been widely studied, see e.g. [3,4,6,14,18,22,23,27,29,37]. Discrete tomography may be applied
in, for example, crystallography [16, 36], distributed storage [5, 34], watermarking [1, 38], image
compression [31] and erasure coding [9, 32, 35].
It makes an essential difference whether the line sums are exact or not. If they are exact,
then there is at least one function satisfying the line sums, but there may be infinitely many.
In 1978 Katz [30] gave a necessary and sufficient condition for the uniqueness of the solution.
If the solution is not unique, any two solutions differ by a so-called ghost, a nontrivial function
g : A → R for which all the line sums in the d directions vanish. In 2001 Hajdu and Tijdeman [23]
gave an explicit expression for the ghost of minimal size and showed that every ghost is a linear
combination of shifts of this ghost of minimal size. Their result shows that arbitrary function
values can be given to a certain set of points of A and that thereafter the function values of the
other points of A are uniquely determined by the line sums.
It turns out that proving the existence, and in case of existence uniqueness, of a solution
can be very hard if the range of the function on A is restricted. In 1991 Fishburn, Lagarias,
Reeds and Shepp [16] gave necessary and sufficient conditions for uniqueness of reconstruction
of functions f : A → {1, 2, . . . , N } for some positive integer N > 1. In 1999 and 2000 Gardner,
Gritzmann and Prangenberg [19, 20] showed under very general conditions that proving the
existence or uniqueness of a function f : A → N from its line sums in d directions is NP-complete.
The crux of the NP-results is that the range is not closed under subtraction. If the
range R is an integral domain, so that one can add, subtract and multiply by integers, then Gaussian
elimination provides a polynomial time algorithm.
Suppose A is an m by n grid and the line sums of a function f : A → R in d directions are
known. Recently a method was developed to construct a function g : A → R which has the same
line sums as f, using a number of operations (such as additions and multiplications) linear in
dmn. This development started with four papers of Dulio, Frosini and Pagani [12–15]
in which they proved such results for corner regions of A in case d = 2 or 3. Subsequently
Pagani and Tijdeman [33] proved this for general d. In particular, their approach enables one
to reconstruct f , if f is the only function which satisfies the line sums in the d directions. Finally
Ceko, Pagani and Tijdeman [7] invented an algorithm to construct a function g : A → R in
time linear in dmn such that g has the same line sums as f . By the result of Hajdu and
Tijdeman [23], this yields a parameter representation of all the functions g : A → R which
have the same line sums as f . We think it is unlikely that there exists a general reconstruction
method which requires essentially less than O(dmn) operations, if the solution is unique.
A remaining problem is to find the most likely solution if the line sums contain errors. The
most common cause of inconsistency of line sums is noise. This happens, for example, if the
line sums are approximations of line integrals. Here the assumption is that many line sums may
not be exact, but that for each line sum the difference between measured and actual sum is
small. Many algorithms have been developed to deal with this situation, often approximation
methods which work well in practice but do not guarantee optimality. See for example Parts
2 of the books edited by Herman and Kuba [28, 29] and Batenburg and Sijbers [4]. The new
results by Ceko, Pagani and Tijdeman, [7, 33] make a new treatment possible which guarantees
an optimal approximation: Consider the line sums as linear manifolds in an mn-dimensional
space and compute the point P in that space for which the sum of the squares of the Euclidean
distances of P to these linear manifolds is minimal. Theorem 5.5.1 of [21] provides the standard
tool from linear algebra to compute the vector P such that it has, moreover, minimal Euclidean
length. After having computed P , the method from [7, 33] can be applied to the consistent line
sums corresponding to the linear manifolds through P parallel to the above mentioned linear
manifolds to find the optimal solution in the above sense.
In the present paper we deal with another type of errors, viz. errors which maybe arbitrary
large, but are small in number. For literature in this direction, see e.g. [5, 9, 32, 34]. This type
of error occurs when the range of function values is discrete, in particular, finite. A theoretical
analysis of this type of problems has been given by Alpers and Gritzmann [2]. They showed
that for functions f : A → {0, 1} the Hamming distance between any two solutions with equal
cardinality of the lattice sets is 2(d − 1). They write that the problem of determining how
the individual measurements should be corrected in order to provide consistency of the data
is NP-complete whenever d ≥ 3 but is easy for d ≤ 2. In this paper we consider functions
f : A → R and solutions without the equal cardinality condition and show that the minimal
Hamming distance between two sets of line sums is d. We prove that correction of < d/2 wrong
line sums is possible in polynomial time. Here wrong line sum means that the measured line
sum does not agree with the actual line sum or is unknown. After having corrected the wrong
line sums, we have a consistent set of line sums and the method from [7, 33] can be applied to
find an optimal solution in the above sense.
In this paper we prove Theorem 1, to be stated in the next section, give a pseudocode
algorithm and an example, and prove that the complexity is O(d^4 mn) operations. In order to
be able to reduce the amount of computation at the cost of a weaker result we refine the input
as follows. We assume that the total number of errors is at most F and that the maximal
number of wrong line sums in any direction is at most G. Here F and G can be freely chosen
such that G ≤ F < d/2. We further show that the bound d/2 is the best possible.
In the proof we use the fact that there is redundancy in the information given by the line
sums. For example, if there are no wrong line sums then the sum of the line sums is equal for
each direction. But there are also more complicated dependencies. A first analysis was made by
Van Dalen [11]. The analysis was pursued by Hajdu and Tijdeman [26]. This study (see Lemma
8) is the basis of the present paper. Besides, some properties of Vandermonde determinants
are derived and used. By the dependencies the values of the wrong line sums do not matter.
If no value for some line sum is known, it can be given any value at the start, for example 0.
Of course such a line sum counts as a wrong line sum. The right values of the originally wrong
line sums are computed from the originally correct line sums.
2 The main result
Let d, m, n be positive integers and A = {(i, j) ∈ Z² : 0 ≤ i < m, 0 ≤ j < n}. Let D = {d_h =
(a_h, b_h) : h = 1, 2, . . . , d} be a set of distinct pairs of coprime integers. Let f : A → R be an
unknown function for which the line sums in the direction of (a_h, b_h) are defined by

    L_{h,t} = Σ_{(i,j)∈A, b_h i − a_h j = t} f(i, j)    (1)

for h = 1, 2, . . . , d and for t ∈ Z. Denote for all h and t by L*_{h,t} the corresponding measured line
sum. We call line sums with L*_{h,t} = L_{h,t} correct line sums and those with L*_{h,t} ≠ L_{h,t} wrong line sums.
Suppose that all the line sums in the directions of D are known and that there are less than
d/2 wrong line sums. In this paper we show how the correct line sums can be reconstructed.
Theorem 1. Let d, m, n be positive integers and A = {(i, j) ∈ Z2 : 0 ≤ i < m, 0 ≤ j < n}. Let
D = {(ah , bh ) : h = 1, 2, . . . , d} be a set of distinct pairs of coprime integers with ah ≥ 0 and
bh = 1 if ah = 0. Let f : A → R be an unknown function such that for h = 1, 2, . . . , d the
line sums Lh,t in the direction of (ah , bh ) are given with in total less than d/2 wrong line sums.
Then the correct line sums can be determined.
It is remarkable that the bound depends only on d and is independent of m, n and the
directions themselves. The restriction on the entries of the directions serves to choose one of
the two directions (a, b) and (−a, −b) which provide the same line sums. In Section 3 we show
that the bound d/2 is the best possible. Sections 4-6 contain auxiliary results. Section 7 gives
the proof of Theorem 1. In Section 8 a pseudocode is provided, which details the steps of the
algorithm to find the correct line sums. An example in Section 9 illustrates the algorithm.
Section 10 provides an analysis of the complexity of the algorithm. In the final section we state
some conclusions.
3 An example which shows that Theorem 1 is best possible
If d is even, make d/2 line sums of lines through (0, 0) one larger. If d is odd, do so with (d−1)/2
line sums of lines through (0, 0). Let f ∗ : A → R be equal to f except that f ∗ (0, 0) = f (0, 0)+ 1.
Then, if d is even, d/2 line sums are wrong with respect to f and d/2 line sums are wrong with
respect to f ∗ . If d is odd, then (d − 1)/2 line sums are wrong with respect to f and (d + 1)/2
with respect to f ∗ . Thus in general the line sums cannot be corrected in more than (d − 1)/2
directions.
Example 1. Consider the function f : A → Z with one unknown value indicated by ?.
    2 6 5 4
    3 ? 2 0
    5 1 4 2
    6 3 1 4
Suppose the horizontal line sum through ? is 7, the vertical line sum through ? is 12 and both
diagonal line sums through ? are 13. With d = 4, the horizontal and vertical line sums suggest
that the value of ? is 2, whereas the diagonal line sums indicate that it should be 3. Both values
result in 2 = d/2 wrong line sums.
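The ambiguity in Example 1 can be checked directly; a minimal Python sketch, using the 4×4 array of the example with the unknown ? at position (1, 1) and the measured sums quoted above:

```python
# Verify Example 1: with d = 4 directions, both candidate values of ?
# leave exactly 2 = d/2 wrong line sums, so the error cannot be resolved.
grid = [[2, 6, 5, 4],
        [3, 0, 2, 0],   # the 0 at position (1, 1) is the placeholder for ?
        [5, 1, 4, 2],
        [6, 3, 1, 4]]

# measured line sums through ?: horizontal, vertical, both diagonals
measured = {"horizontal": 7, "vertical": 12, "diagonal": 13, "antidiagonal": 13}

def sums_through(q):
    """Line sums through cell (1, 1) when ? = q."""
    g = [row[:] for row in grid]
    g[1][1] = q
    return {
        "horizontal": sum(g[1]),                      # row 1
        "vertical": sum(g[i][1] for i in range(4)),   # column 1
        "diagonal": sum(g[i][i] for i in range(4)),   # main diagonal
        "antidiagonal": g[0][2] + g[1][1] + g[2][0],  # antidiagonal through (1, 1)
    }

for q in (2, 3):
    wrong = sum(1 for name, v in sums_through(q).items() if v != measured[name])
    print(q, wrong)   # each candidate leaves exactly 2 wrong line sums
```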
4 Vandermonde equations with variable coefficients
Let r be a positive integer. Let c_0, c_1, . . . , c_{2r−1} and t_1, t_2, . . . , t_r be given real numbers with
t_1, t_2, . . . , t_r distinct. It is well known and very useful that systems of linear equations

    Σ_{i=1}^{r} t_i^j x_i = c_j

for j = 0, 1, . . . , r − 1 in unknowns x_1, x_2, . . . , x_r have a unique solution which can be found by
using the Vandermonde matrix. In this section we show how to solve the system of equations

    Σ_{i=1}^{r} t_i^j x_i = c_j    (2)

for j = 0, 1, . . . , 2r − 1, if both x_1, x_2, . . . , x_r and t_1, t_2, . . . , t_r are unknowns.
The method is based on the following lemmas.
Lemma 2. Let M be the r by r matrix with entries M_{i,j} = Σ_{h=1}^{r} t_h^{i+j} for i, j = 0, 1, . . . , r − 1.
Then

    det(M) = Π_{1≤i<j≤r} (t_j − t_i)².

Proof. Observe that M = V^T · V where V is the Vandermonde matrix with V_{i,j} = t_i^j for i = 1,
2, . . . , r and j = 0, 1, . . . , r − 1. Therefore

    det(M) = (det(V))² = Π_{i<j} (t_j − t_i)².
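Lemma 2 is easy to verify numerically; a small Python sketch with arbitrarily chosen distinct values t_i (here t = (2, 3, 5), so both sides equal 36):

```python
from itertools import permutations

def det(m):
    """Determinant via the Leibniz formula (fine for small matrices)."""
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        prod = 1
        for i in range(n):
            prod *= m[i][perm[i]]
        total += (-1) ** inv * prod
    return total

t = [2, 3, 5]                         # distinct t_1, t_2, t_3 (r = 3)
r = len(t)
# M_{i,j} = sum_h t_h^(i+j), as in Lemma 2
M = [[sum(th ** (i + j) for th in t) for j in range(r)] for i in range(r)]
lhs = det(M)
rhs = 1
for i in range(r):
    for j in range(i + 1, r):
        rhs *= (t[j] - t[i]) ** 2     # product of squared differences
print(lhs, rhs)                       # 36 36
```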
Lemma 3. Let c_0, c_1, . . . , c_{2r−1} be given real numbers. If t_1, t_2, . . . , t_r and x_1, x_2, . . . , x_r
satisfy (2), then

    c_{j+r} − c_{j+r−1} B_1 + · · · + (−1)^r c_j B_r = 0    (3)

for j = 0, 1, . . . , r − 1, where B_1, B_2, . . . , B_r are defined by

    (z − t_1)(z − t_2) · · · (z − t_r) = z^r − B_1 z^{r−1} + · · · + (−1)^r B_r.    (4)

Proof. We have

    c_{j+r} = Σ_{i=1}^{r} t_i^{j+r} x_i = Σ_{i=1}^{r} x_i Σ_{h=1}^{r} (−1)^{h−1} B_h t_i^{j+r−h}
            = Σ_{h=1}^{r} (−1)^{h−1} B_h Σ_{i=1}^{r} t_i^{j+r−h} x_i
            = Σ_{h=1}^{r} (−1)^{h−1} B_h c_{j+r−h}.
We are now ready to show how system (2) can be solved if both x1 , x2 , . . . , xr and t1 , t2 ,
. . . , tr are unknowns.
Theorem 4. Let r be a positive integer. Let c_0, c_1, . . . , c_{2r−1} be given real numbers. If nonzero
real numbers x_1, x_2, . . . , x_r and distinct real numbers t_1, t_2, . . . , t_r satisfy (2), then B_1, B_2,
. . . , B_r defined by (4) can be determined by solving the linear system (3) for j = 0, 1, . . . , r − 1.
Subsequently t_1, t_2, . . . , t_r can be found by computing the zeros of the polynomial

    z^r − B_1 z^{r−1} + B_2 z^{r−2} + · · · + (−1)^r B_r.    (5)

If t_1, t_2, . . . , t_r are chosen, the values of x_1, x_2, . . . , x_r are given by

    x_i = (1 / Π_{1≤i<j≤r} (t_j − t_i)) ·
          | 1          1          · · ·  c_j        · · ·  1          |
          | t_1        t_2        · · ·  c_{j+1}    · · ·  t_r        |
          | ...        ...               ...               ...        |
          | t_1^{r−1}  t_2^{r−1}  · · ·  c_{j+r−1}  · · ·  t_r^{r−1}  |

where the numbers c_j, c_{j+1}, . . . , c_{j+r−1} are in column i, for i = 1, 2, . . . , r.
Proof. First we apply Lemma 3: by (2), we have to solve a system of r linear equations
in r unknowns B_1, B_2, . . . , B_r with coefficient matrix M* with entries M*_{i,j} = (−1)^{r−i} Σ_{h=1}^{r} t_h^{i+j} x_h for
i, j = 0, 1, . . . , r − 1. Note that

    det(M*) = ± x_1 x_2 · · · x_r · det(M),

where M is the matrix from Lemma 2. Since the x_i are nonzero and det(M) ≠ 0 by Lemma 2,
det(M*) is nonzero, so we can solve the system of r linear equations and so determine the
numbers B_1, B_2, . . . , B_r. The zeros of (5) represent the numbers t_i by the definition of the B's.
Note that from this equation the numbers t_1, t_2, . . . , t_r cannot be distinguished and that every
choice is allowed. The expression for x_i follows from solving system (2) for j = 0, 1, . . . , r − 1
using Cramer's rule.
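The procedure of Theorem 4 can be illustrated for r = 2 with exact rational arithmetic; a sketch in Python, where the hidden values t = (2, 5) and x = (3, −1) are invented for the demonstration and only the moments c_j are handed to the solver:

```python
from fractions import Fraction as F
from math import isqrt

# hidden solution: t = (2, 5), x = (3, -1); only c_0, ..., c_3 are "given"
c = [F(3 * 2**j - 1 * 5**j) for j in range(4)]      # c_j = sum_i t_i^j x_i, r = 2

# Step 1, cf. (3):  c_{j+1} B_1 - c_j B_2 = c_{j+2}  for j = 0, 1  (Cramer's rule)
det = -c[1] * c[1] + c[0] * c[2]
B1 = (-c[1] * c[2] + c[0] * c[3]) / det
B2 = (c[1] * c[3] - c[2] * c[2]) / det

# Step 2, cf. (5): roots of z^2 - B_1 z + B_2 (rational here, so integer sqrt works)
disc = B1 * B1 - 4 * B2
s = isqrt(int(disc))
t1, t2 = (B1 - s) / 2, (B1 + s) / 2

# Step 3, cf. (2): recover x_1, x_2 from the first two equations
x1 = (c[0] * t2 - c[1]) / (t2 - t1)
x2 = (c[1] - c[0] * t1) / (t2 - t1)
# recovers B_1 = 7, B_2 = 10, t = (2, 5), x = (3, -1)
```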
We conclude this section with a simple application of the Vandermonde determinant.
Lemma 5. Let t_1, t_2, . . . , t_r be distinct integers. If Σ_{i=1}^{r} t_i^j x_i = 0 for j = 0, 1, . . . , k − 1,
then k < r or x_1 = x_2 = · · · = x_r = 0.

Proof. If k ≥ r, then Σ_{i=1}^{r} t_i^j x_i = 0 for j = 0, 1, . . . , r − 1; since the t_i are distinct, the
Vandermonde determinant is nonzero and all x_i are 0.
5 A Vandermonde related determinant
We prove the following result.
Lemma 6. Let a_1, a_2, . . . , a_{2k}, b_1, b_2, . . . , b_{2k} be variables. Set W_{h,H} = a_h b_H − a_H b_h for
h, H = 1, 2, . . . , 2k. Let M be a k × k matrix with entries M_{h,H} = Π_{i=1}^{k} W_{i,k+H} / W_{h,k+H} for
h, H = 1, . . . , k. Then

    det(M) = ± Π_{1≤h1<h2≤k} W_{h1,h2} · Π_{1≤H1<H2≤k} W_{k+H1,k+H2}.
Proof. The degree of det(M) equals 2k² − 2k. Observe that det(M) is divisible by W_{h1,h2} and by
W_{k+H1,k+H2} for all h1, h2, H1, H2 = 1, 2, . . . , k with h1 ≠ h2, H1 ≠ H2. The degree of the
product

    Π_{1≤h1<h2≤k} W_{h1,h2} · Π_{1≤H1<H2≤k} W_{k+H1,k+H2}

equals 2k² − 2k too. Therefore there is a constant c such that

    det(M) = c · Π_{1≤h1<h2≤k} W_{h1,h2} · Π_{1≤H1<H2≤k} W_{k+H1,k+H2}.

In order to determine the constant c we consider the lexicographically smallest term in
det(M). This is

    ± a_1^{k−1} a_2^{k−2} · · · a_{k−1} · a_{k+1}^{k−1} a_{k+2}^{k−2} · · · a_{2k−1} · b_2 b_3² · · · b_k^{k−1} · b_{k+2} b_{k+3}² · · · b_{2k}^{k−1}.
On comparing the exponents it turns out that this term can only be obtained by developing
the main diagonal of M. The first row is the only one containing factors a_{k+1}. Since no b_1 should
be chosen, the only possibility is to choose −a_{k+1}b_2, −a_{k+1}b_3, . . . , −a_{k+1}b_k from the leftmost
element of the first row. The second row is the only one containing factors a_{k+2}. Therefore −a_{k+2} has to
be chosen k − 2 times and the remaining factor has to be a_1 b_{k+2}. Since b_2 should not be chosen anymore, we
choose a_1 b_{k+2} and −a_{k+2}b_3, −a_{k+2}b_4, . . . , −a_{k+2}b_k from the element at the second row of the
main diagonal. Continuing in this way it turns out that the only possible choice of the factors
in the expansion of entry M_{h,h} is the term with factors

    a_1 b_{k+h}, a_2 b_{k+h}, . . . , a_{h−1} b_{k+h}, −a_{k+h} b_{h+1}, −a_{k+h} b_{h+2}, . . . , −a_{k+h} b_k,

for h = 1, 2, . . . , k. Since the coefficient of the resulting product is ±1, we conclude that
|c| = 1.
Example 2. For k = 3 the matrix M is as follows; the elements chosen in the proof are
those on the main diagonal.

    (a_2 b_4 − a_4 b_2)(a_3 b_4 − a_4 b_3)   (a_1 b_4 − a_4 b_1)(a_3 b_4 − a_4 b_3)   (a_1 b_4 − a_4 b_1)(a_2 b_4 − a_4 b_2)
    (a_2 b_5 − a_5 b_2)(a_3 b_5 − a_5 b_3)   (a_1 b_5 − a_5 b_1)(a_3 b_5 − a_5 b_3)   (a_1 b_5 − a_5 b_1)(a_2 b_5 − a_5 b_2)
    (a_2 b_6 − a_6 b_2)(a_3 b_6 − a_6 b_3)   (a_1 b_6 − a_6 b_1)(a_3 b_6 − a_6 b_3)   (a_1 b_6 − a_6 b_1)(a_2 b_6 − a_6 b_2)
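For k = 2 the statement of Lemma 6 reduces to a Plücker-type identity that can be checked with concrete numbers; a small Python sketch, where the values of a and b are arbitrary:

```python
# Numeric check of Lemma 6 for k = 2 with concrete (a_i, b_i), i = 1, ..., 4
a = [3, 1, 4, 1]
b = [5, 9, 2, 6]

def W(i, j):
    """W_{i,j} = a_i b_j - a_j b_i (1-based indices)."""
    return a[i - 1] * b[j - 1] - a[j - 1] * b[i - 1]

# For k = 2: M_{h,H} = prod_{i != h} W_{i,2+H}, so M_{1,H} = W_{2,2+H}, M_{2,H} = W_{1,2+H}
M = [[W(2, 3), W(2, 4)],
     [W(1, 3), W(1, 4)]]
detM = M[0][0] * M[1][1] - M[0][1] * M[1][0]
rhs = W(1, 2) * W(3, 4)          # claimed value of det(M), up to sign
print(detM, rhs)                 # equal in absolute value
```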
An alternative version of Lemma 6 reads as follows.
Corollary 7. Let a_1, a_2, . . . , a_{2k}, b_1, b_2, . . . , b_{2k} be reals. Set W_{h,H} = a_h b_H − a_H b_h.
Let M* = {M*_{h,H} : h = 1, 2, . . . , k; H = 1, 2, . . . , k} be the matrix with entries M*_{h,H} = 1/W_{h,k+H}.
Then

    det(M*) = ± Π_{1≤h1<h2≤k} W_{h1,h2} · Π_{1≤H1<H2≤k} W_{k+H1,k+H2} · (Π_{h=1}^{k} Π_{H=1}^{k} W_{h,k+H})^{−1}.

6 Relations between line sums
The following result is of fundamental importance in our present study.
Lemma 8. Let A, D, f, L_{h,t} be as in the Introduction. Let D′ be a subset of {1, 2, . . . , d} with
k := |D′| ≥ 2. For h ∈ D′ define E_{h,D′} by

    E_{h,D′} = (−1)^{h−1} Π_{i,j∈D′, i<j, i,j≠h} (a_i b_j − a_j b_i).    (6)

Then

    Σ_{h∈D′} E_{h,D′} Σ_{t∈Z} t^{k−2} L_{h,t} = 0.
Proof. This follows from Lemma 4.1 of [26]. To make the paper self-contained we give the proof
here. Without loss of generality we may assume that D′ = {d_1, d_2, . . . , d_k} with d_i = (a_i, b_i)
for i = 1, 2, . . . , k. Put a^s = (a_1^s, a_2^s, . . . , a_k^s) and b^s = (b_1^s, b_2^s, . . . , b_k^s) for s = 0, 1, 2, . . . . We
denote the determinant of the k × k matrix with h-th column vector x_h = (x_{1,h}, . . . , x_{k,h}) by
det(x_1, . . . , x_k). Furthermore, we denote the determinant of the matrix which we obtain by
omitting its first column vector and its i-th row vector by det(x_2, . . . , x_k)_i.

Obviously, for s = 0, 1, . . . , k − 2 we have

    det(a^s b^{k−2−s}, a^{k−2}, a^{k−3} b, a^{k−4} b², . . . , b^{k−2}) = 0.

By developing the first column we obtain, for s = 0, 1, . . . , k − 2,

    Σ_{i=1}^{k} (−1)^{i−1} a_i^s b_i^{k−2−s} det(a^{k−2}, a^{k−3} b, a^{k−4} b², . . . , b^{k−2})_i = 0.

It follows that, for arbitrary integers x and y,

    Σ_{i=1}^{k} (−1)^{i−1} (b_i x − a_i y)^{k−2} det(a^{k−2}, a^{k−3} b, a^{k−4} b², . . . , b^{k−2})_i = 0.

Observe that det(a^{k−2}, a^{k−3} b, . . . , b^{k−2})_i is the Vandermonde determinant E_i. It follows that

    Σ_{i=1}^{k} Σ_{j∈Z} Σ_{x,y∈Z, b_i x − a_i y = j} f(x, y) (−1)^{i−1} (b_i x − a_i y)^{k−2} E_i = 0.

Thus

    0 = Σ_{i=1}^{k} Σ_{j∈Z} (−1)^{i−1} E_i j^{k−2} Σ_{x,y∈Z, b_i x − a_i y = j} f(x, y) = Σ_{i=1}^{k} (−1)^{i−1} E_i Σ_{j∈Z} j^{k−2} L_{i,j}.
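Lemma 8 can be checked numerically by computing the weighted sums Σ_t t^{k−2} L_{h,t} directly from a random array f; a Python sketch for the three directions (1, 0), (0, 1), (1, 1), chosen here only for illustration:

```python
import random

# Check Lemma 8 for D' = {(1,0), (0,1), (1,1)} (k = 3) on a random 5x4 array f
random.seed(0)
m, n = 5, 4
f = [[random.randint(-9, 9) for _ in range(n)] for _ in range(m)]
dirs = [(1, 0), (0, 1), (1, 1)]
k = len(dirs)

def E(h):
    """E_{h,D'} = (-1)^(h-1) prod over i < j, i,j != h of (a_i b_j - a_j b_i); h 0-based."""
    prod = 1
    for i in range(k):
        for j in range(i + 1, k):
            if i != h and j != h:
                ai, bi = dirs[i]
                aj, bj = dirs[j]
                prod *= ai * bj - aj * bi
    return (-1) ** h * prod

def weighted_sum(h):
    """sum_t t^(k-2) L_{h,t}, computed cell by cell since t = b*i - a*j."""
    a, b = dirs[h]
    return sum(f[i][j] * (b * i - a * j) ** (k - 2)
               for i in range(m) for j in range(n))

total = sum(E(h) * weighted_sum(h) for h in range(k))
print(total)   # 0, as Lemma 8 predicts
```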
7 Error correction of line sums
In this section we prove Theorem 1.
Proof of Theorem 1. We introduce two parameters which may be used to reduce the amount of
computation time: we assume that in total there are at most F wrong line sums and that these
wrong line sums lie in at most G directions. Thus G ≤ F < d/2. We use induction on k, the
minimal number of wrong line sums in a direction among the directions that contain wrong
line sums.
First we treat the simple case k = 2. We consider the sums L*_h := Σ_{t∈Z} L*_{h,t} of the line sums in each
direction d_h. Since there are at most G < d/2 directions with wrong line sums, the majority
of the directions have the same correct value L := Σ_{t∈Z} L_{h,t}, which is the sum of all f-values and
therefore independent of h. We set the, say r_2, directions which have a different sum of line
sums apart. We continue with the set R_2 of the other d − r_2 directions. Observe that the
directions in R_2 may have wrong line sums too, but that in such a direction there are at least
two errors, because the sum of the line sums is correct.
Suppose the following induction hypothesis, which we have proved for k = 2, is true for all
values up to some k ≥ 2 with k < d/2.
Hypothesis for k. The number of directions with wrong line sums that we have already
detected equals r_k. The remaining directions form a set R_k such that if there is a wrong line
sum in direction d_h ∈ R_k, then there are at least k wrong line sums in direction d_h, and for each
direction d_h ∈ R_k,

    Σ_{t∈Z} t^j L_{h,t} = Σ_{t∈Z} t^j L*_{h,t}

for j = 0, 1, . . . , k − 2.
From these assumptions it follows that the number of directions in R_k with wrong line sums
is at most

    (G − r_k)/k < (d − 2r_k)/2k.    (7)

Hence all directions in R_k have correct line sums if d − 2r_k ≤ 2k, and if this inequality holds,
the induction hypothesis is true for k + 1. In the sequel we assume

    d − 2r_k ≥ 2k + 1.    (8)
It follows that |R_k| = d − r_k ≥ 2k + 1. By renumbering the directions we may assume
d_1, d_2, . . . , d_k ∈ R_k. We apply Lemma 8 to the set D_H := {1, 2, . . . , k, H} with H > k and
obtain

    Σ_{h=1}^{k} E_{h,D_H} Σ_{t∈Z} t^{k−1} L_{h,t} + E_{H,D_H} Σ_{t∈Z} t^{k−1} L_{H,t} = 0.    (9)
For H > k with d_H ∈ R_k we define and compute

    c*_H = Σ_{h=1}^{k} E_{h,D_H} Σ_{t∈Z} t^{k−1} L*_{h,t} + E_{H,D_H} Σ_{t∈Z} t^{k−1} L*_{H,t}.    (10)
From (9) and (10) we obtain, for all d_H ∈ R_k, H > k,

    Σ_{h=1}^{k} E_{h,D_H} Σ_{t∈Z} t^{k−1} (L*_{h,t} − L_{h,t}) + E_{H,D_H} Σ_{t∈Z} t^{k−1} (L*_{H,t} − L_{H,t}) = c*_H.    (11)
We distinguish between the following two cases:
A) More than (G − r_k)/k directions d_H ∈ R_k, H > k satisfy c*_H ≠ 0.
B) At most (G − r_k)/k directions d_H ∈ R_k, H > k satisfy c*_H ≠ 0.
Case A) Because of the induction hypothesis, the number of directions d_H with H > k that
contain a wrong line sum does not exceed (G − r_k)/k. Therefore there are at most
(G − r_k)/k directions d_H ∈ R_k, H > k with E_{H,D_H} Σ_{t∈Z} t^{k−1} (L*_{H,t} − L_{H,t}) ≠ 0. It follows that
there is at least one direction d_H with Σ_{h=1}^{k} E_{h,D_H} Σ_{t∈Z} t^{k−1} (L*_{h,t} − L_{h,t}) ≠ 0. This implies
that there is an h ∈ {1, 2, . . . , k} such that Σ_{t∈Z} t^{k−1} (L*_{h,t} − L_{h,t}) ≠ 0.
Case B) At least d − k − r_k − (G − r_k)/k directions d_H ∈ R_k with H > k have no wrong line
sums, hence satisfy L*_{H,t} = L_{H,t} for all t. We have, by (8),

    d − k − r_k − (G − r_k)/k > d − k − r_k − (d/2 − r_k)/k = (d/2 − k) + (d/2 − r_k)(1 − 1/k)
        ≥ (d/2 − k) + (k + 1/2)(1 − 1/k) = d/2 − 1/2 − 1/(2k) ≥ k − 1/(2k) > k − 1.

Since G < d/2, at least k directions d_H ∈ R_k with H > k have no wrong line sums, hence satisfy
L*_{H,t} = L_{H,t} for all t. Let d_{k+1}, d_{k+2}, . . . , d_{2k} be directions in R_k with L*_{H,t} = L_{H,t}
for all t. Then we have, by (11), for H ∈ {k + 1, k + 2, . . . , 2k},

    Σ_{h=1}^{k} E_{h,D_H} Σ_{t∈Z} t^{k−1} (L*_{h,t} − L_{h,t}) = c*_H = 0.    (12)
Here we consider the E_{h,D_H} as coefficients and Σ_{t∈Z} t^{k−1} (L*_{h,t} − L_{h,t}) as unknowns. The
coefficient matrix has as typical element

    E_{h,D_H} = (−1)^{h−1} Π_{i,j∈{1,2,...,k,H}, i<j, i,j≠h} (a_i b_j − a_j b_i).

We claim that the corresponding determinant is nonzero. Observe that the h-th column has the
nonzero factor

    (−1)^{h−1} Π_{i,j∈{1,2,...,k}, i<j, i,j≠h} (a_i b_j − a_j b_i)

in common. By dividing it out for h = 1, 2, . . . , k, the coefficient E_{h,D_H} reduces to

    E*_{h,D_H} := Π_{i∈{1,2,...,k}, i≠h} (a_i b_H − a_H b_i).
It follows from Lemma 6 that the determinant of the matrix with typical entry E*_{h,D_H} equals

    ± Π_{1≤h1<h2≤k} (a_{h1} b_{h2} − a_{h2} b_{h1}) · Π_{k+1≤H1<H2≤2k} (a_{H1} b_{H2} − a_{H2} b_{H1}).

Since this expression is nonzero, we see that the system (12) has a unique solution. Thus
Σ_{t∈Z} t^{k−1} (L*_{h,t} − L_{h,t}) = 0 for h = 1, 2, . . . , k.
By comparing the cases A and B we see that c*_H ≠ 0 holds for at most (G − r_k)/k directions
d_H ∈ R_k with H > k if and only if Σ_{t∈Z} t^{k−1} (L*_{h,t} − L_{h,t}) = 0 for h = 1, 2, . . . , k. Split the
d − r_k directions in R_k into subsets of k elements and a remainder subset of fewer than k elements. Then
we have more than (d − r_k)/k − 1 k-subsets. Among them at most (G − r_k)/k < (d/2 − r_k)/k
have a direction with a wrong line sum. Since, by (8),

    (d − r_k)/k − 1 − (d − 2r_k)/(2k) = d/(2k) − 1 > 0,

we see that there is at least one k-subset without wrong line sums. Renumber the directions
such that this k-subset is {d_1, d_2, . . . , d_k}. Then it follows from (11) that

    E_{H,D_H} Σ_{t∈Z} t^{k−1} (L*_{H,t} − L_{H,t}) = c*_H

for H > k with d_H ∈ R_k. We define the set R_{k+1} as the set of directions d_1, d_2, . . . , d_k together
with the directions d_H, H > k, for which c*_H = 0, and define r_{k+1} as d − |R_{k+1}|.
For all d_h ∈ R_{k+1} we have Σ_{t∈Z} t^{k−1} (L*_{h,t} − L_{h,t}) = 0. By the induction hypothesis this is also
true for the lower powers of t. Therefore we have the system of equations

    Σ_{t∈Z} t^j (L*_{h,t} − L_{h,t}) = 0

for all d_h ∈ R_{k+1} and j = 0, 1, . . . , k − 1. Since for fixed h the numbers t are distinct, it follows
from Lemma 5 that L*_{h,t} − L_{h,t} = 0 for all t if the number of nonzero terms is at
most k. Thus we may assume that if there is a direction in R_{k+1} with at least one wrong line
sum, then it has at least k + 1 wrong line sums. This completes the induction step.
We continue increasing k until k = G or k ≥ F + Σ_{h=1}^{k−1} r_h − k r_k. If k = G, then a wrong
line sum in a direction d_h ∈ R_{k+1} would lead to a total of G + 1 directions with wrong line sums,
which contradicts the definition of G. If k ≥ F + Σ_{h=1}^{k−1} r_h − k r_k and there is a wrong line sum
in a direction d_h ∈ R_{k+1}, then the total number of found wrong line sums would exceed

    r_1 + 2(r_2 − r_1) + 3(r_3 − r_2) + · · · + k(r_k − r_{k−1}) + k + 1 = k r_k − Σ_{h=1}^{k−1} r_h + k + 1 > F,    (13)

contradicting the definition of F. Thus in both cases we have found a k for which R_k does not
contain directions with wrong line sums, or the conditions on F and G are invalid. So either we
have found the complete set of directions with only correct line sums, or one of the assumptions
on F and G is not fulfilled. In the latter case one might try a higher value of F or G.
It remains to show how the errors can be found and corrected for every direction which
contains wrong line sums. Suppose there are s_H wrong line sums in the direction of d_H. Then
there are at most G − s_H other directions with wrong line sums. Therefore we have K directions
with only correct line sums, where

    K = d − G + s_H > d/2 + s_H > 2s_H.    (14)
Renumber the directions such that d_1, d_2, . . . , d_K are directions with only correct line sums, so
that H > K for every direction d_H with wrong line sums. We set D_H = {1, 2, . . . , K, H} and compute

    C*_{H,j} := Σ_{h=1}^{K} E_{h,D_H} Σ_{t∈Z} t^j L*_{h,t} + E_{H,D_H} Σ_{t∈Z} t^j L*_{H,t}

for j = 0, 1, . . . , K − 1. By (9) and the choice of d_1, d_2, . . . , d_K, which implies L*_{h,t} = L_{h,t} for
h = 1, 2, . . . , K, we obtain

    E_{H,D_H} Σ_{t∈Z} t^j (L*_{H,t} − L_{H,t}) = C*_{H,j}    (15)

for j = 0, 1, . . . , K − 1. In order to be able to apply Theorem 4 to (15), we first compute the
number s_H, which equals the rank of the matrix with typical element (−1)^t C*_{H,t+j} for 0 ≤ t <
K/2, 0 ≤ j < K/2. (Recall that K/2 exceeds s_H by (14).) After having computed the rank s_H,
we apply Theorem 4 with r = s_H, c_j = C*_{H,j}/E_{H,D_H} for j = 0, 1, . . . , 2s_H − 1, and
x_i = L*_{H,t_i} − L_{H,t_i} for i = 1, 2, . . . , s_H. The set of equations is of the form (2). The theorem
enables us to compute successively B_1, B_2, . . . , B_r; the numbers t_1, t_2, . . . , t_r, which indicate
the s_H lines b_H x − a_H y = t on which the wrong line sums lie; and x_1, x_2, . . . , x_r, which are the
errors L*_{H,t} − L_{H,t} made. This has to be done for every direction d_H with wrong line sums.
This completes the proof of Theorem 1.
8 An Error Correction Algorithm
In this section, we explicitly describe an algorithm for finding directions which contain wrong
line sums, and correcting the erroneous line sums. Since there may be relatively few errors in
practice, we allow the user to specify the maximum number of errors F which have been made
in at most G directions, where G ≤ F < d/2. If F, G are not chosen, set F = G = ⌊(d − 1)/2⌋.
We use ↔ to denote swapped elements. When the line sums of two directions are swapped,
L*_{i,t} ↔ L*_{j,t}, it is implicit that this occurs for all t.
The algorithm first finds the directions that contain erroneous line sums, next the wrong line
sums themselves, and then uses the correct line sums to repair these errors. We use the
variable g to count the number of detected erroneous directions, and order the directions
D = {D_1, . . . , D_g, D_{g+1}, . . . , D_d}, where D_i contains errors for i ≤ g. Steps 1-7 of the
algorithm find all directions for which the sum of line sums does not match the majority. As there
are less than d/2 errors in total, this detects all the directions with exactly one wrong line
sum. Steps 8-27 use groups of k directions to find directions which contain at least
k ≥ 2 errors. Once all directions with wrong line sums have been found, Steps 28-40 determine
the wrong line sums and correct them.
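The detection in Steps 1-7 amounts to comparing each direction's total to the median of all totals; a minimal Python sketch with invented totals (here d = 9 and L = 10):

```python
from statistics import median

# Sums of line sums per direction; a correct direction always totals L (here 10).
# Fewer than d/2 directions deviate, so the median recovers L and flags the rest.
totals = [10, 10, 13, 10, 10, 7, 10, 10, 10]   # d = 9 directions, 2 deviants
L = median(totals)
suspect = [h for h, s in enumerate(totals) if s != L]
print(L, suspect)   # 10 [2, 5]
```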
In Step 29 we introduce the parameter S, which is an upper bound for the number of wrong
line sums in direction d_H. Obviously S ≤ F − g + 1 and S ≤ F − r_g + r_H. In Step 31 we use
direction d_{g+2S−1}. Since

    g + 2S − 1 ≤ g + 2F − 2g + 2 − 1 ≤ d − 1 − g + 1 ≤ d,    (16)

this value of S is permitted. In Step 33 the exact number s of wrong line sums in direction d_H
is determined.
To apply Theorem 4 in Steps 36-39, we can use the matrix determinant lemma. Let V with
V_{i,j} = t_j^{i−1} be the Vandermonde matrix, and let u, v be column vectors where u_j = c_{j−1} − t_i^{j−1}
and v is equal to 1 at element i and zero elsewhere. Then we can write x_i as

    x_i = det(V + u v^T)/det(V) = 1 + v^T V^{−1} u = 1 + Σ_{j=1}^{r} (V^{−1})_{i,j} u_j.    (17)

Therefore, we do not need to compute a determinant for each i. Instead, the Vandermonde
inverse matrix is computed once.
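The equivalence of Cramer's rule and formula (17) can be checked for r = 2; a Python sketch in exact arithmetic, where the nodes t = (2, 5) and moments c = (2, 1) are invented for the demonstration:

```python
from fractions import Fraction as F

# r = 2 illustration: nodes t = (2, 5), moments c = (2, 1); Cramer gives x = (3, -1)
t = [F(2), F(5)]
c = [F(2), F(1)]
V = [[t[0] ** i, t[1] ** i] for i in range(2)]          # V[i][j] = t_j^i
detV = V[0][0] * V[1][1] - V[0][1] * V[1][0]
Vinv = [[V[1][1] / detV, -V[0][1] / detV],
        [-V[1][0] / detV, V[0][0] / detV]]

cramer, mdl = [], []
for i in range(2):
    # Cramer's rule: replace column i of V by c, take the determinant ratio
    Vc = [row[:] for row in V]
    for j in range(2):
        Vc[j][i] = c[j]
    cramer.append((Vc[0][0] * Vc[1][1] - Vc[0][1] * Vc[1][0]) / detV)
    # matrix determinant lemma, cf. (17): u = c - (column i of V), v = e_i
    u = [c[j] - V[j][i] for j in range(2)]
    mdl.append(1 + sum(Vinv[i][j] * u[j] for j in range(2)))

print(cramer == mdl)   # True: both routes give the same x
```

Only the single inverse V^{−1} is needed, which is the point of using (17) inside the loop over i.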
Algorithm 1 Line sum error correction

Input: A finite set of (primitive) directions D = {(a_h, b_h) : h = 1, 2, . . . , d} and line sums L*_{h,t}
in the directions of D of a function f : A → R such that L*_{h,t} ≠ L_{h,t} for at most F pairs
(h, t) in at most G directions, where G ≤ F < d/2.
Output: Corrected line sums L_{h,t}.

 1: for h ← 1 to d do                        // Find directions with one wrong line sum
 2:     L*_h ← Σ_{t∈Z} L*_{h,t}
 3: g ← 0
 4: for h ← 1 to d do
 5:     if L*_h ≠ median(L*_1, . . . , L*_d) then
 6:         g ← g + 1; r_g ← 1
 7:         D_g ↔ D_h; L*_{g,t} ↔ L*_{h,t}
 8: k ← 2                                    // Find directions with k ≥ 2 wrong line sums
 9: while k ≤ (F + Σ_{h=1}^{k−1} r_h)/k − r_g and g ≤ G do
10:     maxDirections ← min(G − g, ⌊(F + Σ_{h=1}^{k−1} r_h)/k⌋ − r_g)    // cf. (13)
11:     i ← g − k + 1
12:     repeat
13:         i ← i + k
14:         count ← 0
15:         for H ← g + 1, . . . , i − 1, i + k, . . . , d do
16:             D′ ← {i, . . . , i + k − 1, H}
17:             c_H ← Σ_{h=1}^{k} E_{h+i−1,D′} Σ_{t∈Z} t^{k−1} L*_{h+i−1,t} + E_{H,D′} Σ_{t∈Z} t^{k−1} L*_{H,t}    // cf. (10)
18:             if c_H ≠ 0 then
19:                 count ← count + 1
20:             if count > maxDirections then
21:                 break
22:     until count ≤ maxDirections
23:     for H ← g + 1, . . . , i − 1, i + k, . . . , d do
24:         if c_H ≠ 0 then
25:             g ← g + 1; r_g ← r_{g−1} + k
26:             D_g ↔ D_H; L*_{g,t} ↔ L*_{H,t}
27:     k ← k + 1
28: for H ← 1 to g do                        // Correct errors in direction d_H
29:     S ← min(F − g + 1, F − r_g + r_H)
30:     for j ← 1 to 2S − 1 do
31:         D′ ← {g + 1, . . . , g + j, H}
32:         c_j ← Σ_{t∈Z} t^{j−1} L*_{H,t} + Σ_{h=1}^{j} (E_{g+h,D′}/E_{H,D′}) Σ_{t∈Z} t^{j−1} L*_{g+h,t}    // cf. (10)
33:     s ← rank of the S × S matrix with rows (c_j, −c_{j+1}, c_{j+2}, . . . , (−1)^{S−1} c_{j+S−1}) for j = 1, . . . , S
34:     solve for (B_1, . . . , B_s) the system with rows (c_{j+s−1}, −c_{j+s−2}, . . . , (−1)^{s−1} c_j)·(B_1, . . . , B_s)^T = c_{j+s} for j = 1, . . . , s    // cf. (3)
35:     t_1, . . . , t_s ← roots(z^s − B_1 z^{s−1} + B_2 z^{s−2} − · · · + (−1)^s B_s)    // cf. (4)
36:     V ← (t_j^{i−1})_{i,j=1}^{s}
37:     W ← V^{−1}
38:     for i ← 1 to s do
39:         L_{H,t_i} ← L*_{H,t_i} − Σ_{j=1}^{s} W_{i,j} c_j    // cf. (17)
40:     F ← F − s
It may be that the domain of f is finite, but not a rectangular grid. Then the above
algorithm can be applied by choosing A as the smallest rectangular grid with sides parallel to
the coordinate axes containing the domain and defining function value 0 for each point of A
which does not belong to the domain of f . In this way the domain of f is extended to A.
Thereafter the given algorithm can be used. Similarly, if some line sum is missing, set that line
sum to 0 and count it as a wrong line sum.
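The embedding just described can be sketched as follows (an illustrative helper; the paper only requires that A be the smallest bounding rectangle, translated here to the origin):

```python
def embed_in_grid(f):
    """Extend a function on a finite subset of Z^2 to its bounding grid.

    f maps points (i, j) to values. Returns (grid, m, n), where grid is
    an m x n array, indexed from the minimal corner of the domain, that
    agrees with f on its domain and is 0 elsewhere.
    """
    i_min = min(i for i, _ in f)
    j_min = min(j for _, j in f)
    m = max(i for i, _ in f) - i_min + 1
    n = max(j for _, j in f) - j_min + 1
    grid = [[0] * n for _ in range(m)]  # default value 0 outside the domain
    for (i, j), value in f.items():
        grid[i - i_min][j - j_min] = value
    return grid, m, n
```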
9 An example
To illustrate the pseudocode we give an example. For the sake of transparency we assume that
all the line sums except for the wrong ones are 0. Let D = {D1 , D2 , . . . , D10 } with D1 = (1, 0),
D2 = (0, 1), D3 = (1, 1), D4 = (1, −1), D5 = (2, 1), D6 = (2, −1), D7 = (1, 2), D8 = (1, −2),
D9 = (3, 1), D10 = (3, −1), further F = 4, G = 3, and just three wrong line sums L∗3,0 = −3,
L∗3,4 = 3, L∗6,−6 = −2. Thus we have two wrong line sums in the direction (1, 1) and one in
direction (2, −1). We indicate the effects of the steps of the pseudocode.
Steps 1-7: Selection of directions with deviant sum of line sums. Median(L∗h ) = 0, all L∗h ’s are 0
except for L∗6 = −2. We get g := 1, r1 := 1, D1 := (2, −1), D6 := (1, 0), L∗1,−6 := −2, L∗6,−6 := 0.
Steps 8-21: k = 2, first try. This will fail since the test directions, assumed to have correct line sums, are directions D2 and D3 , but D3 contains wrong line sums. See (6) for E.
We get k := 2, maxDirections := 1, i := 0, i := 2, count := 0,
H := 4, D0 := {2, 3, 4}, c4 := (−1)(0 × −3 + 4 × 3) = −12, count := 1,
H := 5, D0 := {2, 3, 5}, c5 := (−2)(0 × −3 + 4 × 3) = −24, count := 2, break.
Steps 12-22: k = 2, second try. This will succeed since both test directions, D4 , D5 , have
correct line sums. The double error in D3 will be detected.
i := 4, count := 0, H := 2, D0 := {4, 5, 2}, c2 := 0,
H := 3, D0 := {4, 5, 3}, c3 = 3(0 × −3 + 4 × 3) = 36, count :=1,
H := 6, c6 := 0, H := 7, c7 := 0, H := 8, c8 := 0, H := 9, c9 := 0, H := 10, c10 := 0.
Steps 23-27: Exchange of D2 and D3 .
g := 2, r2 := 3, D2 := (1, 1), D3 := (0, 1), L∗2,0 := −3, L∗2,4 := 3, L∗3,0 := L∗3,4 := 0,
k := 3. Condition Step 9 is no longer satisfied.
Steps 28-40: Correction of line sum for direction (2, −1).
Note that as L∗g+h,t = 0 for g + h > 2, we have c_j = Σ_{t∈Z} t^{j−1} L∗H,t.
H := 1, S := 2, j := 1, D0 := {3, 1}, c1 := −2, j := 2, D0 := {3, 4, 1}, c2 := 12,
j := 3, D0 := {3, 4, 5, 1}, c3 := −72,
s := 1, B1 := (−2)−1 × 12 = −6, t1 := −6, V := 1, W := 1, L1,−6 := −2 + 2 = 0, F := 3.
Steps 28-40: Correction of line sums for direction (1, 1).
H := 2, S := 3, c1 := 0, c2 := 12, c3 := 48, c4 := 192, c5 := 768,
s := rank(0, −12, 48; 12, −48, 192; 48, −192, 768) = 2, [B1, B2]^T := (12, 0; 48, −12)^{−1} (48, 192)^T = [4, 0]^T,
t1 := 0, t2 := 4, V := (1, 1; 0, 4), W := (1, −1/4; 0, 1/4), L2,0 := −3 + 3 = 0, L2,4 := 3 − 3 = 0, F := 1.
All the line sums now have the correct value 0. The final value of F is the difference between the
original upper bound on the number of errors and the actual number of errors.
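The computations of Steps 33-39 can be replayed in exact rational arithmetic. The sketch below is our illustration, not the paper's implementation: it assumes the simplified situation of the example, where c_j = Σ_t t^{j−1} e_t for unknown errors e_t at integer positions t, and recovers the positions and values of the errors.

```python
from fractions import Fraction

def rank(A):
    """Rank of a rational matrix via Gauss-Jordan elimination."""
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                factor = M[i][col] / M[r][col]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def solve_lin(A, b):
    """Solve the nonsingular rational linear system A x = b."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(b[i])] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(i for i in range(col, n) if M[i][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for i in range(n):
            if i != col and M[i][col] != 0:
                factor = M[i][col] / M[col][col]
                M[i] = [a - factor * b_ for a, b_ in zip(M[i], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def recover_errors(c, S, t_candidates):
    """Recover error positions and values from c_1, ..., c_{2S-1}.

    c[j] (0-indexed) must equal c_{j+1} = sum_t t^j * e_t, where e_t is
    the unknown error in the line sum with index t; at most S line sums
    are wrong.  Only integer positions from t_candidates are tried, as
    in the paper's Horner check.  Returns the dictionary {t: e_t}.
    """
    c = [Fraction(x) for x in c]
    # Step 33: the number s of wrong line sums is the rank of the
    # S x S matrix with (u, v) entry (-1)^v c[u + v].
    s = rank([[(-1) ** v * c[u + v] for v in range(S)] for u in range(S)])
    # Step 34: solve for the coefficients B_1, ..., B_s.
    A = [[(-1) ** v * c[s + u - 1 - v] for v in range(s)] for u in range(s)]
    B = solve_lin(A, [c[s + u] for u in range(s)])
    # Step 35: the error positions are the integer roots of
    # z^s - B_1 z^{s-1} + B_2 z^{s-2} - ... + (-1)^s B_s.
    coeffs = [Fraction(1)] + [(-1) ** (v + 1) * B[v] for v in range(s)]
    def value(z):
        acc = Fraction(0)
        for a in coeffs:  # Horner's method
            acc = acc * z + a
        return acc
    positions = [t for t in t_candidates if value(t) == 0]
    # Steps 36-39: solve the Vandermonde system V e = (c_1, ..., c_s)^T.
    V = [[Fraction(t) ** i for t in positions] for i in range(s)]
    return dict(zip(positions, solve_lin(V, c[:s])))
```

For the direction (1, 1) above, `recover_errors([0, 12, 48, 192, 768], 3, range(-8, 9))` returns the errors −3 at t = 0 and 3 at t = 4; for direction (2, −1), `recover_errors([-2, 12, -72], 2, range(-8, 9))` returns −2 at t = −6.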
10 Complexity
In order to compute the complexity of the above algorithm we make some preliminary observations. If there is an h such that ah ≥ m or |bh | ≥ n, then each line sum in direction (ah , bh ) is
the f ∗ -value of exactly one point. Without loss of generality we may then assume that ah = m
or |bh | = n, respectively. Hence, for each h the value of |t| in (1) is at most 2mn and the number
of directions d does not exceed mn. We further use that g ≤ G ≤ F < d/2, h, H ≤ d and k ≤ G.
In our complexity computation we count an addition, subtraction, multiplication, division
and a comparison of two values as one operation. We neglect the size of the terms which can be
quite high because of the factors t^j and the unknown line sums. Often an operation will therefore
mean a multi-precision operation. We assume that the numbers t^j for 0 ≤ j < 2d − 1, |t| ≤ 2mn
are computed once. This involves O(dmn) operations.
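This precomputation can be sketched as follows (an illustrative helper, computing each power with one multiplication from the previous one):

```python
def precompute_powers(d, mn):
    """Tabulate t^j for 0 <= j < 2d - 1 and |t| <= 2mn.

    Each power is obtained from the previous one by a single
    multiplication, so the table costs O(dmn) operations.
    """
    powers = {}
    for t in range(-2 * mn, 2 * mn + 1):
        row = [1]
        for _ in range(2 * d - 2):
            row.append(row[-1] * t)
        powers[t] = row
    return powers
```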
The numbers t1, t2, . . . , ts which are computed in Step 35 of the pseudocode are the numbers
t indicating the lines of the wrong line sums in direction DH. By using Horner's method to check
for each integer t with |t| ≤ 2mn whether it is a zero of the polynomial, Step 35 takes O(dmn) operations.
In our analysis, we follow the steps of the pseudocode given in Section 8.
• Steps 1-2 (by the above remark on the number of line sums) require O(dmn) operations
(additions).
• Steps 3-7 altogether need O(dmn) operations. (By the Floyd-Rivest algorithm the median
can be calculated in linear time, see [17].)
• Steps 9-11, 27 require O(F) repetitions,
• Steps 12-14, 22 mean O(d/k) repetitions,
• Steps 15-17 involve O(dk²mn) operations. (According to (8) the computation of the E’s
takes O(k²) operations.)
• Steps 18-21 take O(G) operations,
• Steps 23-26 involve O(dmn) operations.
By the structure of this block, the complexity of Steps 9-27 is
O9−11,27 · O12−14,22 · (O15−21 + O23−26) = O(d²mnFG),
where Oi denotes the number of operations in Steps i.
• Step 28, in view of g ≤ G, implies O(G) repetitions,
• Step 29 needs O(G) additions,
• Step 30 implies O(F) repetitions,
• Steps 31-32, since j ≤ 2F, require O(mnF³) operations,
• Step 33 needs O(F²) operations,
• Step 34, by Algorithm 2.3.2 on p. 58 of [10], altogether takes O(F³) operations,
• Step 35, by Algorithm 2.2.2 on p. 50 of [10], needs O(F³) operations,
• Step 36, by an earlier remark, needs O(dmn) operations,
• Steps 37-39 take O(F²) operations,
• Step 40, by Algorithm 2.2.2 on p. 50 of [10], requires O(F³) operations,
• Steps 41-43 need O(F²) operations.
By the structure of this block, the complexity of Steps 28-43 is given by
O28 · (O29−32 + O33−36 + O37−40 + O41−43) = O(mnG(F³ + d)) = O(d²mnFG),
where Oi denotes the number of operations implied by the corresponding Steps. Thus the
algorithm can be completed in O(d²mnFG) operations.
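The final identification O(mnG(F³ + d)) = O(d²mnFG) can be checked in one line, using F < d/2 (so F² < d²/4) and dF ≥ 1:

```latex
mnG(F^3 + d) = mnGF \cdot F^2 + dmnG
             < mnGF \cdot \frac{d^2}{4} + dmnG \cdot dF
             = O(d^2 mnFG).
```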
11 Concluding remarks
After Ceko, Pagani and Tijdeman [7] had developed a fast method to reconstruct a noiseless
discrete tomography problem, the next logical step was to determine the most likely set of
consistent line sums in case the given line sums are inconsistent. If many line sums are almost
correct, we refer to Section 4 of [25]. In the present paper we study the case that only a small
number of line sums is wrong and show how to rectify the wrong line sums. We present an
algorithm which performs the task in O(d⁴mn) operations.
An obvious question is whether the method can be extended to dimension three and higher.
This question is hard to answer for (at least) two reasons. Firstly a higher dimensional version of
Lemma 8 is not known. Secondly a three-dimensional version of the algorithm of Ceko, Pagani
and Tijdeman is only known under special conditions, see [8]. In principle that algorithm can be
extended to any dimension, but the application may be quite complicated, because it is much
14
more difficult to construct the convex hull of the union of all ghosts in dimension > 2 than for
dimension 2.
Another question is whether it is possible to correct d/2 or more errors in the line sums.
In general more wrong line sums will be corrected by the algorithm. It follows from inequality
(16) that in direction DH at least d + g − 1 wrong line sums can be corrected. However, the
example in Section 3 shows that it is not always possible to correct the line sums in d/2 or
more directions, since the f -value of one point is uncertain. In this example it is true for each
direction that not all the line sums in that direction can be corrected, although at most one of
them is wrong.
Acknowledgments
The research of L.H. was supported in part by grants 115479, 128088 and 130909 of the Hungarian National Foundation for Scientific Research and by the projects EFOP-3.6.1-16-2016-00022
and EFOP-3.6.2-16-2017-00015, co-financed by the European Union and the European Social
Fund.
References
[1] F. Autrusseau, P. Le Callet, A robust image watermarking technique based on quantization
noise visibility thresholds, Signal Processing 87 (2007), 1363-1383.
[2] A. Alpers, P. Gritzmann, On stability, error correction, and noise compensation in discrete
tomography, SIAM J. Discr. Math. 20 (2006), 227-239.
[3] A. Alpers, P. Gritzmann, On the reconstruction of static and dynamic discrete structures,
in: R. Ramlau and O. Scherzer (Eds.), The Radon Transform: The First 100 Years and
Beyond, De Gruyter, 2019.
[4] K.J. Batenburg, J. Sijbers, DART: A practical reconstruction algorithm for discrete tomography, IEEE Transactions on Image Processing 20 (9) (2011), 2542-2553.
[5] M. Blawat, K. Gaedke, I. Hütter, X.-M. Chen, B. Turczyk, S. Inverso, B.W. Pruitt, G.M.
Church, Forward error correction for DNA data storage, Procedia Computer Science 80
(2016), 1011-1022.
[6] M. Ceko, T. Petersen, I. Svalbe, R. Tijdeman, Boundary ghosts for discrete tomography,
J. Math. Imaging Vision 63 (2021), 428-440.
[7] M. Ceko, S.M.C. Pagani, R. Tijdeman, Algorithms for linear time reconstruction by discrete
tomography II, arXiv:2010.07862; to appear in Discr. Appl. Math.
[8] M. Ceko, S.M.C. Pagani, R. Tijdeman, Algorithms for linear time reconstruction by discrete
tomography in three dimensions, arXiv:2010.05868.
[9] S. Chandra, I. Svalbe, J. Guédon, A. Kingston, N. Normand, Recovering missing slices of
the Discrete Fourier Transform using ghosts, IEEE Transactions on Image Processing 21
(2012), 4431-4441.
[10] H. Cohen, A Course in Computational Algebraic Number Theory, Springer, 1993.
[11] B.E. van Dalen, Dependencies between line sums, Master’s thesis, Univ. Leiden, 2007.
[12] P. Dulio, A. Frosini, S.M.C. Pagani, Uniqueness regions under sets of generic projections
in discrete tomography, Discrete Geometry for Computer Imagery, LNCS 8668 (2014),
Springer-Verlag, pp. 285-296.
[13] P. Dulio, A. Frosini, S.M.C. Pagani, A geometrical characterization of regions of uniqueness
and applications to discrete tomography, Inverse problems 31 (12) (2015), 125011.
[14] P. Dulio, A. Frosini, S.M.C. Pagani, Geometrical characterization of the uniqueness regions under special sets of three directions in discrete tomography, Discrete Geometry for
Computer Imagery, LNCS 9647 (2016), Springer-Verlag, pp. 105-116.
[15] P. Dulio, A. Frosini, S.M.C. Pagani, Regions of uniqueness quickly reconstructed by three
directions in discrete tomography, Fundamenta Informaticae 155(4) (2017), 407-423.
[16] P.C. Fishburn, J.C. Lagarias, J.A. Reeds, L.A. Shepp, Sets uniquely determined by projection on axes II. Discrete case, Discrete Math. 91 (2) (1991), 149-159.
[17] R. W. Floyd, R. L. Rivest, Algorithm 489: The Algorithm SELECT—for Finding the ith
Smallest of n elements, Comm. ACM. 18 (1975), 173.
[18] R.J. Gardner, P. Gritzmann, Discrete tomography: determination of finite sets by X-rays,
Trans. Amer. Math. Soc. 349 (6) (1997), 2271-2295.
[19] R.J. Gardner, P. Gritzmann, D. Prangenberg, On the computational complexity of reconstructing lattice sets from their X-rays, Discrete Math. 202 (1-3) (1999), 45-71.
[20] R.J. Gardner, P. Gritzmann, D. Prangenberg, On the computational complexity of determining polyatomic structures by X-rays, Theor. Computer Sc. 233 (2000), 91-106.
[21] G.H. Golub, C.F. Van Loan, Matrix Computations, 4th ed., JHU Press, 2013.
[22] J.P. Guédon, N. Normand, The Mojette transform: The first ten years, Discrete Geometry
for Computer Imagery, LNCS 3429 (2005), Springer-Verlag, pp. 79-91.
[23] L. Hajdu, R. Tijdeman, Algebraic aspects of discrete tomography, J. reine angew. Math.
534 (2001), 119-128.
[24] L. Hajdu, R. Tijdeman, Algebraic discrete tomography, Ch. 4 of [29], pp. 55-81.
[25] L. Hajdu, R. Tijdeman, Bounds for approximate discrete tomography solutions, SIAM J.
Discrete Math. 27 (2013), 1055-1066.
[26] L. Hajdu, R. Tijdeman, Consistency conditions for discrete tomography, Fundam. Inform.
153 (2017), 1-23.
[27] G.T. Herman, Fundamentals of computerized tomography: Image reconstruction from projections, 2nd edition, Springer, 2009.
[28] G.T. Herman, A. Kuba (editors), Discrete Tomography; Foundations, Algorithms and Applications, Birkhäuser, 1999.
[29] G.T. Herman, A. Kuba (editors), Advances in discrete tomography and its applications,
Birkhäuser, 2007.
[30] M.B. Katz, Questions of uniqueness and resolution in reconstruction from projections,
Lecture Notes in Biomathematics 26 (1978), Springer-Verlag.
[31] A. Kingston, F. Autrusseau, Lossless compression via predictive coding of discrete Radon
projections, Signal Processing: Image Communication 23 (2008), 313-324.
[32] N. Normand, I. Svalbe, B. Parrein, A. Kingston, Erasure coding with the Finite Radon
Transform, IEEE Wireless Communications and Networking Conference, 2010, 1-6.
[33] S.M.C. Pagani, R. Tijdeman, Algorithms for linear time reconstruction by discrete tomography, Discrete Appl. Math. 271 (2019), 152-170.
[34] B. Parrein, N. Normand, J. Guédon, Multimedia forward error correcting codes for wireless
LAN, Ann. Télécommun. 58 (2003), 448-463.
[35] D. Pertin, G. D’Ippolito, N. Normand, B. Parrein, Spatial implementation of erasure coding
by Finite Radon Transform, J. Electronic Imaging 21 (2012), no. 013023, 28 pp.
[36] P.M. Salzberg, R. Figueroa, Tomography on the 3D-torus and crystals, Ch. 19 of [28], pp.
417-434.
[37] A. Stolk, K.J. Batenburg, An algebraic framework for discrete tomography: revealing the
structure of dependencies, SIAM J. Discrete Math. 24 (2010), 1056-1079.
[38] M. Urvoy, D. Goudia, F. Autrusseau, Perceptual DFT watermarking with improved detection and robustness to geometrical distortions, IEEE Transactions on Information Forensics
and Security 9 (2014), 1108-1119.