IEICE TRANS. FUNDAMENTALS, VOL.E95–A, NO.5 MAY 2012
PAPER
On the Hardness of Subset Sum Problem from Different Intervals
Jun KOGURE†a), Noboru KUNIHIRO††, Members, and Hirosuke YAMAMOTO††, Fellow
SUMMARY    The subset sum problem, which is often called the knapsack problem, is known to be NP-hard, and there are several cryptosystems based on it. Assuming an oracle for the shortest vector problem of a lattice, the low-density attack algorithm by Lagarias and Odlyzko and its variants solve the subset sum problem efficiently when the “density” of the given problem is smaller than some threshold. When we define the density in the context of knapsack-type cryptosystems, the weights are usually assumed to be chosen uniformly at random from the same interval. In this paper, we focus on general subset sum problems, where this assumption may not hold. We assume that the weights are chosen from different intervals, and analyze the effect on the success probability of the above algorithms, both theoretically and experimentally. A possible application of our result in the context of knapsack cryptosystems is the security analysis when we reduce the data size of public keys.
key words: subset sum problem, knapsack problem, low-density attack, lattice reduction
1. Introduction
When a set of positive integers (weights) $S = \{a_1, \ldots, a_n\}$ ($a_i \neq a_j$ for $i \neq j$) and a positive integer $s$ are given, finding a vector $e = (e_1, \ldots, e_n) \in \{0,1\}^n$ satisfying $\sum_{i=1}^n a_i e_i = s$ is called the subset sum problem (or the knapsack problem), and it is known to be NP-hard in general (see, e.g., [4]). Lagarias-Odlyzko [8] and Brickell [1] independently found an algorithm (LO algorithm, hereafter) that solves subset sum problems using a lattice reduction algorithm. Both methods almost always solve the problem in polynomial time if we assume a shortest vector oracle of a lattice and if the density of the subset sum problem is less than $0.6463\ldots$, where the density $d$ is defined by
$$d = n / (\log_2 \max_i a_i). \qquad (1)$$
Coster, Joux, LaMacchia, Odlyzko, Schnorr, and Stern raised the critical density up to $0.9408\ldots$ (CJLOSS algorithm, hereafter) [2]. They assumed that all $a_i$'s are chosen uniformly at random from an interval $(0, A]$ for some integer $A$, and the density was defined as
$$d = n / \log_2 A. \qquad (2)$$
Since these algorithms are effective against subset sum problems with relatively low densities, they are sometimes called the “low-density attack” in the context of breaking knapsack-type cryptosystems.
Manuscript received September 26, 2011.
† The author is with Fujitsu Laboratories Ltd., Kawasaki-shi, 211-8588 Japan.
†† The authors are with The University of Tokyo, Kashiwa-shi, 277-8561 Japan.
a) E-mail: [email protected]
DOI: 10.1587/transfun.E95.A.903
Copyright © 2012 The Institute of Electronics, Information and Communication Engineers
However, for general densities the subset sum problem is still hard. In the LO algorithm, the subset sum problem is reduced to the Shortest Vector Problem (SVP) of a lattice constructed from the given problem, and one or two SVP oracle calls are admitted. Although no polynomial-time algorithm that solves the Shortest Vector Problem is known, the polynomial-time algorithm by Lenstra, Lenstra, and Lovász (LLL algorithm) [7] solves it within some approximation factor and works relatively better in practice than in theory. One can also use the block Korkine-Zolotarev (BKZ) algorithm [11] (as in [12]), which provides a better approximation factor but may not run in polynomial time if its block length parameter gets larger.
Several public key cryptosystems have been proposed whose security is based on the hardness of the subset sum problem. For example, Chor and Rivest proposed a cryptosystem that can use subset sum problems with relatively high densities [3]. Though the system was attacked by an algebraic approach [13], the attack may not be valid in general cases. Okamoto, Tanaka, and Uchiyama proposed another cryptosystem, OTU, in an attempt to resist adversaries that can run quantum computers [10].
In these cryptosystems the Hamming weight of the solutions is bounded by $\beta n$ for a small constant $\beta \le 1/2$. In general cases, we can take $\beta = 1/2$. For cases where $\beta$ is relatively small, Coster et al. [2] give improvements on their CJLOSS algorithm, which we refer to as the CJLOSS+ algorithm in this paper.
Our Motivation and Contributions:
In the context of knapsack-type cryptosystems, the public key $a_i$'s are often generated by taking values mod $A$ for some integer $A$. Hence it would be reasonable to adopt the following assumption:
Assumption 1. The $a_i$'s are chosen uniformly at random from the same interval $(0, A]$.
In this case, the density can be defined as in Eq. (2), and the effectiveness of the LO algorithm has been well analyzed.
On the other hand, in general subset sum problems this assumption may not always hold, and the effectiveness of the LO algorithm is not well known. In this paper, we focus on general subset sum problems and analyze their hardness, mainly out of theoretical interest. As the LO algorithm can be applied to general subset sum problems and often works efficiently, analyzing its effectiveness is very important for analyzing the hardness of general subset sum problems. In general cases, we are given the $a_i$'s without knowing from which intervals they are chosen. Given the $a_i$'s, we may adopt the following assumption:
Assumption 2. The $a_i$'s are chosen uniformly at random from the same interval $(0, \max_i a_i]$.
If we take this assumption, we can define the density as in Eq. (1). However, if the bit lengths of the $a_i$'s vary, this assumption is not appropriate, because the expected bit length under it is around $\log_2(\max_i a_i) - 1$. Actually, even if we have the same maximum value of the weights, i.e., the same density, experiments show different success probabilities of the LO algorithm when the bit lengths of the other weights vary. We will see this phenomenon in the following section. Another possible assumption would be:
Assumption 3. Each $a_i$ is chosen uniformly at random from the interval $(0, 2^{\lceil \log_2(a_i + 1) \rceil} - 1]$.
As the expected bit length of an integer chosen uniformly at random from the interval $(0, 2^m - 1]$ is around $m - 1$, this assumption would be reasonable in some sense. In a nutshell, this assumption means that a small weight is chosen from a small interval and a large weight is chosen from a large interval. Hence, in this paper we analyze the effectiveness of the LO algorithm under Assumption 3.
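As a quick check of this claim (ours, not from the paper): among the integers in $(0, 2^m - 1]$ there are exactly $2^{k-1}$ integers of bit length $k$ for $k = 1, \ldots, m$, so the expectation can be computed exactly.

    # Exact expected bit length of an integer drawn uniformly from (0, 2^m - 1].
    # There are 2^(k-1) integers of bit length k, for k = 1, ..., m.
    m = 20
    expected = sum(k * 2 ** (k - 1) for k in range(1, m + 1)) / (2 ** m - 1)
    print(expected)  # -> 19.000..., i.e. about m - 1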
In general cases, efficient attacks might be possible by decomposing the problem, but we focus on solving the problem by the LO algorithm, as it can be applied in any case. We introduce another density $d_{HM}$ under this assumption, and examine its validity as a criterion for the hardness of the subset sum problem theoretically. We also conduct experiments solving subset sum problems while changing the bit lengths of the weights, and analyze the effect on the success probability.
A possible application of our work in the context of knapsack-type cryptosystems is the security analysis of a system when we reduce the data size of its public keys. If we would like to reduce the total public key size in knapsack-type cryptosystems, we need to have weights with shorter bit lengths. In order to assure the security of such systems, we need to analyze the hardness of general subset sum problems in our setting.
In Sect. 2 we briefly review previous work on the LO algorithm and its variants using lattice reduction, and consider changing the bit lengths of the weights, which motivated our work. In Sect. 3, we assume that the weights are chosen from different intervals, and present theoretical results in the asymptotic case. We also look into the non-asymptotic case and analyze the success probability of the LO algorithm and its variants through experiments.
2. Previous Works and Concerns
In this section, we review the LO algorithm by Lagarias and Odlyzko, and the improvements by Coster et al. (the CJLOSS/CJLOSS+ algorithms). We then turn our attention to changing the bit lengths of the weights, and examine the effect on the success probability of the CJLOSS+ algorithm.
2.1 LO Algorithm and its Variants
First we review the LO algorithm:

Input: $a_1, \ldots, a_n$ and $s$.
Output: $(e_1, \ldots, e_n) \in \{0,1\}^n$ such that $\sum_{i=1}^n a_i e_i = s$.
Procedure:
    $N \leftarrow \lfloor \sqrt{n} \rfloor$;
    invoke a shortest vector oracle with the following basis:
        $b_1 = (1, 0, \ldots, 0, N a_1)$,
        $b_2 = (0, 1, \ldots, 0, N a_2)$,
            $\vdots$
        $b_n = (0, 0, \ldots, 1, N a_n)$,
        $b_{n+1} = (0, 0, \ldots, 0, N s)$;
    let $(e'_1, \ldots, e'_n, e'_{n+1})$ be the return value;
    if $\sum_{i=1}^n \pm a_i e'_i = s$ and $\pm e'_i \in \{0, 1\}$ for all $1 \le i \le n$ and $e'_{n+1} = 0$,
        then output $\pm(e'_1, \ldots, e'_n)$ and halt;
    else output “not found”;
end
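The following Python sketch mirrors the pseudocode above. It is our illustration, not code from the paper: the function names are ours, and the shortest vector oracle is stubbed with a brute-force search over $\{-1, 0, 1\}$-coefficient combinations, which is only feasible for toy sizes (an actual attack would substitute a lattice reduction such as LLL or BKZ, e.g., via the fpylll library).

    import itertools, math

    def lo_basis(a, s):
        # Build the (n+1)-dimensional Lagarias-Odlyzko basis.
        n = len(a)
        N = math.isqrt(n)  # N = floor(sqrt(n)), as in the pseudocode
        rows = []
        for i in range(n):
            row = [0] * (n + 1)
            row[i] = 1
            row[n] = N * a[i]
            rows.append(row)
        rows.append([0] * n + [N * s])
        return rows

    def svp_oracle(basis):
        # Toy stand-in for the SVP oracle: enumerate small integer
        # combinations of the basis vectors, return the shortest nonzero one.
        dim = len(basis)
        best, best_norm = None, None
        for coeffs in itertools.product((-1, 0, 1), repeat=dim):
            if not any(coeffs):
                continue
            v = [sum(c * b[j] for c, b in zip(coeffs, basis)) for j in range(dim)]
            norm = sum(x * x for x in v)
            if best is None or norm < best_norm:
                best, best_norm = v, norm
        return best

    def lo_solve(a, s):
        n = len(a)
        v = svp_oracle(lo_basis(a, s))
        if v[n] != 0:
            return None  # "not found"
        for sign in (1, -1):
            e = [sign * x for x in v[:n]]
            if all(x in (0, 1) for x in e) and sum(x * y for x, y in zip(a, e)) == s:
                return e
        return None  # "not found"

    a, s = [13, 7, 22, 5], 13 + 22   # toy instance whose solution is (1, 0, 1, 0)
    print(lo_solve(a, s))            # -> [1, 0, 1, 0]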
Theorem 1 ([8]). Let $A$ be a positive integer, and let $a_1, \ldots, a_n$ be random integers with $0 < a_i \le A$ for $1 \le i \le n$. Let $e = (e_1, \ldots, e_n) \in \{0,1\}^n$ be arbitrary, and let $s = \sum_{i=1}^n e_i a_i$. If the density $d < d_0 = 0.6463\ldots$, then the LO algorithm “almost always” solves the subset sum problem defined by $a_1, \ldots, a_n$ and $s$, assuming a shortest vector problem oracle.
As we would like to assume that the number of indices $i$ such that $e_i = 1$ is less than or equal to $n/2$, we actually execute the procedure also for $s' = (\sum_{i=1}^n a_i) - s$.
In the CJLOSS algorithm, $N$ is replaced by $\lfloor \frac{1}{2}\sqrt{n} \rfloor$, and the vector $b_{n+1}$ is replaced by
$$\Big(\frac{1}{2}, \frac{1}{2}, \ldots, \frac{1}{2}, N s\Big).$$
Checking whether $\sum_{i=1}^n \pm a_i e'_i = s$ and $\pm e'_i \in \{0, 1\}$ is replaced by checking whether $\sum_{i=1}^n a_i(\pm e'_i + \frac{1}{2}) = s$ and $e'_i \in \{\frac{1}{2}, -\frac{1}{2}\}$, and the output is replaced by $(\pm e'_1 + \frac{1}{2}, \ldots, \pm e'_n + \frac{1}{2})$. We also have the following theorem.
Theorem 2 ([2]). Let $A$ be a positive integer, and let $a_1, \ldots, a_n$ be random integers with $0 < a_i \le A$ for $1 \le i \le n$. Let $e = (e_1, \ldots, e_n) \in \{0,1\}^n$ be arbitrary, and let $s = \sum_{i=1}^n e_i a_i$. If the density $d < d_1 = 0.9408\ldots$, then the CJLOSS algorithm “almost always” solves the subset sum problem defined by $a_1, \ldots, a_n$ and $s$, assuming a shortest vector problem oracle.
Table 1  Success probability of CJLOSS+ algorithm (in case the ratio of two kinds of bit lengths varies).

    No. of 40-bit a_i's   No. of 60-bit a_i's   Success (%)
            60                     0                60.0
            59                     1                72.3
            55                     5                85.1
            50                    10                88.0
            45                    15                98.0
            40                    20               100.0
            35                    25               100.0
            30                    30                99.9
            25                    35                99.7
            20                    40               100.0
            15                    45               100.0
            10                    50               100.0
             5                    55                99.9
             0                    60               100.0
In some cryptosystems, such as the Chor-Rivest cryptosystem, the Hamming weight $k$ of the solutions is bounded by $k = \beta n$ for a small constant $\beta \le 1/2$. Coster et al. remarked a further improvement (the CJLOSS+ algorithm) in such cases. In the CJLOSS+ algorithm, $N$ is replaced by $\lfloor \sqrt{\beta(1-\beta)n} \rfloor$, and the vector $b_{n+1}$ is replaced by
$$(\beta, \beta, \ldots, \beta, N s).$$
Checking whether $\sum_{i=1}^n \pm a_i e'_i = s$ and $\pm e'_i \in \{0, 1\}$ is replaced by checking whether $\sum_{i=1}^n a_i(\pm e'_i + \beta) = s$ and $e'_i \in \{\pm(1-\beta), \pm\beta\}$, and the output is replaced by $(\pm e'_1 + \beta, \ldots, \pm e'_n + \beta)$.
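To make the modified basis and check concrete, the sketch below adapts the lo_basis/lo_solve functions from the sketch in Sect. 2.1 to the CJLOSS+ setting; setting $\beta = 1/2$ recovers CJLOSS. This is again our own illustration: the helper names are ours, exact rational arithmetic via fractions is a convenience choice, and we take $N$ one larger than the floor so that $N > \sqrt{\beta(1-\beta)n}$, as the analysis requires.

    import math
    from fractions import Fraction

    def cjloss_plus_basis(a, s, beta):
        # Same basis as LO, except b_{n+1} = (beta, ..., beta, N s).
        n = len(a)
        N = math.floor(math.sqrt(float(beta) * (1 - float(beta)) * n)) + 1
        rows = []
        for i in range(n):
            row = [Fraction(0)] * (n + 1)
            row[i] = Fraction(1)
            row[n] = Fraction(N * a[i])
            rows.append(row)
        rows.append([Fraction(beta)] * n + [Fraction(N * s)])
        return rows

    def cjloss_plus_decode(v, a, s, beta):
        # Accept v if, up to a global sign, v_i + beta gives e_i in {0, 1}.
        n = len(a)
        if v is None or v[n] != 0:
            return None  # "not found"
        for sign in (1, -1):
            e = [sign * x + Fraction(beta) for x in v[:n]]
            if all(x in (0, 1) for x in e) and sum(x * y for x, y in zip(a, e)) == s:
                return [int(x) for x in e]
        return None  # "not found"

    a, s = [13, 7, 22, 5], 13 + 22
    v = svp_oracle(cjloss_plus_basis(a, s, Fraction(1, 2)))  # oracle stub from Sect. 2.1
    print(cjloss_plus_decode(v, a, s, Fraction(1, 2)))       # -> [1, 0, 1, 0]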
2.2 Changing Bit Lengths of Weights
In the definition (1) of the density, $d$ is determined only by the maximum value of the given weights when the number $n$ of weights is fixed:
$$d = n / (\log_2 \max_i a_i).$$
Even if the maximum value of the weights is fixed, changing the bit lengths of the other weights may affect the success probability of the LO algorithm and its variants. We see this through experiments.
First, we take $n = 60$, and the Hamming weight $k$ of the solution is 6. We take $a_i$'s of bit length 60 or 40, change their ratio, and run the CJLOSS+ algorithm 1000 times for each ratio; for a fixed ratio, we choose different sets of $a_i$'s 1000 times. As a lattice reduction algorithm, we use the block Korkine-Zolotarev algorithm with block length 20. Table 1 shows the success probability of the CJLOSS+ algorithm in percent. Though the density of the $a_i$'s is almost 1 except in the top row of the table, the success probability varies.
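For concreteness, one instance of this experiment can be generated as in the sketch below. The sampling details are our own assumptions where the paper is silent: each weight is drawn uniformly among the integers of exactly the stated bit length, and the (unlikely) event of duplicate weights is ignored.

    import random

    def gen_instance(bit_lengths, k, rng=random):
        # Random subset sum instance with prescribed bit lengths and
        # solution Hamming weight k.
        n = len(bit_lengths)
        a = [rng.randrange(2 ** (m - 1), 2 ** m) for m in bit_lengths]
        ones = set(rng.sample(range(n), k))      # support of the solution
        e = [1 if i in ones else 0 for i in range(n)]
        s = sum(ai * ei for ai, ei in zip(a, e))
        return a, s, e

    # n = 60 weights, e.g. 40 weights of bit length 40 and 20 of bit
    # length 60, with Hamming weight k = 6 as in Table 1's experiments.
    a, s, e = gen_instance([40] * 40 + [60] * 20, k=6)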
Next, we consider another pattern of weights, where the bit lengths are uniformly distributed. The numbers in the left column of Table 2 represent the number $n$ of weights, and the numbers in the first row represent the bit length $m$ of the $a_i$'s. “71-80” means that the bit lengths are uniformly distributed from 71 to 80, i.e., there are 7 or 8 $a_i$'s of each bit length.
Table 2  Success probability of CJLOSS+ algorithm (in case bit lengths of weights vary).

      n    m = 71   m = 71-80   m = 74   m = 80
     70     95.0       98.5      98.8     99.9
     80     36.2       51.1      44.6     58.8
The other numbers in the table represent the success probability of the CJLOSS+ algorithm in percent, when we run the algorithm 1000 times, generating $n$ random weights of bit length $m$ each time. As a lattice reduction algorithm, we use the block Korkine-Zolotarev algorithm with block length 20. The column of bit length $m = 71\textrm{-}80$ and the column of bit length $m = 80$ in Table 2 have almost the same density according to the definition (1), as the maximum bit length $m$ of the weights is 80 in both cases. Even though they have almost the same density, the success probability is smaller for $m = 71\textrm{-}80$. These phenomena indicate that the definition (1) of the density may not be fully appropriate.
3. Analysis in Case Weights are Chosen from Different Intervals
In previous works, it is assumed that the weights are chosen from a single interval. In this section we assume that they are chosen from different intervals, and we describe our theoretical and experimental analysis in that case.
3.1 Theoretical Results in Asymptotic Case
Theorem 3. Let $e = (e_1, \ldots, e_n) \neq (0, \ldots, 0) \in \{0,1\}^n$ be fixed. Let $A_1, \ldots, A_n$ be positive integers and $a_1, \ldots, a_n$ be integers chosen uniformly and independently at random with $0 < a_i \le A_i$ for $1 \le i \le n$. Let $s = \sum_{i=1}^n e_i a_i$, and let $L$ be the lattice spanned by the following basis:
$$b_1 = (1, 0, \ldots, 0, N a_1),$$
$$b_2 = (0, 1, \ldots, 0, N a_2),$$
$$\vdots$$
$$b_n = (0, 0, \ldots, 1, N a_n),$$
$$b_{n+1} = (0, 0, \ldots, 0, N s),$$
where $N$ is a positive integer larger than $\sqrt{n}$. Let $\delta(u_0)$ be the minimum value of the following function of $u \in \mathbb{R}^+$:
$$\delta(u) = \frac{1}{2}\,u + \ln \theta(e^{-u}), \qquad \theta(z) = 1 + 2\sum_{j=1}^{\infty} z^{j^2},$$
and let $c_0$ denote $(\log_2 e)\,\delta(u_0)$. Then the probability $P$ that the shortest vector in $L$ is not equal to $\hat{e} = (e_1, \ldots, e_n, 0)$ is less than
$$(2n\sqrt{n/2} + 1)\,2^{c_0 n} \sum_{i=1}^{n} \frac{1}{A_i}.$$
Note that the critical density $d_0 = 0.6463\ldots$ in the LO algorithm case coincides with $1/c_0$ in the above statement [9].
Proof. Let $t = \sum_{i=1}^n a_i$. We may assume that
$$\frac{1}{n}\,t \le s \le \frac{n-1}{n}\,t,$$
because otherwise any $a_i \ge t/n$ may be removed from consideration. The vector $\hat{e} = (e_1, \ldots, e_n, 0)$ is contained in $L$. We should consider the probability that there exists a vector $\hat{x} = (x_1, \ldots, x_n, x_{n+1})$ satisfying the following conditions:
$$\|\hat{x}\| \le \|\hat{e}\|, \quad \hat{x} \in L, \quad \hat{x} \notin \{0, \pm\hat{e}\}, \qquad (3)$$
where $\|x\|$ represents the Euclidean norm of $x$. Then $\hat{x}$ satisfies the condition (3) only when $x_{n+1} = 0$, because otherwise we would have $\|\hat{x}\| \ge |x_{n+1}| \ge N > \sqrt{n} \ge \|\hat{e}\|$, which contradicts the condition (3). Hence we have some integer $y$ that satisfies
$$ys = \sum_{i=1}^{n} x_i a_i.$$
Moreover, $|y| \le n\sqrt{n/2}$ holds, because
$$|y|\,s = \Big|\sum_{i=1}^{n} x_i a_i\Big| \le \|\hat{x}\| \sum_{i=1}^{n} a_i = \|\hat{x}\|\,t,$$
and without loss of generality we may assume that $\|e\| \le \sqrt{n/2}$. Let $x$ denote $x = (x_1, \ldots, x_n)$, and let $z_i = x_i - y e_i$. Then we have
$$P \le \#\{x \in \mathbb{Z}^n \mid \|x\| \le \|e\|\} \cdot \#\{y \in \mathbb{Z} \mid |y| \le n\sqrt{n/2}\} \cdot \Pr\Big[\sum_{i=1}^{n} a_i z_i = 0\Big]. \qquad (4)$$
From Lemma 1 in [9], the first term of (4) is bounded by $2^{(\log_2 e)\delta(u) n}$ for any $u \in \mathbb{R}^+$. From Theorem 1 in [9], there exists some value $u_0 \in \mathbb{R}^+$ at which $\delta(u)$ attains its minimum. Writing $(\log_2 e)\delta(u_0)$ as $c_0$, the first term of (4) is bounded by $2^{c_0 n}$. When $z_n = z_{n-1} = \cdots = z_{i+1} = 0$ and $z_i \neq 0$, let $z'$ denote $z' = -\frac{1}{z_i}\sum_{j=1}^{i-1} a_j z_j$; then
$$\Pr\Big[\sum_{j=1}^{n} a_j z_j = 0 \,\Big|\, z_n = \cdots = z_{i+1} = 0,\ z_i \neq 0\Big] = \Pr[a_i = z' \mid z_n = \cdots = z_{i+1} = 0,\ z_i \neq 0]$$
$$= \sum_{l=1}^{A_i} \Pr[a_i = l] \cdot \Pr[z' = l \mid z_n = \cdots = z_{i+1} = 0,\ z_i \neq 0] = \frac{1}{A_i} \sum_{l=1}^{A_i} \Pr[z' = l \mid z_n = \cdots = z_{i+1} = 0,\ z_i \neq 0] \le \frac{1}{A_i}.$$
Hence we can estimate the last term of (4) by
$$\Pr\Big[\sum_{j=1}^{n} a_j z_j = 0\Big] = \sum_{i=1}^{n} \Pr[z_n = \cdots = z_{i+1} = 0,\ z_i \neq 0] \cdot \Pr[a_i = z' \mid z_n = \cdots = z_{i+1} = 0,\ z_i \neq 0]$$
$$\le \sum_{i=1}^{n} \frac{1}{A_i}\Pr[z_n = \cdots = z_{i+1} = 0,\ z_i \neq 0] \qquad (5)$$
$$\le \sum_{i=1}^{n} \frac{1}{A_i}.$$

Corollary 1. Let $HM(A_1, \ldots, A_n)$ denote the harmonic mean of $A_1, \ldots, A_n$, i.e.,
$$HM(A_1, \ldots, A_n) = \frac{n}{\frac{1}{A_1} + \cdots + \frac{1}{A_n}}.$$
If for some $c > c_0$,
$$\lim_{n\to\infty} \frac{\log_2 HM(A_1, \ldots, A_n)}{n} = c,$$
then $P \to 0$ ($n \to \infty$).

Proof. From Theorem 3, we have
$$\lim_{n\to\infty} P \le \lim_{n\to\infty} \frac{n\,(2n\sqrt{n/2} + 1)\,2^{c_0 n}}{HM(A_1, \ldots, A_n)} = 0.$$
The above Corollary 1 indicates that, in case we choose the $a_i$'s from different intervals $(0, A_1], \ldots, (0, A_n]$, we may use another indicator as the density:
$$d_{HM} = \frac{n}{\log_2 HM(A_1, \ldots, A_n)}. \qquad (6)$$
If we assume an SVP oracle of a lattice, we can asymptotically solve the subset sum problem when $d_{HM}$ is smaller than the critical density $d_0 = 0.6463\ldots$.
The reason why the harmonic mean of the $A_i$'s appears here is as follows. In the inequality (4) of Theorem 3, we bound the third term $\Pr[\sum_{i=1}^n a_i z_i = 0]$ by $\sum_{i=1}^n \frac{1}{A_i}$. In Theorem 2, the corresponding term is bounded by $\frac{n}{A}$. If we write
$$\sum_{i=1}^{n} \frac{1}{A_i} = \frac{n}{A'}$$
for some $A'$, and replace $A$ in the definition (2) of the usual density with $A'$ in our case, we are able to prove our statement. From the above equation, $A'$ coincides with the harmonic mean of the $A_i$'s. Hence $d_{HM}$ can be regarded as a natural extension of the usual density $d$; if all the $A_i$'s are the same value $A$, then $d_{HM}$ coincides with $d$.
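As a small numerical illustration (ours, not the paper's), the following computes $d$ and $d_{HM}$ from a list of bit lengths, modeling the interval of a weight of bit length $m_i$ as $A_i = 2^{m_i} - 1$ in the spirit of Assumption 3, and approximating $\log_2 \max_i a_i$ by the maximum bit length:

    import math

    def densities(bit_lengths):
        # d as in Eq. (1), d_HM as in Eq. (6), with A_i = 2^(m_i) - 1.
        n = len(bit_lengths)
        d = n / max(bit_lengths)
        hm = n / sum(1 / (2 ** m - 1) for m in bit_lengths)
        return d, n / math.log2(hm)

    # n = 80 weights with bit lengths uniform over 71..80 (8 of each),
    # versus all 80 weights of bit length 80.
    print(densities([m for m in range(71, 81) for _ in range(8)]))
    # -> (1.0, ~1.09): d stays 1 while d_HM exceeds 1 (log2 HM ~ 73.3)
    print(densities([80] * 80))
    # -> (1.0, ~1.0): for a single interval, d_HM reduces to d

This is in line with the discussion in Sect. 3.2, where the value $d_{HM} \approx 80/74$ is quoted for this setting; the exact value depends on how the intervals $A_i$ are modeled.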
Further, we may combine this density with Kunihiro's density [6]
$$D = \frac{n\,H(k/n)}{\log_2 A},$$
where $H$ is the binary entropy function $H(x) = -x \log x - (1-x)\log(1-x)$ and $k$ is the Hamming weight of the solution:
$$D_{HM} = \frac{n\,H(k/n)}{\log_2 HM(A_1, \ldots, A_n)}.$$
In the case of the CJLOSS algorithm, we similarly have the following results.

Theorem 4. Let $e = (e_1, \ldots, e_n) \neq (0, \ldots, 0) \in \{0,1\}^n$ be fixed. Let $A_1, \ldots, A_n$ be positive integers and $a_1, \ldots, a_n$ be integers chosen uniformly and independently at random with $0 < a_i \le A_i$ for $1 \le i \le n$. Let $s = \sum_{i=1}^n e_i a_i$, and let $L$ be the lattice spanned by the following basis:
$$b_1 = (1, 0, \ldots, 0, N a_1),$$
$$b_2 = (0, 1, \ldots, 0, N a_2),$$
$$\vdots$$
$$b_n = (0, 0, \ldots, 1, N a_n),$$
$$b'_{n+1} = \Big(\frac{1}{2}, \frac{1}{2}, \ldots, \frac{1}{2}, N s\Big),$$
where $N$ is a positive integer larger than $\frac{1}{2}\sqrt{n}$. Let $\delta_{1/2}(u_1)$ be the minimum value of the following function of $u \in \mathbb{R}^+$:
$$\delta_{1/2}(u) = \frac{1}{4}\,u + \ln \theta(e^{-u}), \qquad \theta(z) = 1 + 2\sum_{j=1}^{\infty} z^{j^2},$$
and let $c_1$ denote $(\log_2 e)\,\delta_{1/2}(u_1)$. Then the probability $P$ that the shortest vector in $L$ is not equal to $\hat{e}' = (e_1 - \frac{1}{2}, \ldots, e_n - \frac{1}{2}, 0)$ is less than
$$(4n\sqrt{n} + 1)\,2^{c_1 n} \sum_{i=1}^{n} \frac{1}{A_i}.$$

Note that the critical density $d_1 = 0.9408\ldots$ in the CJLOSS algorithm case coincides with $1/c_1$ in the above statement (Theorem 3.1 in [2]).
Corollary 2. Let $HM(A_1, \ldots, A_n)$ denote the harmonic mean of $A_1, \ldots, A_n$. If for some $c > c_1$,
$$\lim_{n\to\infty} \frac{\log_2 HM(A_1, \ldots, A_n)}{n} = c,$$
then $P \to 0$ ($n \to \infty$).

In the case of the CJLOSS+ algorithm, we use the following function $\delta_\beta(u)$ of $u \in \mathbb{R}^+$ (Theorem 3.1 in [5]):
$$\delta_\beta(u) = \beta(1-\beta)\,u + \ln \theta(e^{-u}), \qquad \theta(z) = 1 + 2\sum_{j=1}^{\infty} z^{j^2}.$$
If we set $\beta = \frac{1}{2}$, this function coincides with $\delta_{1/2}(u)$ in Theorem 4.
dHM ≈
80
74
in the case weights are uniformly distributed from 71 to 80,
and its success probability is closer to the case where all
80
, than the case all
weights have bit length 74 and dHM ≈ 74
80
weights have bit length 80 and dHM ≈ 80 = 1. This may be
rather an ideal case, and in general non-asymptotic case, we
need minute examination of the inequality (5) in the proof
of Theorem 3. Let Pi denote the probability Pr[zn = zn−1 =
. . . = zi+1 = 0, zi 0]. In the proof, we bounded each
term Pi A1i with A1i in an asymptotic case. However, when we
deal with concrete subset sum problems where n is a fixed
value, we should rather analyze the coefficients Pi minutely.
For example, if the range of the distribution of bit lengths is
wide, the value of a1i for smaller ai will get far greater than
that of larger ai , hence the harmonic mean of ai ’s will lean
to smaller ai and the effect of larger ai in the indicator dHM
might get smaller than its actual effect.
Another factor we have to consider is the approximate
factor of the actual lattice reduction algorithms. However,
as the running time grows exponentially if we use the exact algorithms, considering this effect is a difficult task in
analyzing results of actual experiments.
3.3 Application in the Context of Knapsack Cryptosystems

A possible application of our work in the context of knapsack-type cryptosystems is to use it in the security analysis of a system when we reduce the data size of its public keys. If we would like to reduce the total public key size in knapsack-type cryptosystems, we need to have weights with shorter bit lengths. For example, if we have 80 public key weights of 80 bits each, the total public key size is 6400 bits. If we instead have 80 weights with bit lengths between 61 and 80, with 4 keys for each bit length, the total public key size is $4 \times (61 + 62 + \cdots + 80) = 5640$ bits, reducing the public key data size by 11.9%. In order to assure the security of the system, we need to analyze the hardness of general subset sum problems in our setting.

4. Concluding Remarks

In this paper, we considered the hardness of general subset sum problems against the LO algorithm and its variants, under the assumption that the weights are chosen from different intervals. In the asymptotic case, we introduced another
density that works as a criterion for the success probabilities of the LO algorithm and its variants, and obtained some theoretical results. In the non-asymptotic case, we saw the effectiveness of, and the concerns about, our new density through concrete experiments. A possible application of our result in the context of knapsack cryptosystems is the security analysis when we reduce the data size of public keys.

Our future work will be to obtain tighter bounds on the success probability of the LO algorithm and its variants, which will be useful for estimating the hardness of general subset sum problems more precisely.
References
[1] E.F. Brickell, “Breaking iterated knapsacks,” Advances in Cryptology: Proc. CRYPTO’84, LNCS 196, pp.342–358, Springer-Verlag,
1985.
[2] M.J. Coster, A. Joux, B.A. LaMacchia, A.M. Odlyzko, C.P. Schnorr,
and J. Stern, “Improved low-density subset sum algorithms,” Computational Complexity, vol.2, pp.111–128, 1992.
[3] B. Chor and R.L. Rivest, “A knapsack-type public key cryptosystem based on arithmetic in finite fields,” IEEE Trans. Inf. Theory, vol.34, no.5, pp.901–909, 1988.
[4] M.R. Garey and D.S. Johnson, Computers and Intractability: A
Guide to the Theory of NP-Completeness, W.H. Freeman and Co.,
San Francisco, 1979.
[5] T. Izu, J. Kogure, T. Koshiba, and T. Shimoyama, “Low-density
attack revisited,” Designs, Codes and Cryptography, vol.43, no.1,
pp.47–59, Springer, 2007.
[6] N. Kunihiro, “New definition of density on knapsack cryptosystems,” Progress in Cryptology: Proc. Africacrypt 2008, LNCS 5023, pp.156–173, Springer, 2008.
[7] A.K. Lenstra, H.W. Lenstra, Jr., and L. Lovász, “Factoring polynomials with rational coefficients,” Mathematische Annalen, vol.261,
pp.515–534, 1982.
[8] J.C. Lagarias and A.M. Odlyzko, “Solving low-density subset sum problems,” J. ACM, vol.32, no.1, pp.229–246, 1985.
[9] J.E. Mazo and A.M. Odlyzko, “Lattice points in high-dimensional spheres,” Monatsh. Math., vol.110, pp.47–61, 1990.
[10] T. Okamoto, K. Tanaka, and S. Uchiyama, “Quantum public-key
cryptosystems,” Advances in Cryptology: Proc. CRYPTO 2000,
LNCS 1880, pp.147–165, Springer, 2000.
[11] C.P. Schnorr and M. Euchner, “Lattice basis reduction: Improved
practical algorithms and solving subset sum problems,” Mathematical Programming, vol.66, pp.181–199, 1994.
[12] C.P. Schnorr and H.H. Hörner, “Attacking the Chor-Rivest cryptosystem by improved lattice reduction,” Advances in Cryptology:
Proc. EUROCRYPT’95, LNCS 921, pp.1–12, Springer, 1995.
[13] S. Vaudenay, “Cryptanalysis of the Chor-Rivest cryptosystem,” J. Cryptology, vol.14, no.2, pp.87–100, 2001.
Jun Kogure
received the B.Sc. and M.Sc.
degrees in mathematics and the Ph.D. degree
in complexity science and engineering from the
University of Tokyo in 1985, 1987 and 2012, respectively. He joined Fujitsu Ltd. in 1987 and
moved to Fujitsu Laboratories Ltd. in 1998. He
was a visiting associate professor and a visiting professor of the University of Tokyo in 2005
and 2007, respectively. He has been a non-full-time lecturer at Chuo University since 2005, and was a non-full-time lecturer at Waseda University from 2007 to 2010. He received the Electrical Science and Engineering
Promotion Award and IPSJ Kiyasu Special Industrial Achievement Award
in 2007. He was a member of Cryptography Research and Evaluation Committees from 2000 to 2007, and has been a secretary of the Committees
since 2008. He is a member of IPSJ and JSSAC. His research interests are
in cryptography and number theoretic algorithms.
Noboru Kunihiro
received his B.E., M.E.
and Ph.D. in mathematical engineering and information physics from the University of Tokyo
in 1994, 1996 and 2001, respectively. He is
an Associate Professor of the University of Tokyo. He was a researcher of NTT Communication Science Laboratories from 1996 to 2002.
He was an associate professor of the University
of Electro-Communications from 2002 to 2008.
His research interest includes cryptography and
information security. He received the SCIS’97
Paper Prize and the Best Paper Award of IEICE in 2010.
Hirosuke Yamamoto
was born in Wakayama, Japan, on November 15, 1952. He received the B.E. degree from Shizuoka University, in 1975 and the M.E. and Ph.D. degrees
from the University of Tokyo, in 1977 and 1980,
respectively, all in electrical engineering. In
1980, he joined Tokushima University. He was
an Associate Professor at Tokushima University,
the University of Electro-Communications, and
the University of Tokyo, from 1983 to 1987,
from 1987 to 1993, and from 1993 to 1999, respectively. Since 1999, he has been a Professor at the University of Tokyo. He was with the School of Engineering and the School of Information Science and Technology from 1993 to 1999 and from 1999 to 2004,
respectively, and is currently with the School of Frontier Sciences in the
University of Tokyo. In 1989 and 1990, he was a Visiting Scholar at the
Information Systems Laboratory, Stanford University. His research interests are in Shannon theory, data compression algorithms, and cryptology.
Dr. Yamamoto is a Fellow of the IEEE. He served as an Associate Editor for
Shannon Theory, IEEE Transactions on Information Theory from 2007 to
2010 and the Editor-in-Chief for the IEICE Transactions on Fundamentals
from 2010 to 2011. He is currently the Junior President of the Engineering
Sciences Society of the IEICE.