
There are many Fully Homomorphic Encryption over the Integers schemes whose security is based on the intractability of the Approximate GCD (AGCD) problem.

The paper Algorithms for the Approximate Common Divisor Problem surveys several lattice reduction based algorithms for solving AGCD. All of them require $t > \frac{\gamma - \eta} {\eta - \rho}$ samples to solve the $(\gamma, \eta, \rho)$-AGCD problem by reducing a particular $t \times t$ lattice.
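For a rough sense of scale, here is a quick numeric evaluation of this bound for the DGHV asymptotic parameters $(\lambda^5, \lambda^2, \lambda)$ discussed below; the concrete value $\lambda = 80$ is an illustrative assumption, not a figure from either paper:

```python
# Evaluate the sample bound t > (gamma - eta) / (eta - rho) for the DGHV
# asymptotic parameters; lam = 80 is an illustrative security level.
lam = 80
gamma, eta, rho = lam**5, lam**2, lam

t_min = (gamma - eta) / (eta - rho)
print(f"need t > {t_min:.0f} samples")  # 518480, i.e. on the order of lam^3
print(f"lam**3 = {lam**3}")             # 512000
```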

In Section 5.2 of Fully Homomorphic Encryption over the Integers, the authors state a "rule of thumb" that lattice reduction algorithms acting on a $t \times t$ lattice take time roughly $2^{t/k}$ to output a $2^k$ approximation of the shortest vector.

However, LLL lattice reduction is known to run in polynomial time with respect to both the dimension of the lattice and the input sizes. Concretely, for the DGHV scheme where $(\gamma, \eta, \rho) \sim (\lambda^5, \lambda^2, \lambda)$, a quick estimation yields that $L^2$ lattice reduction can break DGHV in time $\lambda^{25}$ with memory $\lambda^{11}$ given $\lambda^{3}$ samples.

So why does the aforementioned paper treat the runtime of lattice attacks on AGCD as exponential?

Comments:

  • My quick educated guess is that LLL reduction does not give a good enough solution (i.e., short enough vectors) to break DGHV. Remember that LLL only gives an exponential (in the dimension) approximation to the shortest vector. Breaking DGHV would require a much smaller approximation, and thus a much higher running time: exponential in $\lambda$, if all the calculations are right. (Oct 2, 2018 at 21:58)
  • It gives the right answer iff $t$ exceeds the bound I stated. – robertkin (Oct 2, 2018 at 21:59)
  • But here "reducing a $t$-dimensional lattice" does not mean just running LLL on it; you need a shorter vector than LLL would give, which requires more time to find. Your calculations don't appear to have accounted for the approximation factor required to break DGHV. (Oct 2, 2018 at 22:02)
  • From page 5, the bound on $t$ is obtained by comparing the gap between the sizes of our target vector and the next independent shortest vector in the lattice. LLL solves the AGCD instance if the ratio of their norms is greater than $2^{t/2}$, because LLL is guaranteed to find a vector within that factor of the shortest vector. – robertkin (Oct 2, 2018 at 22:44)
  • As far as I can tell, your explanation below appears to make sense. It's a bit puzzling because I had thought the formulas were claimed to yield (asymptotically) exponential $2^\lambda$ security for parameters polynomial in $\lambda$, but LLL has polynomial running time for such parameters. (Dec 6, 2018 at 2:02)

2 Answers


TL;DR

  1. The AGCD problem does require asymptotically exponential time to solve.
  2. In general, LLL cannot solve the AGCD problem.
  3. The parameters $(\gamma, \eta, \rho) = (\lambda^5, \lambda^2, \lambda)$ proposed in DGHV10 guarantee an (asymptotic) security level of $2^{\Omega(\lambda)}$.

Lattice attacks on the AGCD problem

You are supposing that one can solve the AGCD problem using LLL. If that were possible, then indeed the AGCD problem would be solved in polynomial time. However, it is not possible in general; an attacker has to use BKZ with a large block size, which takes exponential time.

Roughly speaking, in the AGCD problem there is a "gap" $\Delta := \eta - \rho$ between the secret value $p$, whose bit length is $\eta$, and the noise terms, whose bit length is $\rho$. The lattice attacks end up trying to find vectors "in this gap", that is, vectors whose norms are bigger than $2^\rho$ but smaller than $2^\eta$; thus, we always have a necessary condition of the form $$ \alpha t + \frac{\gamma}{t} < \eta - \rho \qquad (\text{Inequality } 1)$$ where $2^\alpha$ is the root Hermite factor of the lattice-basis reduction algorithm.
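To make the role of this condition concrete, here is a minimal sketch (not from any of the papers; the function name is mine) that checks whether any dimension $t$ can satisfy Inequality (1), using the fact that $\alpha t + \gamma/t$ is minimized at $t = \sqrt{\gamma/\alpha}$:

```python
from math import log2, sqrt

def gap_attack_feasible(gamma: int, eta: int, rho: int, hermite: float = 1.02) -> bool:
    """Return True iff some dimension t can satisfy Inequality (1).

    alpha*t + gamma/t is minimized at t = sqrt(gamma/alpha), where it
    equals 2*sqrt(alpha*gamma), so a suitable t exists iff that minimum
    is strictly below the gap eta - rho.
    """
    alpha = log2(hermite)            # root Hermite factor is 2^alpha = hermite
    return 2 * sqrt(alpha * gamma) < eta - rho
```

The check passes exactly when $\gamma < (\eta - \rho)^2 / (4\alpha)$, which is the same discriminant condition derived in the "Ruling out LLL" section below.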

For example, in the paper you linked, GGM16, Inequality (4) on page 5 is essentially the same as Inequality (1) above if you cancel out the term $\sqrt{t+1}$ that appears on both sides and ignore the constants $0.47$ and $\sqrt{1 / (2\pi e)} \approx 0.242$, but don't ignore the exponential approximation factor of LLL.

Intuitively, the smaller this gap is, the harder it is to find a vector "in" this gap.

Typically, when we instantiate the AGCD problem with $\lambda$ bits of security, we set $\rho = \lambda$, so that attacks on the noise terms take more than $2^\lambda$ steps, while $\eta$ is defined by the application (e.g., it depends on the depth of the circuits to be evaluated homomorphically). Then we just have to choose $\gamma$ to obtain the desired security level, and we do this by fixing a root Hermite factor, plugging it into Inequality (1), and solving for $\gamma$.

Ruling out LLL

To choose the parameters such that LLL cannot solve the AGCD problem (which then forces an attacker to use BKZ, an exponential-time algorithm), we can set the (optimistic) root Hermite factor of LLL to $2^\alpha = 1.02$. Multiplying Inequality (1) through by $t > 0$, we need $$\log_2(1.02)\, t^2 + (\rho - \eta)\, t + \gamma < 0.$$

But for this inequality to have a solution in $t$, its discriminant must be positive; thus, to rule out LLL, we can choose $\gamma$ such that the discriminant is negative, i.e.,

$$(\eta -\rho )^2 - 4 \log_2(1.02)\, \gamma < 0 \iff \gamma > \frac{(\eta -\rho )^2}{ 4 \log_2(1.02) } \approx 8.8 \cdot (\eta -\rho )^2. $$
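As a quick numeric check of the constant (a sketch; the $\eta, \rho$ values are illustrative only, not from any particular parameter set):

```python
from math import log2

eta, rho = 1632, 24  # illustrative values; the constant is what matters
print(f"1 / (4 * log2(1.02)) = {1 / (4 * log2(1.02)):.2f}")              # 8.75
print(f"rule out LLL when gamma > {(eta - rho)**2 / (4 * log2(1.02)):.2e}")  # 2.26e+07
```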

Asymptotic security claimed in DGHV10

Plugging $(\gamma, \eta, \rho) = (\lambda^5, \lambda^2, \lambda)$ into Inequality (1), we get $$\alpha t + \lambda^5 / t < \lambda^2 - \lambda \approx \lambda^2;$$ therefore we must choose $t \approx \lambda^3$ for the term $\lambda^5 / t$ to be smaller than the gap $\eta - \rho$, but then $\alpha t \approx \lambda^3$ is itself bigger than the gap. Hence, there is no dimension $t$ for which the attack works.
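To see this numerically, the following sketch instantiates the asymptotic parameters at a hypothetical $\lambda = 80$ (an assumption for illustration) and compares the gap against the smallest achievable value of $\alpha t + \gamma/t$:

```python
from math import log2, sqrt

lam = 80                               # hypothetical concrete security level
gamma, eta, rho = lam**5, lam**2, lam  # DGHV asymptotic parameters
alpha = log2(1.02)                     # optimistic LLL root Hermite factor

gap = eta - rho                        # ~ lam^2 = 6400
best = 2 * sqrt(alpha * gamma)         # minimum of alpha*t + gamma/t over t
print(f"gap = {gap}, best achievable = {best:.0f}")
# gap = 6320, best achievable = 19351 -> Inequality (1) has no solution
```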


The answer is that just because your algorithm is polynomial time doesn't mean it's fast.

The paper Algorithms for the Approximate Common Divisor Problem claims in Section 3.1 that a lattice dimension $>800$ "should be large enough to prevent any practical lattice attack".

As a completely non-rigorous yet enlightening demonstration, we can compare the complexity of a "brute-force the noise" approach with the lattice-based methods for the $(\lambda^5,\lambda^2, \lambda)$-AGCD problem.

The "brute-force the noise" approach involves $\mathcal{\tilde O}(2^{\frac{3}{2} \rho })$ multiplication/remainder operations on numbers with $\approx \gamma$ bits, where $\mathcal{\tilde O}$ is the usual notation hiding poly-logarithmic factors. So let's say it takes time $\propto \gamma \cdot 2^{\frac{3}{2} \rho } \sim \lambda^5 \cdot 2^{\frac{3}{2} \lambda }$.

The SDA lattice attack using fplll (i.e., $L^2$ lattice reduction) takes time $\mathcal{\tilde O}(\gamma^2 \cdot t^5) \sim \lambda^{25}$, where $t$ is the dimension of the lattice.

Let's compare these two asymptotic results for the choices of parameters in the paper Fully Homomorphic Encryption over the Integers with Shorter Public Keys. (See below for more details.)
At the "small" level, the time estimation for "brute-force the noise" is $6 \cdot 10^{16}$, and for SDA is $4 \cdot 10^{25}$.
At the "large" level, the time estimation for "brute-force the noise" is $7 \cdot 10^{24}$, and for SDA is $2 \cdot 10^{34}$.

So we'd need the coefficient hidden in $\mathcal{\tilde O}$ to be a billion times smaller for SDA than for "brute force" for lattice reduction to be practical, and we just don't have that.


However, I cannot justify the "rule of thumb" asserted in Fully Homomorphic Encryption over the Integers and used in Fully Homomorphic Encryption over the Integers with Shorter Public Keys that LLL takes time exponential in $t$.

Experimentally, the runtime will appear to be exponential until your sizes are large enough that the asymptotic behavior dominates.


Note that the reason for the inadequacy of LLL is not that it only gives an exponential approximation to the shortest vector. As Nguyen notes in Predicting Lattice Reduction:

The most important fact is that asymptotically, all the algorithms known seem to only achieve an exponential approximation factor as predicted by theory, but the exponentiation bases turn out to be extremely close to 1, much closer than what theory is able to prove.

Furthermore, Algorithms for the Approximate Common Divisor Problem examines this limitation and finds that the only effect is a mild increase in the number of samples (and hence the lattice dimension). This is because increasing the lattice dimension increases the size of the second-shortest (independent) vector in the lattice. If the ratio between the sizes of the target vector and the second-shortest vector is sufficiently small, then LLL will find the target vector.

We can see this concretely for the parameters in the paper Fully Homomorphic Encryption over the Integers with Shorter Public Keys.
At the "small" level, $(\gamma, \eta, \rho) = (.86 \cdot 10^5, 1632, 24)$. Ignoring the exponential approximation factor tells us we need $535$ samples. Accounting for it yields $546$ samples.
At the "large" level, $(\gamma, \eta, \rho) = (19 \cdot 10^6, 2652, 39)$. Ignoring the exponential approximation factor tells us we need $7272$ samples. Accounting for it yields $9042$ samples.

Thus, for heuristic analysis of the lattice-based algorithms applied to AGCD, we can get away with pretending that LLL solves the Shortest Vector Problem.

