
Why is Approximate GCD a hard problem?

robertkin

There are many Fully Homomorphic Encryption over the Integers schemes whose security is based on the intractability of the Approximate GCD (AGCD) problem.

The paper Algorithms for the Approximate Common Divisor Problem surveys several lattice-reduction-based algorithms for solving AGCD. All of them require $t > \frac{\gamma - \eta}{\eta - \rho}$ samples to solve the $(\gamma, \eta, \rho)$-AGCD problem by reducing a particular $t \times t$ lattice.
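For concreteness, a $(\gamma, \eta, \rho)$-AGCD instance consists of samples $x_i = p q_i + r_i$, where $p$ is an $\eta$-bit secret, each $x_i$ has roughly $\gamma$ bits, and the noise satisfies $|r_i| < 2^\rho$. The toy generator below is my own sketch (not code from either paper) of such an instance together with the sample bound above:

```python
import secrets

def agcd_samples(gamma, eta, rho, t):
    """Generate t samples x_i = p*q_i + r_i of a (gamma, eta, rho)-AGCD
    instance: p is an odd eta-bit secret, q_i has (gamma - eta) bits,
    and the noise r_i has at most rho bits."""
    p = secrets.randbits(eta - 1) | (1 << (eta - 1)) | 1  # odd, exactly eta bits
    xs = [p * secrets.randbits(gamma - eta) + secrets.randbits(rho)
          for _ in range(t)]
    return p, xs

def min_samples(gamma, eta, rho):
    """Smallest integer t with t > (gamma - eta) / (eta - rho),
    the sample count the surveyed lattice attacks need."""
    return (gamma - eta) // (eta - rho) + 1

# Toy parameters, far too small for security: (gamma, eta, rho) = (200, 50, 10).
t = min_samples(200, 50, 10)
p, xs = agcd_samples(200, 50, 10, t)
```

Since $r_i < 2^\rho < p$, each sample satisfies `x % p == r_i`, which is what makes the instance "approximate" rather than an exact GCD computation.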

In Section 5.2 of Fully Homomorphic Encryption over the Integers, the authors state a "rule of thumb" that lattice reduction algorithms acting on a $t \times t$ lattice take time roughly $2^{t/k}$ to output a $2^k$ approximation of the shortest vector.
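Plugging the DGHV parameter shape $(\gamma, \eta, \rho) = (\lambda^5, \lambda^2, \lambda)$ into these two constraints makes the numbers concrete. The arithmetic below is my own back-of-the-envelope calculation, not taken from either paper:

```python
# Back-of-the-envelope numbers, assuming the DGHV parameter shape
# (gamma, eta, rho) = (l^5, l^2, l) for security parameter l.
def required_dim(l):
    """Lattice dimension t > (gamma - eta)/(eta - rho), roughly l^3."""
    gamma, eta, rho = l**5, l**2, l
    return (gamma - eta) // (eta - rho) + 1

def rule_of_thumb_log2_time(t, k):
    """log2 of the 2^(t/k) rule-of-thumb runtime for a 2^k approximation
    of the shortest vector in a t x t lattice."""
    return t / k

l = 10                     # toy security parameter
t = required_dim(l)        # 1111, i.e. roughly l^3
log2_time = rule_of_thumb_log2_time(t, 40)
```

Since $t$ grows like $\lambda^3$, the rule of thumb makes $2^{t/k}$ exponential in $\lambda$ unless the tolerated approximation exponent $k$ itself grows almost as fast as $t$.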

However, LLL lattice reduction is known to run in polynomial time with respect to both the dimension of the lattice and the input sizes. Concretely, for the DGHV scheme, where $(\gamma, \eta, \rho) \approx (\lambda^5, \lambda^2, \lambda)$, a quick estimate yields that $L^2$ lattice reduction can break DGHV in time $\lambda^{25}$ with memory $\lambda^{11}$, given $\lambda^{3}$ samples.
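These time and memory estimates can be reproduced by exponent bookkeeping, assuming the commonly cited $O(d^5 (d + \beta)\beta)$ bit-operation bound for $L^2$ on a full-rank $d$-dimensional basis with $\beta$-bit entries (this is my own sketch of the estimate, not a computation from the paper):

```python
# Exponent bookkeeping in powers of lambda, assuming the L^2 cost bound
# O(d^5 * (d + beta) * beta) bit operations for a full-rank d-dimensional
# basis with beta-bit entries.
d_exp = 3        # lattice dimension t ~ lambda^3 (the required sample count)
beta_exp = 5     # entry size ~ gamma = lambda^5 bits

# time: d^5 * (d + beta) * beta  ->  5*3 + max(3, 5) + 5 = 25
time_exp = 5 * d_exp + max(d_exp, beta_exp) + beta_exp

# memory: d^2 basis entries of beta bits each  ->  2*3 + 5 = 11
mem_exp = 2 * d_exp + beta_exp
```

Under those assumptions the exponents come out to $\lambda^{25}$ time and $\lambda^{11}$ memory, matching the quick estimate above.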

So why does the aforementioned paper treat the runtime of lattice attacks on AGCD as exponential?