There are many schemes for fully homomorphic encryption over the integers whose security is based on the intractability of the Approximate GCD (AGCD) problem.
The paper "Algorithms for the Approximate Common Divisor Problem" surveys several lattice-reduction-based algorithms for solving AGCD. All of them require $t > \frac{\gamma - \eta}{\eta - \rho}$ samples to solve the $(\gamma, \eta, \rho)$-AGCD problem by reducing a particular $t \times t$ lattice; here $\gamma$, $\eta$ and $\rho$ are the bit lengths of the samples $x_i = p q_i + r_i$, of the hidden divisor $p$, and of the noise terms $r_i$, respectively.
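To make the setup concrete, here is a minimal sketch of such an attack, assuming the simultaneous-Diophantine-approximation (SDA) style basis discussed in that survey and the fpylll Python bindings; the toy parameter values and variable names are my own, and the recovery step can fail for unlucky samples:

```python
# Toy sketch of the t x t lattice attack described above, using an SDA-style
# basis. Assumes the fpylll bindings are installed (pip install fpylll);
# the parameters below are illustrative, not DGHV-sized.
import random

from fpylll import IntegerMatrix, LLL

rho, eta, gamma = 5, 60, 150            # toy (gamma, eta, rho) parameters
t = (gamma - eta) // (eta - rho) + 4    # dimension bound from the survey, plus slack

# AGCD samples x_i = p*q_i + r_i: p has eta bits, q_i gamma-eta bits, r_i rho bits.
p = random.getrandbits(eta) | (1 << (eta - 1)) | 1
x = [p * random.getrandbits(gamma - eta) + random.getrandbits(rho)
     for _ in range(t)]

# SDA basis: first row (2^rho, x_1, ..., x_{t-1}); row i>0 has -x_0 on the diagonal.
# Then (q_0, ..., q_{t-1}) * B = (q_0*2^rho, q_0*x_1 - q_1*x_0, ...), which is
# unusually short because q_0*x_i - q_i*x_0 = q_0*r_i - q_i*r_0.
B = IntegerMatrix(t, t)
B[0, 0] = 2 ** rho
for i in range(1, t):
    B[0, i] = x[i]
    B[i, i] = -x[0]

LLL.reduction(B)                        # polynomial-time lattice reduction

q0 = abs(B[0, 0]) >> rho                # a short vector starts with ±q_0 * 2^rho
print("recovered p:", x[0] // q0 if q0 else "reduction failed")
print("actual    p:", p)
```

At these toy sizes the reduction step finishes essentially instantly, which is part of what makes the runtime claims below puzzling.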
In Section 5.2 of "Fully Homomorphic Encryption over the Integers", the authors state a "rule of thumb" that lattice reduction algorithms acting on a $t \times t$ lattice take time roughly $2^{t/k}$ to output a $2^k$ approximation of the shortest vector.
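To make the scaling concrete (my reading of the rule, not a computation from the paper): plugging in $k = \sqrt{t}$ gives
$$2^{t/k} = 2^{t/\sqrt{t}} = 2^{\sqrt{t}},$$
so under this rule even a rather coarse $2^{\sqrt{t}}$ approximation of the shortest vector already costs time exponential in $\sqrt{t}$.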
However, LLL lattice reduction is known to run in polynomial time in both the dimension of the lattice and the bit size of its entries. Concretely, for the DGHV scheme, where $(\gamma, \eta, \rho) \sim (\lambda^5, \lambda^2, \lambda)$, a quick estimate suggests that $L^2$ lattice reduction can break DGHV in time $\lambda^{25}$, with memory $\lambda^{11}$, given $\lambda^{3}$ samples.
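For transparency, here is where those numbers come from, assuming the standard complexity bound for the Nguyen–Stehlé $L^2$ algorithm of $O\!\left(d^5 (d + \log B) \log B\right)$ bit operations for a $d$-dimensional basis whose entries have at most $\log B$ bits:
$$d = t \approx \frac{\gamma - \eta}{\eta - \rho} \approx \lambda^3, \qquad \log B \approx \gamma = \lambda^5,$$
$$d^5 (d + \log B) \log B \approx \lambda^{15} \cdot \lambda^5 \cdot \lambda^5 = \lambda^{25}, \qquad d^2 \log B \approx \lambda^{11} \text{ bits of memory.}$$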
So why does the aforementioned paper treat the runtime of lattice attacks on AGCD as exponential?