Suppose we have a predicate $\phi:\{0,1\}^n\to \{0, 1\}$ corresponding to a self-reducible problem and an FPAUS that samples from a distribution $p$ with $\|p-u\|_{L_1}\leq \delta$, where $u$ is the uniform distribution on $\phi^{-1}(1)$. Is it possible to approximate the number of solutions $|\phi^{-1}(1)|$ within relative error $\varepsilon$ for an arbitrary $\varepsilon>0$, ignoring time complexity? Or is the smallest achievable $\varepsilon$ bounded below by (a function of) $\delta$?

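For context, when one is free to make $\delta$ as small as needed (say $\delta$ of order $\varepsilon/n$ per call), the standard self-reducibility reduction from counting to sampling looks roughly like the sketch below; my question is about the regime where $\delta$ is fixed and cannot be chosen. The interface `sample(prefix)`, returning an approximately uniform solution consistent with the already-fixed prefix bits, is hypothetical and only for illustration.

```python
from collections import Counter

def estimate_count(sample, n, samples_per_level=10_000):
    """Sketch of the standard self-reducibility reduction: write |phi^{-1}(1)|
    as a telescoping product of conditional bit probabilities and estimate
    each factor empirically from the sampler.

    `sample(prefix)` is a hypothetical FPAUS interface returning an
    (approximately) uniform solution in {0,1}^n that extends `prefix`,
    a tuple of already-fixed bits.
    """
    estimate = 1.0
    prefix = ()
    for i in range(n):
        # Estimate Pr[x_i = b | x extends prefix] and fix the majority bit b,
        # so every estimated ratio is at least about 1/2.
        counts = Counter(sample(prefix)[i] for _ in range(samples_per_level))
        b, c = counts.most_common(1)[0]
        ratio = c / samples_per_level
        estimate /= ratio            # |S_prefix| ~= |S_{prefix+(b,)}| / ratio
        prefix += (b,)
    # Once all n bits are fixed, the restricted solution set has size 1.
    return estimate
```

Each factor is at least about $1/2$, so a per-call sampling bias of $\delta$ contributes a relative error of roughly order $n\delta$ to the product, which is why the usual analysis takes $\delta$ to shrink with $n$ and $\varepsilon$.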
  • Ignoring time complexity, why not just enumerate $x\in\{0,1\}^n$, compute $\phi(x)$, and count the number of $x$ such that $\phi(x)=1$? Or is the only access to $\phi$ via the sampler? In the latter case, surely you can have no bound independent of $\delta$, because in the case that $\delta$ is large, $p$ could, say, have all its probability on just one element $x_0$ of $\phi^{-1}(1)$, so all you can learn about $\phi^{-1}(1)$ is that it contains $x_0$.
    – Neal Young
    Commented Oct 27, 2020 at 19:53
  • @NealYoung Thank you for the comment; indeed I forgot those two assumptions. First, we have access to $\phi$ only via the sampler; second, we should assume that the support of $p$ is $\phi^{-1}(1)$, i.e., all points in $\phi^{-1}(1)$ have non-zero mass under $p$. What can one do in such a case? Are there any studies that deal with reducing the relative error of the approximate counter for a fixed $\delta$ that we cannot influence? In all the texts I could find, it is assumed that we can choose a desired $\delta$ for the FPAUS (usually a function of $\varepsilon$).
    – Nocturne
    Commented Oct 27, 2020 at 19:59
  • Consider any algorithm $A$. Consider what $A$ does on input $(n, S)$, where $S$ is a sampler that can return only one string, say $\mathbf 0$. Let $T_n$ be the number of samples that $A$ takes on input $(n, S)$. Now let $S'$ be a sampler that, for each sample, returns $\mathbf 0$ with probability $1-\epsilon$ and otherwise (with probability $\epsilon$) returns an element uniformly at random from $\{0,1\}^n$. As long as $\epsilon^{-1} \gg T_n$, on input $(n, S')$, w.h.p. $A$ returns the same output as it does on $(n, S)$, so it must be wrong w.h.p. on one of the two inputs.
    – Neal Young
    Commented Oct 28, 2020 at 1:29
  • @NealYoung If I understand correctly, your counterexample constructs a distribution $p$ that has almost all its mass ($1-\varepsilon$) concentrated on one element of $\phi^{-1}(1)$ and distributes the remaining mass of $\varepsilon$ uniformly over the remaining elements. This yields a large $L_1$ distance $\delta$ between $p$ and the uniform distribution on $\phi^{-1}(1)$. My question is: given that this $L_1$ distance is fixed, what can we do to achieve an arbitrarily low relative error for the approximate counter, ignoring time complexity?
    – Nocturne
    Commented Oct 28, 2020 at 12:27
  • I'm not fixing the number of samples, exactly. My point is that any algorithm that is independent of $\delta$ and works for $S$ (where $|\phi^{-1}(1)| = 1$) must terminate on that input (where the sampler always returns the same thing) in some time, say $T_n$. Then, given an $S'$ as I describe (for small enough $\epsilon$), it will w.h.p. execute the same way. So it either fails for $S$ or w.h.p. fails for that $S'$. Hence no $\delta$-independent algorithm works on all inputs. Of course, for every $p$ as you describe, some algorithm works: sample enough to see all of $\phi^{-1}(1)$ w.h.p. (Both points are sketched in code after these comments.)
    – Neal Young
    Commented Oct 28, 2020 at 17:16
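To make the indistinguishability argument in the comments concrete, here is a small sketch (the names `S`, `S_prime`, and `transcripts_agree` are illustrative, not from the discussion above): $S$ only ever returns $\mathbf 0$, $S'$ returns $\mathbf 0$ with probability $1-\epsilon$ and a uniform string otherwise, and any procedure that takes $T \ll 1/\epsilon$ samples sees identical transcripts from the two samplers w.h.p.

```python
import random

def S(n):
    """Sampler whose support is the single solution 0^n (so |phi^{-1}(1)| = 1)."""
    return (0,) * n

def S_prime(n, eps):
    """Sampler for a predicate with phi^{-1}(1) = {0,1}^n: returns 0^n with
    probability 1 - eps, otherwise a uniform string. Its L1 distance to the
    uniform distribution on {0,1}^n is roughly 2(1 - eps) for large n."""
    if random.random() < 1 - eps:
        return (0,) * n
    return tuple(random.randint(0, 1) for _ in range(n))

def transcripts_agree(n, eps, T, trials=1_000):
    """Estimate the probability that T draws from S' are all 0^n, i.e. that the
    transcript is indistinguishable from T draws from S. For eps << 1/T this is
    close to 1, so an algorithm taking T samples must w.h.p. answer the same on
    both samplers -- and hence is badly wrong on one of them."""
    zero = (0,) * n
    agree = sum(
        all(S_prime(n, eps) == zero for _ in range(T))
        for _ in range(trials)
    )
    return agree / trials

# For example, transcripts_agree(10, 1e-4, 100) is about (1 - 1e-4)**100 ~ 0.99.
```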

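And for the last remark above ("sample enough to see all of $\phi^{-1}(1)$ w.h.p."), a minimal sketch, assuming a hypothetical `sampler()` callable whose support is exactly $\phi^{-1}(1)$. Note that the number of draws needed scales with $1/\min_x p(x)$, so the stopping rule implicitly depends on $p$ (and hence on $\delta$); it is not a $\delta$-independent algorithm.

```python
def count_by_coverage(sampler, stable_rounds=100_000):
    """Count |phi^{-1}(1)| by collecting distinct samples until no new element
    has appeared for `stable_rounds` consecutive draws. Correct w.h.p. whenever
    the support of p equals phi^{-1}(1), but the draws needed grow like
    1 / min_x p(x), so the budget depends on p.

    `sampler()` is assumed to return one draw from p as a hashable value
    (e.g. a tuple of bits)."""
    seen = set()
    since_last_new = 0
    while since_last_new < stable_rounds:
        x = sampler()
        if x in seen:
            since_last_new += 1
        else:
            seen.add(x)
            since_last_new = 0
    return len(seen)
```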