Pseudorandomness When the Odds Are Against You

Sergei Artemenko, Russell Impagliazzo, Valentine Kabanets, and Ronen Shaltiel


Abstract

Impagliazzo and Wigderson (1997) showed that if $E=DTIME(2^{O(n)})$ requires size $2^{\Omega(n)}$ circuits, then every time $T$ constant-error randomized algorithm can be simulated deterministically in time $poly(T)$. However, such a polynomial slowdown is a deal breaker when $T=2^{\alpha \cdot n}$, for a constant $\alpha>0$, as is the case for some randomized algorithms for NP-complete problems. Paturi and Pudlak (2010) observed that many such algorithms are obtained from randomized time $T$ algorithms, for $T\leq 2^{o(n)}$, with large one-sided error $1-\epsilon$, for $\epsilon=2^{-\alpha \cdot n}$, that are repeated $1/\epsilon$ times to yield a constant-error randomized algorithm running in time $T/\epsilon=2^{(\alpha+o(1)) \cdot n}$.
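
For concreteness, the calculation behind this observation is the standard one-sided amplification bound: all $1/\epsilon$ independent repetitions fail with probability at most a constant, while the running time grows by a factor of $1/\epsilon$: \[ (1-\epsilon)^{1/\epsilon} \le e^{-1}, \qquad T \cdot \frac{1}{\epsilon} = 2^{o(n)} \cdot 2^{\alpha \cdot n} = 2^{(\alpha+o(1)) \cdot n}. \]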

We show that if E requires size $2^{\Omega(n)}$ nondeterministic circuits, then there is a $poly(n)$-time $\epsilon$-HSG (Hitting-Set Generator) $H\colon\{0,1\}^{O(\log n) + \log(1/\epsilon)} \to \{0,1\}^n$, implying that time $T$ randomized algorithms with one-sided error $1-\epsilon$ can be simulated in deterministic time $poly(T)/\epsilon$. In particular, under this hardness assumption, the fastest known constant-error randomized algorithm for $k$-SAT, for $k\ge 4$, by Paturi, Pudlak, Saks, and Zane (2005) can be made deterministic with essentially the same time bound. This is the first hardness versus randomness tradeoff for algorithms for NP-complete problems. We address the necessity of our assumption by showing that HSGs with very low error imply hardness for nondeterministic circuits with ``few'' nondeterministic bits.
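
As an illustration of how such an $\epsilon$-HSG yields the claimed derandomization, the following sketch (in Python, with hypothetical stand-ins: A is a one-sided-error randomized algorithm that never accepts a ``no'' instance, and H is the hitting-set generator) enumerates all $2^{O(\log n)+\log(1/\epsilon)} = poly(n)/\epsilon$ seeds and accepts iff some output of H makes A accept; ``yes'' instances are caught by the hitting property, and ``no'' instances are never accepted because the error is one-sided.

from itertools import product

def derandomize_with_hsg(A, H, x, seed_len):
    # A(x, r): randomized algorithm with one-sided error, using random string r;
    #          it never accepts when x is a 'no' instance.
    # H(seed): hitting-set generator mapping a seed (a tuple of bits) to an n-bit string.
    # Try all 2^seed_len seeds; with seed_len = O(log n) + log(1/epsilon),
    # this multiplies A's running time by only poly(n)/epsilon.
    for seed in product((0, 1), repeat=seed_len):
        if A(x, H(seed)):
            return True   # some pseudorandom string hits A's accepting set
    return False          # safe to reject, since the error is one-sided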

Applebaum et al. (2015) showed that ``black-box techniques'' cannot achieve $poly(n)$-time computable $\epsilon$-PRGs (Pseudo-Random Generators) for $\epsilon=n^{-\omega(1)}$, even if we assume hardness against circuits with oracle access to an arbitrary language in the polynomial time hierarchy. We introduce weaker variants of PRGs with relative error that do follow from the latter hardness assumption. Specifically, we say that a function $G\colon\{0,1\}^r \to \{0,1\}^n$ is an $(\epsilon,\delta)$-re-PRG for a circuit $C$ if \[ (1-\epsilon) \cdot \Pr[C(U_n)=1] - \delta \le \Pr[C(G(U_r))=1] \le (1+\epsilon) \cdot \Pr[C(U_n)=1] + \delta. \] We construct $poly(n)$-time computable $(\epsilon,\delta)$-re-PRGs with arbitrary polynomial stretch, $\epsilon=n^{-O(1)}$ and $\delta=2^{-n^{\Omega(1)}}$. We also construct PRGs with relative error that fool non-boolean distinguishers (in the sense introduced by Dubrov and Ishai (2006)).
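
To make the definition concrete, the following brute-force check (a minimal Python sketch; C, G, r, n, eps, and delta are placeholders for a distinguisher, a candidate generator, and toy parameters, not objects from the paper) computes both acceptance probabilities exhaustively and tests the $(\epsilon,\delta)$-re-PRG condition.

from itertools import product

def is_re_prg_for(C, G, r, n, eps, delta):
    # C : {0,1}^n -> {0,1} and G : {0,1}^r -> {0,1}^n, given as Python functions on bit tuples.
    # Exhaustive enumeration is only feasible for toy parameters; the point is the inequality.
    p_uniform = sum(C(x) for x in product((0, 1), repeat=n)) / 2 ** n
    p_prg = sum(C(G(s)) for s in product((0, 1), repeat=r)) / 2 ** r
    return (1 - eps) * p_uniform - delta <= p_prg <= (1 + eps) * p_uniform + delta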

Our techniques use ideas from [PP10,TV00,AASY15]. Common themes in our proofs are ``composing'' a PRG/HSG with a combinatorial object such as a disperser or an extractor, and the use of nondeterministic reductions in the spirit of Feige and Lund (1997).

