Hardness amplification via space-efficient direct products
Venkatesan Guruswami and Valentine Kabanets
Abstract
We prove a version of the derandomized Direct Product Lemma for deterministic space-bounded algorithms. Suppose a Boolean function g:{0,1}^{n} → {0,1} cannot be computed on more than a 1-δ fraction of inputs by any deterministic time T and space S algorithm, where δ ≤ 1/t for some parameter t. Then, for t-step walks w=(v_{1}, ..., v_{t}) in some explicit d-regular expander graph on 2^{n} vertices, the function g'(w):= g(v_{1}) ... g(v_{t}) cannot be computed on more than a 1-Ω(tδ) fraction of inputs by any deterministic time T/d^{t}-poly(n) and space S-O(t) algorithm. As an application, by iterating this construction, we get a deterministic linear-space "worst-case to constant average-case" hardness amplification reduction, as well as a family of logspace encodable/decodable error-correcting codes that can correct up to a constant fraction of errors. Logspace encodable/decodable codes (with linear-time encoding and decoding) were previously constructed by Spielman. Our codes have weaker parameters (encoding length is polynomial, rather than linear), but have a conceptually simpler construction. The proof of our Direct Product Lemma is inspired by Dinur's remarkable recent proof of the PCP theorem by gap amplification using expanders.
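To make the construction concrete, the following is a minimal toy sketch (not the paper's construction) of the derandomized direct product over expander walks. It uses a small 3-regular graph on Z_p with neighbors x-1, x+1, and x^{-1}, a standard explicit expander family, in place of the paper's d-regular expander on 2^{n} vertices, and a placeholder parity function standing in for the hard function g. The names `neighbors`, `direct_product`, and `walks` are illustrative, not from the paper.

```python
from itertools import product

p = 11  # toy vertex set Z_11 standing in for {0,1}^n

def neighbors(x):
    # 3-regular expander on Z_p: edges to x-1, x+1, and x^{-1} (0 -> 0).
    inv = pow(x, p - 2, p) if x != 0 else 0
    return [(x - 1) % p, (x + 1) % p, inv]

def g(v):
    # Placeholder for the hard Boolean function g:{0,1}^n -> {0,1}.
    return bin(v).count("1") % 2

def direct_product(walk):
    # g'(w) := g(v_1) ... g(v_t): the t output bits along the walk w.
    return tuple(g(v) for v in walk)

def walks(t):
    # Enumerate all t-step walks (v_1, ..., v_t): a start vertex plus
    # t-1 edge labels in {0,1,2}, so there are p * 3^(t-1) walks in all.
    # The input length of g' is thus only n + (t-1)*log(d) bits,
    # versus t*n bits for the plain t-fold direct product.
    for start in range(p):
        for labels in product(range(3), repeat=t - 1):
            w = [start]
            for lab in labels:
                w.append(neighbors(w[-1])[lab])
            yield tuple(w)

t = 3
all_walks = list(walks(t))
print(len(all_walks))               # p * 3^(t-1) = 11 * 9 = 99
print(direct_product(all_walks[0])) # the 3 output bits of g' on one walk
```

The point of walking on an expander rather than sampling t independent inputs is exactly this savings in input length, which is what makes the space-efficient iteration in the paper possible.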
Versions
- journal version in Computational Complexity 17:475-500, 2008.
- extended abstract in Proceedings of the Seventh Latin American Symposium on Theoretical Informatics (LATIN'06), pages 556-568, 2006.