Hardness amplification via space-efficient direct products

Venkatesan Guruswami and Valentine Kabanets


We prove a version of the derandomized Direct Product Lemma for deterministic space-bounded algorithms. Suppose a Boolean function g: {0,1}^n → {0,1} cannot be computed on more than a 1-δ fraction of inputs by any deterministic time T and space S algorithm, where δ ≤ 1/t for some parameter t. Then, for t-step walks w = (v_1, ..., v_t) in some explicit d-regular expander graph on 2^n vertices, the function g'(w) := g(v_1)...g(v_t) cannot be computed on more than a 1-Ω(tδ) fraction of inputs by any deterministic time T/(d^t poly(n)) and space S-O(t) algorithm. As an application, by iterating this construction, we get a deterministic linear-space "worst-case to constant average-case" hardness amplification reduction, as well as a family of logspace encodable/decodable error-correcting codes that can correct up to a constant fraction of errors. Logspace encodable/decodable codes (with linear-time encoding and decoding) were previously constructed by Spielman. Our codes have weaker parameters (encoding length is polynomial, rather than linear), but have a conceptually simpler construction. The proof of our Direct Product Lemma is inspired by Dinur's remarkable recent proof of the PCP theorem by gap amplification using expanders.
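To make the construction concrete, here is a minimal sketch of the derandomized direct product g'(w) = g(v_1)...g(v_t), where w ranges over t-step walks in a d-regular graph. The small 3-regular "bit-flip" graph below is only a stand-in for the explicit expander family the lemma requires, and the names (`neighbors`, `walks`, `direct_product`) are illustrative, not from the paper.

```python
from itertools import product

n = 3            # input length; the vertex set is {0,1}^n, so 2^n = 8 vertices
N = 1 << n

def neighbors(v):
    # A 3-regular graph on {0,1}^n: flip one of the n bits of v.
    # (A stand-in for an explicit expander; a real instantiation would
    # use a graph with constant degree and constant spectral gap.)
    return [v ^ (1 << i) for i in range(n)]

def g(v):
    # Some Boolean function on {0,1}^n (here: parity of the bits of v).
    return bin(v).count("1") % 2

def walks(t):
    # Enumerate all t-step walks (v_1, ..., v_t); there are N * d^(t-1)
    # of them, versus N^t arbitrary t-tuples -- the point of derandomizing.
    result = [(v,) for v in range(N)]
    for _ in range(t - 1):
        result = [w + (u,) for w in result for u in neighbors(w[-1])]
    return result

def direct_product(w):
    # g'(w) := g(v_1) g(v_2) ... g(v_t), the concatenation of g along the walk.
    return tuple(g(v) for v in w)

t = 4
all_walks = walks(t)
print(len(all_walks))               # N * 3^(t-1) = 8 * 27 = 216 walks
print(direct_product(all_walks[0]))
```

Note that the domain of g' has size N·d^(t-1) = 2^n · d^(t-1), only polynomially larger than 2^n for constant d and t = O(log n), whereas the plain direct product over all t-tuples would blow the domain up to 2^(nt).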