### Valentine Kabanets: Research Interests

#### Computational Complexity

- Derandomization
- Pseudorandomness
- Circuit Complexity
- Cryptography

Can randomness be eliminated from efficient computation? Is BPP = P? Is NEXP ≠ BPP? Can we derandomize space-bounded algorithms? Is RL = L?

For many interesting and useful
combinatorial properties, a *randomly* chosen
combinatorial object has the desired property with high
probability. How can we construct such objects
efficiently and *deterministically*?
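A toy illustration of this phenomenon (an illustrative sketch, not part of the page): a uniformly random bipartition of a graph's vertices cuts each edge with probability 1/2, so in expectation it cuts half the edges. Finding a comparably good object *deterministically* is exactly the kind of question derandomization asks.

```python
import random

def random_cut_value(edges, n, trials=2000):
    """Estimate the expected number of edges cut by a uniformly random
    bipartition of n vertices. Each edge is cut with probability 1/2,
    so the true expectation is len(edges) / 2."""
    total = 0
    for _ in range(trials):
        side = [random.randrange(2) for _ in range(n)]
        total += sum(1 for u, v in edges if side[u] != side[v])
    return total / trials

# Complete graph on 8 vertices: 28 edges, so the expected cut value is 14.
n = 8
edges = [(u, v) for u in range(n) for v in range(u + 1, n)]
print(random_cut_value(edges, n))  # close to 14
```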

Most Boolean functions
require exponential circuit complexity. Can we prove that some
*natural* functions require large circuits? Is NEXP ⊆ P/poly?
Can we show lower
bounds for restricted classes of circuits? Is NEXP ⊆ TC^{0}?
Why is proving circuit lower bounds so difficult?
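The first claim follows from Shannon's counting argument: there are 2^{2^n} Boolean functions on n inputs, but far fewer small circuits, so some (in fact, most) functions need large circuits. A rough numerical sketch (the constant 4 gate types and the exact form of the bound are illustrative assumptions, not from the page):

```python
from math import log2

def functions_log2(n):
    """log2 of the number of Boolean functions on n inputs: 2^n."""
    return 2 ** n

def circuits_log2(n, s):
    """Crude upper bound on log2 of the number of circuits with s gates
    over n inputs: each gate picks one of (assumed) 4 gate types and
    two inputs from among the n variables and s gates."""
    return s * log2(4 * (n + s) ** 2)

n = 20
s = 2 ** n // (10 * n)  # circuits of size 2^n / (10n)
# There are too few circuits of this size to compute every n-input
# function, so some function requires larger circuits:
print(circuits_log2(n, s) < functions_log2(n))  # True
```

The argument is non-constructive, which is the point of the questions above: it shows hard functions exist without exhibiting an explicit one.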

What kind of computational hardness do we need for cryptography? How can we increase the security of various cryptographic primitives?

#### Current Projects

- Meta-algorithms versus circuit lower bounds
- Role of randomness in efficient computation

The main goal of computational complexity is to understand what is efficiently computable. Given a computational problem, there are two obvious directions to try: design an efficient algorithm (upper bound), or prove that no such efficient algorithm exists (lower bound). Traditionally, designing efficient algorithms is the subject of the theory of algorithms, while lower bounds are sought in complexity theory. It turns out, however, that there is a deep connection between the two directions: better algorithms (for a certain class of problems) also yield strong lower bounds (for related problems), and, vice versa, strong lower bounds translate into more efficient algorithms. Intuitively, such a connection between upper and lower bounds must exist. Either getting an efficient algorithm or proving a strong lower bound provides insight into the nature of efficient computation, and hence into both the power of efficient algorithms (upper bounds) and their limitations (lower bounds).

The goal of this research direction is to gain a better understanding of this connection, and to exploit it to obtain new results in both algorithms and lower bounds.

Randomness has become an indispensable tool in many areas of computer science: algorithm design, networking, error-correcting codes, cryptography, etc. While the use of randomness is unavoidable for certain applications (e.g., cryptographic constructions), other applications can be made completely deterministic. One of the main challenges of theoretical computer science is to understand the exact power of randomness in computation. In particular, it is a big open question whether every randomized polynomial-time algorithm has an equivalent deterministic polynomial-time algorithm (i.e., can be efficiently derandomized). Answering this question is important for algorithm design, as fast deterministic algorithms are usually preferable to randomized ones. On the other hand, there is a deep connection between trying to derandomize probabilistic algorithms and trying to prove lower bounds on the computational power of Boolean circuits. The latter problem has been actively researched for several decades with the goal of resolving the "P vs. NP" question. However, no strong circuit lower bounds are known for any explicit Boolean function.
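As a concrete toy example of derandomization (an illustrative sketch, not a method discussed on this page): the trivial randomized algorithm that cuts half of a graph's edges in expectation by assigning each vertex a random side can be made fully deterministic by the classic method of conditional expectations, placing one vertex at a time on the side that does not decrease the conditional expected cut value.

```python
def derandomized_cut(edges, n):
    """Deterministically find a bipartition cutting >= len(edges)/2
    edges, via the method of conditional expectations: edges to
    unplaced neighbors contribute 1/2 in expectation either way, so
    only edges to already-placed neighbors matter at each step."""
    side = {}
    for v in range(n):
        # Edges cut if v goes on side 0 (placed neighbors on side 1):
        gain0 = sum(1 for (a, b) in edges
                    if (a == v and side.get(b) == 1)
                    or (b == v and side.get(a) == 1))
        # Edges cut if v goes on side 1 (placed neighbors on side 0):
        gain1 = sum(1 for (a, b) in edges
                    if (a == v and side.get(b) == 0)
                    or (b == v and side.get(a) == 0))
        side[v] = 0 if gain0 >= gain1 else 1
    cut = sum(1 for (a, b) in edges if side[a] != side[b])
    return side, cut

# Complete graph on 8 vertices (28 edges): the deterministic cut is
# guaranteed to have value at least 14.
edges = [(u, v) for u in range(8) for v in range(u + 1, 8)]
print(derandomized_cut(edges, 8)[1])
```

This one example is easy; the open question above is whether *every* randomized polynomial-time algorithm admits such a deterministic counterpart.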

The goal of this research direction is to explore various aspects of the basic question: to what extent is randomness useful in computation?
