Leonid Levin is known for several fundamental contributions to Complexity Theory. The most widely known is surely the notion of “universal search problem,” a concept similar to (and developed concurrently with) NP-completeness. Next, we might cite the proof of the existence of optimal “honest” algorithms for search problems (an honest algorithm uses a given verifier to check its result; expositions of this theorem can be found, among other places, here and here). This is an important result regarding the tension between speedup and the existence of optimal algorithms, and will surely be discussed here in the future.

Then, there is a third result, which seems to be less widely known, though it definitely ought to be: I will call it **The Fundamental Theorem of Complexity Theory**, a name given by Meyer and Winklmann to a theorem they published in 1979, following related work including (Meyer and Fischer, 1972) and (Schnorr and Stumpf, 1975). As with the NP story, similar ideas were being invented by Levin in the USSR at about the same time. Several years later, with Levin relocated to Boston, these lines of research united, and the ultimate, tightest, polished form of the theorem took shape, presented in a short paper by Levin and, thankfully, in a longer “complete exposition” by Seiferas and Meyer (here – my thanks to Janos Simon for pointing this paper out to me). Seiferas and Meyer did not name it *The Fundamental Theorem*, perhaps to avoid ambiguity, but I think that it does deserve a name more punchy than “a characterization of realizable space complexities” (the title of the article).

My purpose in this blog post is to give a brief impression of this result and its significance to the study of speedup phenomena. The reader who becomes sufficiently interested can turn to the complete exposition mentioned (another reason to do so is the details I omit, for instance concerning partial functions).

### Why this theorem is really fundamental

Some definitions: an *algorithm* will mean, henceforth, a Turing machine with a read-only input tape and a worktape whose alphabet is {0,1} (the machine can sense the end of the tape – so no blank necessary). *Program* also means such a machine, but emphasizes that its “code” is written out as a string. *Complexity* will mean space complexity as measured on the worktape. For a machine M, its space complexity function is denoted by S_M. A function that is the S_M of some M is known as *space-constructible*. Note that up to a certain translation overhead, results will apply to any Turing-equivalent model and a suitable complexity measure (technically, a Blum measure).
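To make the model concrete, here is a minimal sketch of such a machine in Python. This is my own toy encoding, not the paper's formalism: transitions are a dictionary, the input tape is read-only, and “space” is counted as the number of worktape cells ever visited.

```python
def run(machine, inp):
    """machine maps (state, input_symbol, work_symbol) to
    (new_state, new_work_symbol, input_move, work_move);
    returns (accepted, space_used)."""
    state, ipos, wpos = "start", 0, 0
    work = {}                              # worktape cells, default "0"
    visited = {0}
    while state not in ("accept", "reject"):
        isym = inp[ipos] if 0 <= ipos < len(inp) else "$"   # end marker
        state, wsym, di, dw = machine[(state, isym, work.get(wpos, "0"))]
        work[wpos] = wsym
        ipos += di
        wpos = max(wpos + dw, 0)           # the machine senses the tape end
        visited.add(wpos)
    return state == "accept", len(visited)

# Example: accept inputs with an even number of 1s, using one work cell.
parity = {
    ("start", "0", "0"): ("start", "0", 1, 0),
    ("start", "0", "1"): ("start", "1", 1, 0),
    ("start", "1", "0"): ("start", "1", 1, 0),
    ("start", "1", "1"): ("start", "0", 1, 0),
    ("start", "$", "0"): ("accept", "0", 0, 0),
    ("start", "$", "1"): ("reject", "1", 0, 0),
}
print(run(parity, "1101"))   # three 1s -> (False, 1)
```

The parity machine's S_M is the constant function 1 on all inputs – an (admittedly degenerate) example of a space-constructible function.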

The Seiferas-Meyer paper makes use of a variety of clever programming techniques for space-bounded Turing machines; one example is universal simulation with a constant *additive* overhead.

I will argue that this theorem formulates an (almost) ultimate answer to the following (vague) question: “what can be said about optimal algorithms and speedup in general, that is, not regarding specific problems of interest?”

Examples of things that can be said are **Blum’s speedup theorem**: there exist problems with arbitrarily large speedup; and the **Hierarchy theorem**: there do exist problems that have an optimal algorithm, at various levels of complexity. (The type of result they prove is called a *compression theorem*, which unfortunately creates confusion with the well-known tape compression theorem; the appellation *hierarchy theorem* may bring the right kind of theorem to mind.)

As shown by Seiferas and Meyer, these results, among others, can all be derived from the Fundamental Theorem, and for our chosen measure of space, the results are particularly tight: their Compression Theorem states that for any space-constructible function s, there are algorithms of that complexity that cannot be sped up by more than an *additive* constant. So, even a tiny multiplicative speedup is ruled out for these algorithms – and the algorithm may be assumed to compute a predicate (so, the size of the output is one bit and is not responsible for the complexity).

An important step to this result is the choice of a clever way to express the notion of “a problem’s complexity” (more specifically, the complexity of computing a given function). To the readership of this blog it may be clear that such a description cannot be, as one may naïvely assume, a single function that describes the complexity of a single, best algorithm for the given problem. The good answer is a so-called *complexity set*. This is the set of all functions S_M for machines M that solve the given function (Meyer and Fischer introduced a similar concept, *complexity sequence*, which is specifically intended to describe problems with speedup).

How can a complexity set be specified? Since we are talking about a set of *computable* functions here (in fact, *space-constructible*), it can be given as a set of *programs* that compute the functions. This is called a *complexity specification*. The gist of the Theorem is that it tells us which sets of programs *do* represent the complexity of something – moreover, it offers a choice of equivalent characterizations (an always-useful type of result).

Clearly, if a function can be computed in space s, it can also be computed in a larger space bound; it can also be computed in a space bound smaller by a constant (a nice exercise in TM hacking – note that we have fixed the alphabet). If s_1 and s_2 are space bounds that suffice, then min(s_1, s_2) does too (another Turing-machine programming trick). So, we can assume that a set of programs represents the *closure* of the corresponding set of functions under the above rules. We can also allow it to include programs that compute functions which are not space-constructible: they will not be space bounds themselves, but will imply that constructible functions above them are. So, even a single program can represent an infinite family of space bounds: specifically, the bounds lower-bounded (up to a constant) by the given function.
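As a toy rendering of this last point (my own illustration, with made-up names – not the paper's formalism), one can test whether a candidate bound is admitted by a specification: s qualifies if, up to an additive constant c, it lies above some function in the specification (checked here only on sampled inputs, of course).

```python
def admitted(spec, s, c=1, ns=range(1, 200)):
    """spec: list of functions N -> N; s: candidate space bound.
    True iff s dominates some listed function up to the constant c."""
    return any(all(s(n) >= f(n) - c for n in ns) for f in spec)

spec = [lambda n: n * n]                    # a single "program" computing n^2

print(admitted(spec, lambda n: n * n))      # n^2 itself: True
print(admitted(spec, lambda n: n ** 3))     # anything above it: True
print(admitted(spec, lambda n: 10 * n))     # eventually below n^2: False
```

So the one-element specification already stands for the infinite family of bounds dominating n^2, as described above.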

### A statement of the theorem (roughly) and consequences

**Theorem.** Both of the following are ways to specify all existing complexity sets:

- Sets of *programs* described by predicates (i.e., sets of the form {p : R(p)}, where R is a decidable predicate).
- Singletons.

The last item I find the most surprising. It is also very powerful. For any machine M, the fact that {M} is a complexity specification tells us that there is a function (in fact, a predicate) computable in space s **if and only if** s is at least S_M (up to an additive constant). Here is our Compression theorem!

The first characterization is important when descriptions by an infinite number of functions are considered. For example, let us prove the following:

**Theorem.** There is a decision problem, solvable in polynomial (in fact, quadratic) space, that has no best (up to constant factors) algorithm.

Proof. Let p_k be a program that computes e_k(n) = n^{1+1/k}, written so that k is just a hard-coded constant. The idea is for the set of the p_k to be recursively enumerable. Note that all these functions are constructible, that is, prospective space bounds.

Then, by the fundamental theorem, there is a decision problem solvable in space s if and only if s is at least one of these functions (up to an additive constant). If some algorithm for this problem takes at least n^{1+1/k} space, then there is also a solution in n^{1+1/(k+1)}, and so forth.
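The descending chain driving this argument can be checked numerically; a quick sketch (illustration only, using the exponents from the proof):

```python
def e(k, n):
    """The bound e_k(n) = n^(1 + 1/k) from the proof above."""
    return n ** (1 + 1 / k)

n = 10 ** 6
# Each e_{k+1} strictly undercuts e_k for large n...
assert all(e(k + 1, n) < e(k, n) for k in range(1, 10))
# ...yet every e_k is superlinear, so the chain has no smallest member:
assert all(e(k, n) > n for k in range(1, 10))
print(e(1, n), e(2, n), e(3, n))   # strictly descending values
```

Every algorithm sits above some e_k, and is then beaten by one matching e_{k+1} – hence no best algorithm, even up to constant factors.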

### Limitations and some loose ends

As exciting as I find this theorem, it has its limitations. Not all speedup-related results seem to follow from it; for instance, Levin’s other theorem, on optimal honest algorithms, doesn’t (or I couldn’t see how). Also, results like those given here and here about the measure, or topological category, of sets such as the functions that have or do not have speedup do not seem to follow. In fact, Seiferas and Meyer only prove a handful of “classic” results like Blum speedup, the Compression theorem and a Gap theorem. What new questions about complexity sets *can* be asked and answered using these techniques?

Another limitation is that for complexity measures other than space we do not have such tight results. So, for example, for Turing-machine *time* we are stuck with the not-so-tight hierarchy theorems proved by diagonalization, padding etc. (see references in the bibliography). Is this a problem with our proof methods? Or could some surprising speedup phenomenon be lurking there?

[Oct 16, 2012. Fixed error in last theorem]

### Comments

November 25, 2012 at 9:42 pm

Could you please elaborate “If some algorithm for this problem takes at least n^{1+1/k} space, then there is also a solution in n^{1+1/(1+k)}, and so forth.”?

November 26, 2012 at 8:43 am

Better stated: “since every algorithm for the problem has to be lower-bounded by some e_k = n^{1+1/k}, then, given any such algorithm, there is also an asymptotically better one, matching the lower bound e_{k+1} = n^{1+1/(1+k)}”.

November 26, 2012 at 2:05 pm

This is more clear. Thanks.
