A Fundamental Theorem on Optimality and Speedup (guest post by Amir Ben-Amram)

Leonid Levin is known for several fundamental contributions to Complexity Theory. The most widely known is surely the notion of “universal search problem,” a concept similar to (and developed concurrently with) NP-completeness. Next, we might cite the proof of the existence of optimal “honest” algorithms for search problems (an honest algorithm uses a given verifier to check its result; expositions of this theorem can be found, among other places, here and here). This is an important result regarding the tension between speedup and the existence of optimal algorithms, and it will surely be discussed here in the future.

Then, there is a third result, which seems to be less widely known, though it definitely ought to be: I will call it The Fundamental Theorem of Complexity Theory, a name given by Meyer and Winklmann to a theorem they published in 1979, following related work including (Meyer and Fischer, 1972) and (Schnorr and Stumpf, 1975). As with the NP story, similar ideas were being invented by Levin in the USSR at about the same time. Several years later, with Levin relocated to Boston, these lines of research united, and the ultimate, tightest, polished form of the theorem emerged; it was presented in a short paper by Levin and, thankfully, in a longer “complete exposition” by Seiferas and Meyer (here – my thanks to Janos Simon for pointing this paper out to me). Seiferas and Meyer did not name it The Fundamental Theorem, perhaps to avoid ambiguity, but I think that it does deserve a name more punchy than “a characterization of realizable space complexities” (the title of the article).

My purpose in this blog post is to give a brief impression of this result and its significance to the study of speedup phenomena. The reader who becomes sufficiently interested can turn to the complete exposition mentioned above (another reason to do so is to see the details I omit, for instance concerning partial functions).

Why this theorem is really fundamental

Some definitions: An algorithm will mean, henceforth, a Turing machine with a read-only input tape and a worktape whose alphabet is \{0,1\} (the machine can sense the end of the tape – so no blank symbol is necessary). Program also means such a machine, but emphasizes that its “code” is written out as a string. Complexity will mean space complexity as measured on the worktape. For a machine e, its space complexity function is denoted by S_e. A function that is the S_e of some e is known as space-constructible.
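To make the model concrete, here is a minimal Python sketch (mine, not from any of the papers discussed) of such a machine together with its space measure; the names and the transition encoding are invented for illustration, and the end-of-tape convention is simplified.

```python
# A minimal sketch of the machine model used in this post: a read-only input
# tape, a binary worktape, and "space" = number of worktape cells ever visited.
# All names (Transition, run, ...) are illustrative, not a standard API.
from typing import Dict, Tuple

# (state, input symbol, work symbol) -> (new state, new work symbol,
#  input-head move, work-head move); moves are -1, 0 or +1.
Transition = Dict[Tuple[str, str, str], Tuple[str, str, int, int]]

def run(delta: Transition, start: str, accept: str, x: str, max_steps: int = 10**6):
    """Run the machine on input x; return (accepted, worktape cells used)."""
    state, in_pos, work_pos = start, 0, 0
    work = ["0"]                 # worktape over {0,1}, grown on demand
    max_cell = 0                 # rightmost worktape cell visited so far
    for _ in range(max_steps):
        if state == accept:
            return True, max_cell + 1
        in_sym = x[in_pos] if 0 <= in_pos < len(x) else "$"   # "$": end marker
        key = (state, in_sym, work[work_pos])
        if key not in delta:
            return False, max_cell + 1   # no transition: halt and reject
        state, new_sym, d_in, d_work = delta[key]
        work[work_pos] = new_sym
        in_pos = min(max(in_pos + d_in, -1), len(x))
        work_pos = max(work_pos + d_work, 0)
        if work_pos == len(work):
            work.append("0")
        max_cell = max(max_cell, work_pos)
    raise RuntimeError("step budget exceeded")

# S_e(n) would then be the maximum, over inputs x of length n, of the space
# reported by run(...) for the machine e = (delta, start, accept).
```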

Note that up to a certain translation overhead, results will apply to any Turing-equivalent model and a suitable complexity measure (technically, a Blum measure).

The Seiferas-Meyer paper makes use of a variety of clever programming techniques for space-bounded Turing machines; one example is universal simulation with a constant additive overhead.

I will argue that this theorem formulates an (almost) ultimate answer to the following (vague) question: “what can be said about optimal algorithms and speedup in general, that is, not regarding specific problems of interest?”

Examples of things that can be said are Blum’s speedup theorem (there exist problems with arbitrarily large speedup) and the Hierarchy theorem (there do exist problems that have an optimal algorithm, at various levels of complexity). The type of result the latter refers to is called a compression theorem, which unfortunately creates confusion with the well-known tape compression theorem; the appellation hierarchy theorem may bring the right kind of theorem to mind.
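For concreteness, one textbook formulation of Blum’s speedup theorem, specialized here to the space measure (my phrasing, not a quotation from the post or the papers), reads as follows:

```latex
% Blum's speedup theorem, specialized to the space measure S_e.
% r is an arbitrary total computable function of two arguments.
\[
  \forall r \;\exists \text{ computable predicate } f \;
  \forall i \text{ computing } f \;
  \exists j \text{ computing } f :\quad
  r\bigl(n, S_j(n)\bigr) \,\le\, S_i(n)
  \quad \text{for almost every } n.
\]
```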

As shown by Seiferas and Meyer, these results, among others, can all be derived from the Fundamental Theorem, and for our chosen measure of space, the results are particularly tight: their Compression Theorem states that for any space-constructible function S, there are algorithms of that complexity that cannot be sped up by more than an additive constant. So, even a tiny (1-\varepsilon) multiplicative speedup is ruled out for these algorithms – and the algorithm may be assumed to compute a predicate (so the size of the output is one bit and is not responsible for the complexity).

An important step towards this result is the choice of a clever way to express the notion of “a problem’s complexity” (more specifically, the complexity of computing a given function). To the readership of this blog it may be clear that such a description cannot be, as one may naïvely assume, a single function that describes the complexity of a single, best algorithm for the given problem. The right answer is a so-called complexity set. This is the set of all functions S_e for machines e that compute the given function (Meyer and Fischer introduced a similar concept, the complexity sequence, which is specifically intended to describe problems with speedup).
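In symbols (the notation \mathcal{C}_f is mine, not the paper’s), the complexity set of a function f is simply

```latex
% The complexity set of a function f: the space-complexity functions of all
% machines that compute f (notation mine).
\[
  \mathcal{C}_f \;=\; \bigl\{\, S_e \;\bigm|\; \text{machine } e \text{ computes } f \,\bigr\}.
\]
```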

How can a complexity set be specified? Since we are talking about a set of computable functions here (in fact, space-constructible), it can be given as a set \cal E of programs that compute the functions. This is called a complexity specification. The gist of the Theorem is that it tells us which sets of programs do represent the complexity of something – moreover, it offers a choice of equivalent characterizations (an always-useful type of result).

Clearly, if a function f can be computed in space S, it can also be computed in a larger space bound; it can also be computed in a space bound smaller by a constant (a nice exercise in TM hacking – note that we have fixed the alphabet). If S_1 and S_2 are space bounds that suffice, then \min(S_1,S_2) does too (another Turing-machine programming trick). So, we can assume that a set of programs \cal E represents the closure of the corresponding set of functions under the above rules. We can also allow it to include programs that compute functions which are not space-constructible: they will not be space bounds themselves, but will imply that constructible functions above them are. So, even a single program can represent an infinite family of space bounds: specifically, the bounds S_e lower-bounded (up to a constant) by the given function.
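Here is a small Python sketch of these closure rules, with each bound represented simply as a function of the input length; it mirrors only the bookkeeping, not the Turing-machine constructions that justify the rules, and all the names are made up for illustration.

```python
# The closure rules on space bounds, sketched with each bound represented as
# an ordinary Python function of the input length n.  This mirrors only the
# bookkeeping; the real content lies in the Turing-machine constructions
# alluded to above.
import math

def min_bound(s1, s2):
    """If S1 and S2 are both sufficient space bounds, so is min(S1, S2)."""
    return lambda n: min(s1(n), s2(n))

def minus_constant(s, c):
    """If S suffices, so does S - c (never below zero), for any constant c."""
    return lambda n: max(s(n) - c, 0)

def dominates(s, t, c, sample=range(1, 200)):
    """Finite-sample check that S >= T - c pointwise, i.e. that a bound T
    already in the specification also admits S (up to the constant c)."""
    return all(s(n) >= t(n) - c for n in sample)

# Example: a quadratic bound and an n*log(n) bound, combined by the rules.
quad = lambda n: n * n
nlogn = lambda n: n * max(1, math.ceil(math.log2(n)))
combined = min_bound(quad, nlogn)
assert dominates(quad, combined, 0)     # the larger bound is still admitted
```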

A statement of the theorem (roughly) and consequences

Theorem. Both of the following are ways to specify all existing complexity sets:

  • Sets \cal E of programs described by \Sigma_2 predicates (i.e., {\cal E} = \{ e \mid \exists a \forall b \, P(a,b,e)\}, where P is a decidable predicate).
  • Singletons.

The last item I find the most surprising. It is also very powerful. For any machine e, the fact that \{e\} is a complexity specification tells us that there is a function (in fact, a predicate) computable in space S if and only if S \ge S_e - O(1). Here is our Compression theorem!
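Spelled out in symbols (my paraphrase, with S ranging over space-constructible bounds), the singleton case says that for every machine e there is a predicate whose space complexity is pinned to S_e up to an additive constant:

```latex
% The compression theorem read off from the singleton case (my paraphrase);
% S ranges over space-constructible bounds.
\[
  \forall e \;\exists \text{ predicate } P_e :\quad
  P_e \text{ is computable in space } S
  \;\Longleftrightarrow\;
  S(n) \,\ge\, S_e(n) - O(1).
\]
```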

The first characterization is important when descriptions by an infinite number of functions are considered. For example, let us prove the following:

Theorem. There is a decision problem, solvable in polynomial (in fact, quadratic) space, that has no best (up to constant factors) algorithm.

Proof. Let e_k be a program that computes \lceil n^{1+\frac{1}{k}} \rceil, written so that k is just a hard-coded constant. The idea is for the set of the e_k to be recursively enumerable (and hence, in particular, describable by a \Sigma_2 predicate, as the first characterization requires). Note that all these functions are constructible, that is, prospective space bounds.

Then, by the fundamental theorem, there is a decision problem solvable in space S if and only if S is at least one of these functions (up to an additive constant). If some algorithm for this problem takes at least \lceil n^{1+\frac{1}{k}} \rceil space, then there is also a solution in \lceil n^{1+\frac{1}{k+1}} \rceil, and so forth.
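To see the family of bounds concretely, here is a tiny Python sketch (purely illustrative, not part of the proof) of the functions \lceil n^{1+1/k} \rceil and of the fact that each is asymptotically dominated by its predecessor:

```python
# The family of bounds from the proof, n |-> ceil(n^(1 + 1/k)), sketched as
# ordinary Python functions (illustration only).
import math

def e_k(k):
    """The k-th bound in the family: n |-> ceil(n^(1 + 1/k))."""
    return lambda n: math.ceil(n ** (1 + 1 / k))

# Each bound is asymptotically smaller than its predecessor, so no single
# member of the family can serve as the space bound of a best algorithm:
n = 10**6
for k in (1, 2, 3):
    print(k, e_k(k)(n), e_k(k + 1)(n))   # e.g. 10^12 vs 10^9 for k = 1
```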

Limitations and some loose ends

As exciting as I find this theorem, it has its limitations. Not all the speedup-related results seem to follow from it; for instance, Levin’s other theorem (on optimal honest algorithms) doesn’t – or I couldn’t see how. Also, results like those given here and here, about the measure or topological category of sets such as the set of functions that have (or do not have) speedup, do not seem to follow. In fact, Seiferas and Meyer only prove a handful of “classic” results like Blum speedup, the Compression theorem and a Gap theorem. What new questions about complexity sets can be asked and answered using these techniques?

Another limitation is that for complexity measures other than space we do not have such tight results. So, for example, for Turing-machine time we are stuck with the not-so-tight hierarchy theorems proved by diagonalization, padding etc. (see references in the bibliography). Is this a problem with our proof methods? Or could some surprising speedup phenomenon be lurking there?

[Oct 16, 2012. Fixed error in last theorem]

4 Responses to “A Fundamental Theorem on Optimality and Speedup (guest post by Amir Ben-Amram)”

  1. raingomm Says:

    Could you please elaborate “If some algorithm for this problem takes at least n^{1+1/k} space, then there is also a solution in n^{1+1/(1+k)}, and so forth.”?

    • amirbenamram Says:

      Better stated: “since every algorithm for the problem has to be lower-bounded by some e_k = n^{1+1/k}, then, given any such algorithm, there is also an asymptotically better one, matching the lower bound e_{k+1} = n^{1+1/(1+k)}”.

