## Archive for the ‘Matrix multiplication’ Category

### Limits on the recent progress on matrix multiplication algorithms (guest post by Yuval Filmus)

July 20, 2014

Virginia Vassilevska Williams stunned the world of computer science by presenting the first improvement to the matrix multiplication constant ω in twenty years (later it was found that more modest progress had been obtained independently and slightly earlier by Andrew James Stothers). The current world record, ω < 2.3728639, is held by François Le Gall, who obtained it by perfecting the methods of Vassilevska Williams. For an exposition of this line of work, check out my recent lecture notes. For the full details, I recommend Le Gall’s paper and the journal version of Stothers’ thesis (with his advisor A. M. Davie).

The recent progress heavily builds on classic work by Coppersmith and Winograd. Briefly, they came up with a basic identity known as the *Coppersmith–Winograd identity*. Applying Strassen’s laser method with their own ingenious construction (relying on Salem–Spencer sets), they obtained the bound ω < 2.388. Applying their method to the *tensor square* of the basic identity, they obtained the improved bound ω < 2.376. That’s where things had been standing since 1990, until Stothers managed to perform the computations for the *fourth tensor power*, obtaining the bound ω < 2.3737 in 2010. A year later (but independently), Vassilevska Williams analyzed the *eighth tensor power* and obtained the bound ω < 2.3729. Further progress was obtained by Le Gall, who analyzed the *sixteenth* and *thirty-second tensor powers*, obtaining the current world record stated above.

Although taking ever higher tensor powers improves the bound on ω, the improvements are diminishing, and it seems safe to conjecture that taking the Nth tensor power does not yield the conjectured ω = 2 as N → ∞. However, how can we be certain? Perhaps the improvements slow down, but, like a slowly divergent series, eventually go all the way down to 2? In the rest of the blog post, we describe a recent result of Andris Ambainis and myself, which shows that the best bound this method can produce is ω < 2.3725, for *any* value of N.

In fact, our result allows for a wider class of methods which utilize the Coppersmith–Winograd identity, and hints at a new technique which can potentially lead to better algorithms. Very recently, Le Gall was able to use our methods to show that the best bound that can be obtained by taking an Nth tensor power of the Coppersmith–Winograd identity is roughly 2.3725, and so the current analysis of the identity is quite tight. (more…)

### The asymptotic sum inequality is not optimal (guest post by Yuval Filmus)

July 15, 2014

Matrix multiplication has become a popular topic recently. The main goal in this area is determining the value of the matrix multiplication constant ω, which is the infimum over all c such that two n × n complex matrices can be multiplied using O(n^c) arithmetic operations (addition, subtraction, multiplication, and division; arbitrary complex constants can appear as operands). Sloppy authors often define ω as a minimum instead of an infimum, but it is not clear that any O(n^ω) algorithm exists. Indeed, given the common belief that ω = 2, a classic result of Ran Raz, which gives a lower bound of Ω(n^2 log n) assuming all the constants used are small, suggests that there is no O(n^ω) algorithm (though there could be an asymptotically optimal algorithm, say one which runs in time O(n^2 log n)).

In this blog post we describe a result due to Coppersmith and Winograd that implies that a certain class of techniques *provably* cannot yield the optimal exponent, i.e. an O(n^ω) algorithm: namely, all algorithms which result from a single invocation of Schonhage’s *asymptotic sum inequality*. This class includes Strassen’s original algorithm and all subsequent algorithms up to Strassen’s *laser method*, which is used in the celebrated algorithm of Coppersmith and Winograd; the laser method corresponds to infinitely many invocations of the asymptotic sum inequality, and so is not subject to this limitation. The proof proceeds by showing how any identity (to which the asymptotic sum inequality can be applied) can be improved to another identity yielding a better bound on ω.

### On the recent progress on matrix multiplication algorithms (guest post by Virginia Vassilevska Williams)

September 22, 2012

A central question in the theory of algorithms is to determine the constant ω, called the exponent of matrix multiplication. This constant is defined as the infimum of all real numbers c such that two n × n matrices can be multiplied in time O(n^c). Until the late 1960s it was believed that ω = 3, i.e. that no improvement over the straightforward cubic-time algorithm can be found for the problem. In 1969, Strassen surprised everyone by showing that two n × n matrices can be multiplied in O(n^2.81) time. This discovery spawned a twenty-year-long, extremely productive period in which the upper bound on ω was gradually lowered to 2.376. After a twenty-year stall, some very recent research has brought the upper bound down to 2.373.

**Bilinear algorithms and recursion.**

Strassen’s approach was to exploit the inherent recursive nature of matrix multiplication: the product of two large matrices can be viewed as the product of two small matrices, the entries of which are themselves matrices. Suppose that we have an algorithm *ALG* that multiplies two k × k matrices. Then one can envision obtaining a fast recursive algorithm for multiplying k^m × k^m matrices (for any integer m) as well: view the k^m × k^m matrices as k × k matrices the entries of which are k^{m−1} × k^{m−1} matrices; then multiply the k × k matrices using *ALG*, and when *ALG* requires us to multiply two matrix entries, recurse.

This approach only works, provided that the operations that *ALG* performs on the matrix entries make sense as matrix operations: e.g. entry multiplication, taking linear combinations of entries etc. One very general type of such algorithm is the so called *bilinear* algorithm: Given two k × k matrices A and B, compute t products p_1, …, p_t,

i.e. take t possibly different linear combinations of entries of A and multiply each one with a possibly different linear combination of entries of B. Then, compute each entry of the product A·B as a linear combination of the products p_1, …, p_t.

Given a bilinear algorithm *ALG* for multiplying two k × k matrices (for constant k) that computes t products p_1, …, p_t, the recursive approach that multiplies k^m × k^m matrices using *ALG* gives a bound ω ≤ log_k t. To see this, notice that the number of block additions that one has to do at each level is O(t·k^2): O(k^2) to compute the linear combinations for each p_s, and O(t) for each of the k^2 output entries. Since matrix addition takes linear time in the matrix size, we have a recurrence of the form T(k^m) = t·T(k^{m−1}) + O(k^{2m}), which solves to T(n) = O(n^{log_k t}) as long as log_k t > 2.

As long as t < k^3 we get a nontrivial bound on ω. Strassen’s famous algorithm used k = 2 and t = 7, thus showing that ω ≤ log_2 7 < 2.81. A lot of work went into getting better and better “base algorithms” for varying constants k. Methods such as Pan’s method of trilinear aggregation were developed. This approach culminated in Pan’s algorithm (1978) for multiplying 70 × 70 matrices that used 143,640 products and hence showed that ω ≤ log_70 143640 < **2.796.**
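To make the recursion concrete, here is a minimal Python sketch (an illustration, not an optimized implementation) of Strassen's k = 2, t = 7 base algorithm applied recursively to power-of-two sizes, together with the exponent bound log_k t that a base algorithm yields:

```python
import math

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    """Multiply two 2^m x 2^m matrices using 7 recursive products per level."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M):  # split M into four h x h blocks
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    A11, A12, A21, A22 = quad(A)
    B11, B12, B21, B22 = quad(B)
    # Strassen's seven products
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    return [r1 + r2 for r1, r2 in zip(C11, C12)] + \
           [r1 + r2 for r1, r2 in zip(C21, C22)]

def exponent_bound(k, t):
    """The bound omega <= log_k t from a k x k base algorithm with t products."""
    return math.log(t) / math.log(k)

print(exponent_bound(2, 7))        # Strassen: ~2.807
print(exponent_bound(70, 143640))  # Pan (1978): ~2.795
```

The same `exponent_bound` helper reproduces both bounds mentioned above; the recursion itself works for any power-of-two dimension.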

**Approximate algorithms and Schonhage’s theorem.**

A further step was to look at more general algorithms, so called *approximate* bilinear algorithms. In the definition of a bilinear algorithm the coefficients in the linear combinations were constants. In an approximate algorithm, these coefficients can be formal linear combinations of integer powers of an indeterminate ε (e.g. 3ε^{−1} + 2ε). The entries of the product are then only “approximately” computed, in the sense that each output entry equals the desired bilinear form plus an error term that is a linear combination of *positive* powers of ε. The term “approximate” comes from the intuition that if you set ε to be a number close to 0, then the algorithm would get the product almost exactly.
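Bini et al.'s actual identity is more involved, but the flavor of approximate algorithms is captured by a classic toy example (a hypothetical mini-instance, not their construction): the bilinear form x0·y1 + x1·y0 requires 3 products from any exact bilinear algorithm, yet 2 suffice approximately, at the cost of an O(ε) error. A small sketch using exact rationals, with ε as an actual small number:

```python
from fractions import Fraction

def approx_form(x0, x1, y0, y1, eps):
    """Approximately compute z = x0*y1 + x1*y0 with only 2 products.
    Expanding: p1 - p2 = eps*(x0*y1 + x1*y0) + eps^2*x1*y1,
    so dividing by eps leaves the target plus an eps*x1*y1 error."""
    p1 = (x0 + eps * x1) * (y0 + eps * y1)  # product 1
    p2 = x0 * y0                            # product 2
    return (p1 - p2) / eps

eps = Fraction(1, 10**6)
z = approx_form(2, 3, 5, 7, eps)   # exact answer is 2*7 + 3*5 = 29
print(z - 29)                       # leftover error: eps * x1 * y1 = 21/10^6
```

Setting ε smaller and smaller makes the error term vanish, exactly matching the intuition in the paragraph above.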

Interestingly enough, Bini et al. (1980) showed that when dealing with the asymptotic complexity of matrix multiplication, approximate algorithms suffice for obtaining bounds on ω. This is not obvious! What Bini et al. show, in a sense, is that as the size of the matrices grows, the “approximation” part can be replaced by a sort of bookkeeping which does not present an overhead asymptotically. The upshot is that if there is an *approximate* bilinear algorithm that computes t products to compute the product of two k × k matrices, then ω ≤ log_k t.

Bini et al. (1979) gave the first approximate bilinear algorithm for a matrix product. Their algorithm used 10 entry products to multiply a 3 × 2 matrix with a 2 × 2 matrix. Although this algorithm is for rectangular matrices, it can easily be converted into one for square matrices: a 12 × 12 matrix is a 3 × 2 matrix with entries that are 2 × 2 matrices with entries that are 2 × 3 matrices, and so multiplying 12 × 12 matrices can be done recursively using Bini et al.’s algorithm three times, taking 10^3 = 1000 entry products. Hence ω ≤ log_12 1000 < 2.78.

Schonhage (1981) developed a sophisticated theory involving the bilinear complexity of rectangular matrix multiplication that showed that approximate bilinear algorithms are even more powerful. His paper culminated in something called the Schonhage τ-theorem, or the asymptotic sum inequality. This theorem is one of the most useful tools in designing and analyzing matrix multiplication algorithms.

Schonhage’s τ-theorem says roughly the following. Suppose we have several instances of matrix multiplication, each involving matrices of possibly different dimensions, and we are somehow able to design an approximate bilinear algorithm that solves all instances and uses fewer products than would be needed when computing each instance separately. Then this bilinear algorithm can be used to multiply (larger) square matrices, and it implies a nontrivial bound on ω.

What is interesting about Schonhage’s theorem is that it is believed that when it comes to *exact* bilinear algorithms, one cannot use fewer products to compute several instances than one would use by just computing each instance separately. This is known as Strassen’s additivity conjecture. Schonhage showed that the additivity conjecture is false for *approximate* bilinear algorithms. In particular, he showed that one can approximately compute the product of a 4 × 1 vector by a 1 × 4 vector together with the product of a 1 × 9 vector by a 9 × 1 vector using only 17 entry products, whereas any exact bilinear algorithm for the two instances needs at least 25 products. His theorem then implied ω < 2.55, and this was a huge improvement over the previous bound of Bini et al.
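In its standard modern formulation (an assumption here, as the post does not spell it out), the asymptotic sum inequality says: if a direct sum of matrix-product instances ⟨m_i, n_i, p_i⟩ can be computed approximately with r entry products, then Σ_i (m_i·n_i·p_i)^{ω/3} ≤ r. A quick numeric sketch solves this for the exponent by bisection, using Schonhage's instance above (sizes 4·1·4 = 16 and 1·9·1 = 9, with r = 17):

```python
def tau_bound(dims, r):
    """Solve sum(d**tau for d in dims) = r for tau by bisection;
    the asymptotic sum inequality then gives omega <= 3*tau.
    dims lists the volumes m*n*p of the merged instances."""
    lo, hi = 0.0, 1.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if sum(d ** mid for d in dims) > r:
            hi = mid
        else:
            lo = mid
    return 3 * lo

# Schonhage's example: <4,1,4> plus <1,9,1> with 17 approximate products.
print(tau_bound([16, 9], 17))  # ~2.55
# Sanity check: a single <2,2,2> instance with 7 products recovers Strassen.
print(tau_bound([8], 7))       # ~2.807
```

The single-instance case degenerates to exactly the ω ≤ log_k t bound from earlier, which is a useful consistency check on the formulation.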

**Using fast solutions for problems that are not matrix multiplications.**

The next realization was that there is no immediate reason why the “base algorithm” that we use for our recursion has to compute a matrix product at all. Let us focus on the following family of computational problems. We are given two vectors x and y and we want to compute a third vector z. The dependence of z on x and y is given by a three-dimensional tensor t as follows: z_k = Σ_{i,j} t_{ijk} x_i y_j. Each entry of z is a bilinear form in x and y. The tensor t can be arbitrary, but let us focus on the case where t_{ijk} ∈ {0, 1}. Notice that t completely determines the computational problem. Some examples of such bilinear problems are polynomial multiplication and of course matrix multiplication. For polynomial multiplication, t_{ijk} = 1 if and only if i + j = k, and for matrix multiplication, indexing x, y and z by pairs of matrix positions, t_{(a,b),(c,d),(e,f)} = 1 if and only if b = c, e = a and f = d.

The nice thing about these bilinear problems is that one can easily extend the theory of bilinear algorithms to them. A bilinear algorithm computing a problem instance for tensor t computes r products of the form (a linear combination of the x_i) × (a linear combination of the y_j) and then sets each z_k to be a linear combination of these products. Here, an algorithm is nontrivial if the number of products r that it computes is less than the number of positions where the tensor t is nonzero.
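The tensor formalism is easy to play with directly. In the sketch below a 0/1 tensor is stored as a nested list, and the bilinear problem it defines is evaluated straight from the definition z_k = Σ_{i,j} t[i][j][k]·x_i·y_j; the polynomial-multiplication tensor is the example mentioned above:

```python
def apply_tensor(t, x, y):
    """Evaluate the bilinear problem defined by tensor t:
    z_k = sum over i, j of t[i][j][k] * x_i * y_j."""
    nz = len(t[0][0])
    z = [0] * nz
    for i, x_i in enumerate(x):
        for j, y_j in enumerate(y):
            for k in range(nz):
                z[k] += t[i][j][k] * x_i * y_j
    return z

# Tensor of multiplication of two linear polynomials: t[i][j][k] = 1 iff i + j = k.
poly_mult = [[[1 if i + j == k else 0 for k in range(3)]
              for j in range(2)] for i in range(2)]

print(apply_tensor(poly_mult, [1, 2], [3, 4]))  # (1 + 2s)(3 + 4s) = 3 + 10s + 8s^2
```

The trivial algorithm implicit in `apply_tensor` uses one product per nonzero entry of t; a nontrivial bilinear algorithm would get by with fewer.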

In order to be able to talk about recursion for general bilinear problems, it is useful to define the tensor product t ⊗ t′ of two tensors t and t′: (t ⊗ t′)_{(i,i′),(j,j′),(k,k′)} = t_{ijk} · t′_{i′j′k′}. Thus, the bilinear problem defined by t ⊗ t′ can be viewed as a bilinear problem defined by t, where each product x_i y_j is actually itself a bilinear problem defined by t′.

This allows one to compute an instance of the problem defined by t ⊗ t′ using an algorithm for t and an algorithm for t′. One can similarly define the Nth tensor power t^{⊗N} of a tensor t as tensor-multiplying t by itself N times. Then any bilinear algorithm computing an instance defined by t using r entry products can be used recursively to compute the Nth tensor power of t using r^N products, just as in the case of matrix multiplication.
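A sketch of the tensor product, with tensors stored sparsely as dictionaries mapping index triples to nonzero entries (an encoding chosen here purely for convenience): positions of t ⊗ t′ are pairs of positions, and each entry is the product of the corresponding entries:

```python
def tensor_product(t, u):
    """(t ⊗ u)[(i,i'),(j,j'),(k,k')] = t[i,j,k] * u[i',j',k'].
    Tensors are dicts mapping index triples (i, j, k) to nonzero entries."""
    return {((i, i2), (j, j2), (k, k2)): v * w
            for (i, j, k), v in t.items()
            for (i2, j2, k2), w in u.items()}

# Polynomial-multiplication tensor from before, in sparse form.
poly = {(i, j, i + j): 1 for i in range(2) for j in range(2)}

square = tensor_product(poly, poly)
print(len(square))  # 4 nonzero entries in poly -> 16 in its tensor square
```

Note how the count of nonzero entries multiplies, mirroring the fact that an r-product algorithm for t gives an r^N-product algorithm for t^{⊗N}.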

A crucial development in the study of matrix multiplication algorithms was the discovery that sometimes algorithms for bilinear problems that do not look at all like matrix products can be converted into matrix multiplication algorithms. This was first shown by Strassen in the development of his “laser method” and was later exploited in the work of Coppersmith and Winograd (1987,1990). The basic idea of the approach is as follows.

Consider a bilinear problem t for which you have a really nice approximate algorithm *ALG* that uses r entry products. Take the Nth tensor power t^{⊗N} of t (for large N), and use *ALG* recursively to compute t^{⊗N} using r^N entry products. t^{⊗N} is a bilinear problem that computes a long vector z from two long vectors x and y. Suppose that we can embed the product of two n × n matrices A and B into t^{⊗N} as follows: we put each entry of A into some position of x and set all other positions of x to 0, we similarly put each entry of B into some position of y and set all other positions of y to 0, and finally we argue that each entry of the product A·B appears in some position of the computed vector z (all other entries are 0). Then we would have a bilinear algorithm for computing the product of two n × n matrices using r^N entry products, and hence ω ≤ log_n(r^N).

The goal is to make n as large a function of N as possible, thus minimizing the upper bound on ω.

Strassen’s laser method and Coppersmith and Winograd’s paper, and even Schonhage’s τ-theorem, present ways of embedding a matrix product into a large tensor power of a different bilinear problem. The approaches differ in the starting algorithm and in the final matrix product embedding. We’ll give a very brief overview of the Coppersmith-Winograd algorithm.

**The Coppersmith-Winograd algorithm.**

The bilinear problem that Coppersmith and Winograd start with is as follows. Let q be an integer. Then we are given two vectors x = (x_0, x_1, …, x_{q+1}) and y = (y_0, y_1, …, y_{q+1}) of length q + 2 and we want to compute a vector z of length q + 2 defined as follows:

z_i = x_0 y_i + x_i y_0

for i = 1, …, q, z_0 = x_1 y_1 + … + x_q y_q + x_0 y_{q+1} + x_{q+1} y_0, and z_{q+1} = x_0 y_0.

Notice that z is far from being a matrix product. However, it is related to matrix products:

x_1 y_1 + … + x_q y_q, which is the inner product of two q-length vectors,

x_0 y_{q+1}, x_{q+1} y_0, and x_0 y_0, which are three scalar products, and

the two matrix products computing x_0 y_i and x_i y_0 for i = 1, …, q, which are both products of a vector with a scalar.

If we could somehow convert Coppersmith and Winograd’s bilinear problem into one of computing these products as *independent* instances, then we would be able to use Schonhage’s τ-theorem. Unfortunately, however, the matrix products are merged in a strange way, and it is unclear whether one can get anything meaningful out of an algorithm that solves this bilinear problem.

Coppersmith and Winograd develop a multitude of techniques to show that when one takes a large tensor power of the starting bilinear problem, one can actually decouple these merged matrix products, and one can indeed apply the τ-theorem. The τ-theorem then gives the final embedding of a large matrix product into a tensor power of the original construction, and hence defines a matrix multiplication algorithm.

Their approach combines several impressive ingredients: sets avoiding 3-term arithmetic progressions (Salem–Spencer sets), hashing and the probabilistic method. The algorithm computing their base bilinear problem is also impressive. The number of entry products it computes is q + 2, which is exactly the length of the output vector z! That is, their starting algorithm is optimal for the particular problem that they are solving.

What is not optimal, however, is the analysis of how good of a matrix product algorithm one can obtain from the base algorithm. Coppersmith and Winograd noticed this themselves: They first applied their analysis to the original bilinear algorithm and obtained an embedding of an n × n matrix product into the N-tensor power of the bilinear problem, for n given by some explicit function of N. (Then they let N go to infinity and obtained ω < 2.388.) Then they noticed that if one applies the analysis to the second tensor power of the original construction, then one obtains an embedding of an n′ × n′ matrix product into the same N-tensor power, where n′ > n. That is, although one is considering embeddings into the same (N) tensor power of the construction, the analysis crucially depends on which tensor power of the construction you start from! This led to the longstanding bound ω < 2.376. Coppersmith and Winograd left as an open problem what bound one can get if one starts from the third or larger tensor powers.

**The recent improvements on ω.**

It seems that many researchers attempted to apply the analysis to the third tensor power of the construction, but this somehow did not improve the bound on ω. Because of this, and since each new analysis required a lot of work, the approach was abandoned, at least until 2010. In 2010, Andrew Stothers carried through the analysis of the fourth tensor power and discovered that the bound on ω can be improved to 2.3737.

As mentioned earlier, the original Coppersmith-Winograd bilinear problem was related to different matrix multiplication problems that were merged together. A higher tensor power of the bilinear problem is similarly composed of merged instances of simpler bilinear problems; however, these instances may no longer be matrix multiplications. When applying a Coppersmith-Winograd-like analysis to a tensor power, there are two main steps.

The first step involves analyzing each of the merged bilinear problems, intuitively in terms of how close they are to matrix products; there is a formal definition of the similarity measure, called the *value* of the bilinear form. The second step defines a family of matrix product embeddings in the tensor power in terms of the values. These embeddings are defined via some variables and constraints, and each represents some matrix multiplication algorithm. Finally, one solves a nonlinear optimization program to find the best among these embeddings, essentially finding the best matrix multiplication algorithm in the search space.

Both the Coppersmith-Winograd paper and Stothers’ thesis perform an entirely new analysis for each new tensor power. The main goal of my work was to provide a general framework so that the two steps of the analysis do not have to be redone for each new tensor power. My paper first shows that the first step, the analysis of each of the values, can be completely automated by solving linear programs and simple systems of linear equations. This means that instead of proving theorems one only needs to solve linear programs and linear systems, a simpler task. My paper then shows that the second step of the analysis, the theorem defining the search space of algorithms, can also be replaced by just solving a simple system of linear equations. (Amusingly, the fact that matrix multiplication algorithms can be used to solve linear equations implies that good matrix multiplication algorithms can be used to search for better matrix multiplication algorithms.) Together with the final nonlinear program, this presents a fully automated approach to performing a Coppersmith-Winograd-like analysis.

After seeing Stothers’ thesis in the summer of last year, I was impressed by a shortcut he had used in the analysis of the values of the fourth tensor power. This shortcut gave a way to use recursion in the analysis, and I was able to incorporate it in my analysis to show that the number of linear programs and linear systems one would need to solve to compute the values for a tensor power drops down dramatically, at least when the power is a power of 2. This drop in complexity allowed me to analyze the eighth tensor power, thus obtaining an improvement in the bound on ω: ω < 2.3727.

There are several lingering open questions. The most natural one is: how does the bound on ω change when applying the analysis to higher and higher tensor powers? I am currently working together with a Stanford undergraduate on this problem: we’ll apply the automated approach to several consecutive powers, hoping to uncover a pattern, so that one can then mathematically analyze what bounds on ω can be proven with this approach.

A second open question is more far reaching: the Coppersmith-Winograd analysis is not optimal. In a sense, it computes an approximation to the best embedding of a matrix product in the tensor power of their bilinear problem. What is the optimal embedding? Can one analyze it mathematically? Can one automate the search for it?

Finally, I am fascinated by automating the search for algorithms for problems. In the special case of matrix multiplication we were able to define a search space of algorithms and then use software to optimize over this search space. What other computational problems can one approach in this way?

### Combinatorial problems related to matrix multiplication (guest post by Chris Umans)

September 10, 2012

Determining the exponent of matrix multiplication, ω, is one of the most prominent open problems in algebraic complexity. Strassen famously showed that ω < 2.81 in 1969, and after a sequence of works culminating in work of Stothers and Williams the best current upper bound is ω < 2.373. It is believed that indeed ω = 2, meaning that there is a family of algorithms with running time O(n^{2+ε}) for every ε > 0. In this sense, if ω = 2, matrix multiplication exhibits “speedup”: there is no single best asymptotic algorithm, only a sequence of algorithms having running times O(n^{c_1}), O(n^{c_2}), …, with the c_i approaching ω. In fact Coppersmith and Winograd showed in 1982 that this phenomenon occurs whatever the true value of ω is (two, or larger than two), a result that is often summarized by saying “the exponent of matrix multiplication is an infimum, not a minimum.”

Despite the algebraic nature of the matrix multiplication problem, many of the suggested routes to proving ω = 2 are combinatorial. This post is about connections between one such combinatorial conjecture (the “Strong Uniquely Solvable Puzzle” conjecture of Cohn, Kleinberg, Szegedy and Umans) and some more well-known combinatorial conjectures. These results appear in this recent paper with Noga Alon and Amir Shpilka.

### The cap set conjecture, a strengthening, and a weakening

We start with a well-known, and stubborn, combinatorial problem, the “cap-set problem.” A cap-set in Z_3^n is a subset of vectors in Z_3^n with the property that no three vectors (not all the same) sum to 0. The best lower bound, a construction of size roughly 2.217^n, is due to Edel. Let us denote by the “cap set conjecture” the assertion that one can actually find sets of size 3^{n − o(n)}. It appears that there is no strong consensus on whether this conjecture is true or false, but one thing that we do know is that it is a popular blog topic (see here, here, here, and the end of this post).

Now it happens that the only triples of elements of Z_3 that sum to 0 are the triples (x, x, x) and the permutations of (0, 1, 2): triples that are “all the same, or all different.” So the cap set conjecture can be rephrased as the conjecture that there are large subsets of vectors in Z_3^n so that for any x, y, z in the set (not all equal), there is some coordinate i such that x_i, y_i, z_i are “not all the same, and not all different.” Generalizing from this interpretation, we arrive at a family of much stronger assertions, one for each D ≥ 3: let’s denote by the “D conjecture” the assertion that there are subsets of vectors in Z_D^n, of size D^{n − o(n)}, with the property that for any x, y, z in the set (not all equal), there is some coordinate i such that x_i, y_i, z_i are “not all the same, and not all different.” Such sets over Z_D imply similarly large sets of this type over Z_3, by viewing each symbol of Z_D as a block of symbols of Z_3, so the D conjecture is stronger (i.e. it implies the cap-set conjecture). Indeed, if you play around with constructions you can see that as D gets larger it seems harder and harder to have large sets avoiding triples of vectors for which every coordinate has “all different.” Thus one would probably guess that the D conjecture is false for large D.
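The coordinate condition is easy to test by brute force on small examples. The sketch below checks the defining property of the D conjecture, which for D = 3 coincides with being a cap set; the size-4 example is a largest cap set in Z_3^2, while the "diagonal line" fails because every coordinate of the offending triple is all-different:

```python
from itertools import product

def has_property(S):
    """Check: for every triple x, y, z in S (not all equal), some coordinate
    holds exactly two distinct values ("not all the same, not all different")."""
    for x, y, z in product(S, repeat=3):
        if x == y == z:
            continue
        if not any(len({a, b, c}) == 2 for a, b, c in zip(x, y, z)):
            return False
    return True

cap = [(0, 0), (0, 1), (1, 0), (1, 1)]   # a largest cap set in Z_3^2
line = [(0, 0), (1, 1), (2, 2)]          # these three vectors sum to 0
print(has_property(cap), has_property(line))  # True False
```

The same checker works verbatim for any alphabet size D, since only the "exactly two distinct values" test matters.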

One of the results in our paper is that the D conjecture is in fact *equivalent* to falsifying the following well-known sunflower conjecture of Erdos-Szemeredi: there exists an ε > 0 such that any family of at least 2^{(1−ε)n} subsets of an n-element universe contains a 3-sunflower (i.e., three sets whose pairwise intersections are all equal).

So the intuition that the D conjecture is false agrees with Erdos and Szemeredi’s intuition, which is a good sign.

Now let’s return to the cap-set conjecture in Z_3^n. Being a cap set is a condition on *all* triples of vectors in the set. If one restricts to a condition on only some of the triples of vectors, then a construction becomes potentially easier. We will be interested in a “multicolored” version in which vectors are assigned one of three colors (say red, blue, green), and we only require that no triple of vectors x + y + z sums to 0 with x being red, y being blue, and z being green. But a moment’s thought reveals that the problem has become too easy: one can take (say) the red vectors to be all vectors beginning with 0, the blue vectors to be all vectors beginning with 0, and the green vectors to be all vectors beginning with 1; no red-blue-green triple can then sum to 0 in the first coordinate. A solution that seems to recover the original flavor of the problem is to insist that the vectors come from a collection of red, blue, green triples (r_i, b_i, g_i) with r_i + b_i + g_i = 0; we then require that every red, blue, green triple *except those in the original collection* not sum to 0. So, let’s denote by the “multicolored cap-set conjecture” the assertion that there are subsets of *triples of vectors* (r_i, b_i, g_i) from Z_3^n of size 3^{n − o(n)}, with each triple summing to 0, and with the property that for any i, j, k not all equal, r_i + b_j + g_k ≠ 0.

If S is a cap set in Z_3^n, then the collection of triples {(x, x, x) : x ∈ S} constitutes a multicolored cap-set of the same size, so the multicolored version of the conjecture is indeed weaker (i.e. it is implied by the cap-set conjecture).
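A brute-force check of the multicolored condition, together with the diagonal (x, x, x) construction from a cap set just described (a small sketch; triples are stored as (r, b, g) tuples of vectors):

```python
def is_multicolored_capset(T):
    """T is a list of triples (r, b, g) of vectors over Z_3.
    Each triple must itself sum to 0, and every cross sum
    r_i + b_j + g_k with (i, j, k) not all equal must be nonzero."""
    for r, b, g in T:
        if any((x + y + z) % 3 for x, y, z in zip(r, b, g)):
            return False  # a triple in the collection fails to sum to 0
    m = len(T)
    for i in range(m):
        for j in range(m):
            for k in range(m):
                if i == j == k:
                    continue
                r, b, g = T[i][0], T[j][1], T[k][2]
                if all((x + y + z) % 3 == 0 for x, y, z in zip(r, b, g)):
                    return False  # a forbidden cross triple sums to 0
    return True

cap = [(0, 0), (0, 1), (1, 0), (1, 1)]   # the cap set from the earlier example
T = [(x, x, x) for x in cap]             # one diagonal triple per vector
print(is_multicolored_capset(T))  # True
```

Replacing `cap` with a non-cap-set such as [(0, 0), (1, 1), (2, 2)] makes the check fail, exactly as the reduction above predicts.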

### A matrix multiplication conjecture between the two

The SUSP conjecture of Cohn, Kleinberg, Szegedy, and Umans is the following: there exist subsets of *triples of vectors* (a_i, b_i, c_i) from {0, 1}^n of size ((27/4)^{1/3} − o(1))^n, with each triple summing (in the integers) to the all-ones vector, and with the property that for any i, j, k not all equal, there is a coordinate in which the entries of a_i, b_j, c_k sum to exactly 2.

The mysterious constant (27/4)^{1/3} ≈ 1.89 comes from an easy counting argument, together with the observation that one cannot have multiple triples with the same “a vector” (or b vector, or c vector). More specifically, “most” triples that sum to the all-ones vector are balanced (meaning that a, b and c each have weight n/3), and there can be at most (n choose n/3) ≈ ((27/4)^{1/3})^n balanced triples without repeating an a vector. So the conjecture is that there are actually subsets satisfying the requirements, whose sizes approach this easy upper bound.
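The counting behind the constant can be checked numerically: the number of weight-n/3 vectors in {0,1}^n is (n choose n/3), whose n-th root approaches (27/4)^{1/3}. A short sketch, using log-gamma to avoid astronomically large integers:

```python
import math

def balanced_count_growth(n):
    """n-th root of C(n, n//3), computed via log-gamma."""
    k = n // 3
    lg = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
    return math.exp(lg / n)

print((27 / 4) ** (1 / 3))           # ~1.8899, the SUSP constant
for n in (30, 300, 3000):
    print(balanced_count_growth(n))  # increases toward the same constant
```

The polynomial correction factors in Stirling's approximation vanish in the n-th root, which is why the two quantities agree in the limit.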

If in the statement of the SUSP conjecture one replaces “there is a coordinate in which the entries of a_i, b_j, c_k sum to exactly 2” with “there is a coordinate in which they sum to at least 2,” one gets the weaker “Uniquely Solvable Puzzle” conjecture instead of the “Strong Uniquely Solvable Puzzle” conjecture. Here one is supposed to interpret the triples (a, b, c) as triples of “puzzle pieces” that fit together (i.e. their 1’s exactly cover the n coordinates without overlap); the main requirement then ensures that no other way of “assembling the puzzle” fits together in this way, thus it is “uniquely solvable.” It is a consequence of the famous Coppersmith-Winograd paper that the Uniquely Solvable Puzzle conjecture is indeed true. Cohn, Kleinberg, Szegedy and Umans showed that if the stronger SUSP conjecture is true, then ω = 2.

Two of the main results in our paper are that (1) the D conjecture implies the SUSP conjecture, and (2) the SUSP conjecture implies the multicolored cap-set conjecture.

So by (1), *disproving the SUSP conjecture* is as hard as proving the Erdos-Szemeredi conjecture (which, recall, is equivalent to disproving the D conjecture). Of course we hope the SUSP conjecture is true, but if it is not, it appears that this will be difficult to prove.

And by (2), proving the SUSP conjecture entails proving the multicolored cap-set conjecture. So apart from being a meaningful weakening of the famous cap-set conjecture in Z_3^n, the multicolored cap-set conjecture can be seen as a “warm-up” to (hopefully) proving the SUSP conjecture and establishing ω = 2. As a start, we show in our paper a lower bound for multicolored cap-sets that already beats the lower bound of Edel for ordinary cap sets.

Returning to “speedup,” the topic of this blog, notice that the SUSP conjecture, as well as the cap-set, the multicolored cap-set, and the D conjectures, all assert that there exist sets of size (c − o(1))^n with certain properties, for various constants c. In all cases size c^n itself is easily ruled out, and so all one can hope for is a sequence of sets, one for each n, whose sizes approach c^n as n grows. If the SUSP conjecture is true, then it is this sequence of sets that directly corresponds to a sequence of matrix multiplication algorithms with running times O(n^{c_i}) for c_i approaching 2. This would then be a concrete manifestation of the “speedup” phenomenon for matrix multiplication.