Archive for July, 2014

Limits on the recent progress on matrix multiplication algorithms (guest post by Yuval Filmus)

July 20, 2014

Virginia Vassilevska Williams stunned the world of computer science by presenting the first improvement to the matrix multiplication constant \omega in twenty years (it later emerged that more modest progress had been made independently and slightly earlier by Andrew James Stothers). The current world record is held by François Le Gall, who showed that \omega < 2.3728639 by perfecting the methods of Vassilevska Williams. For an exposition of this line of work, check out my recent lecture notes. For the full details, I recommend Le Gall’s paper and the journal version of Stothers’ thesis (written with his advisor A. M. Davie).

The recent progress builds heavily on classic work by Coppersmith and Winograd. Briefly, they came up with a basic identity known as the Coppersmith–Winograd identity. Applying Strassen’s laser method with their own ingenious construction (relying on Salem–Spencer sets), they obtained the bound \omega < 2.388. Applying the same method to the tensor square of the basic identity, they obtained the improved bound \omega < 2.376.

That is where things stood from 1990 until 2010, when Stothers managed to perform the computations for the fourth tensor power, obtaining the bound \omega < 2.373. A year later (but independently), Vassilevska Williams analyzed the eighth tensor power and obtained the bound \omega < 2.3729. Further progress was made by Le Gall, who analyzed the sixteenth and thirty-second tensor powers, obtaining the current world record stated above.

Although taking ever higher tensor powers improves the bound on \omega, the improvements are diminishing, and it seems safe to conjecture that taking the Nth tensor power does not yield the conjectured \omega = 2 as N \to \infty. But how can we be certain? Perhaps the improvements merely slow down but, like the partial sums of a slowly divergent series, eventually go all the way down to 2?
In the rest of this blog post, we describe a recent result of Andris Ambainis and myself, which shows that the best bound this method can produce is \omega < 2.3078, for any value of N. In fact, our result applies to a wider class of methods utilizing the Coppersmith–Winograd identity, and hints at a new technique which could potentially lead to better algorithms. Very recently, Le Gall was able to use our methods to show that the best bound obtainable by taking an Nth tensor power of the Coppersmith–Winograd identity is \omega < 2.3725, so the current analysis of the identity is quite tight. (more…)

The asymptotic sum inequality is not optimal (guest post by Yuval Filmus)

July 15, 2014

Matrix multiplication has become a popular topic recently. The main goal in this area is to determine the value of the matrix multiplication constant \omega, which is the infimum over all \alpha such that two n\times n complex matrices can be multiplied using O(n^\alpha) arithmetic operations (addition, subtraction, multiplication, and division; arbitrary complex constants may appear as operands). Sloppy authors often define \omega as a minimum rather than an infimum, but it is not clear that any O(n^\omega) algorithm exists. Indeed, given the common belief that \omega = 2, a classic result of Ran Raz, which gives a lower bound of \Omega(n^2\log n) assuming all the constants used are small, suggests that there is no O(n^\omega) algorithm (though there could be an asymptotically optimal algorithm, say one running in time O(n^2\log n)).
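To make the arithmetic-operation model concrete, here is a minimal pure-Python sketch of Strassen's classic algorithm, which multiplies two n\times n matrices (n a power of two, for simplicity) using seven recursive multiplications instead of eight, yielding the bound \omega \le \log_2 7 \approx 2.807. The function names are ours, and no attempt is made at efficiency:

```python
def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    """Multiply two n x n matrices (n a power of two) via 7 recursive products."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # Split each matrix into four h x h quadrants.
    A11 = [row[:h] for row in A[:h]]; A12 = [row[h:] for row in A[:h]]
    A21 = [row[:h] for row in A[h:]]; A22 = [row[h:] for row in A[h:]]
    B11 = [row[:h] for row in B[:h]]; B12 = [row[h:] for row in B[:h]]
    B21 = [row[:h] for row in B[h:]]; B22 = [row[h:] for row in B[h:]]
    # Strassen's seven products (instead of the naive eight).
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    # Reassemble the quadrants of the product.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    return [r1 + r2 for r1, r2 in zip(C11, C12)] + \
           [r1 + r2 for r1, r2 in zip(C21, C22)]
```

The recursion satisfies T(n) = 7\,T(n/2) + O(n^2), which solves to O(n^{\log_2 7}); all subsequent improvements to \omega follow this same template of trading multiplications for additions, in increasingly sophisticated ways.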

In this blog post we describe a result due to Coppersmith and Winograd which implies that a certain class of techniques provably cannot yield an optimal exponent, i.e. an O(n^\omega) algorithm: namely, all algorithms resulting from a single invocation of Schönhage’s asymptotic sum inequality. This class includes Strassen’s original algorithm and all subsequent algorithms up to, but not including, Strassen’s laser method, used in the celebrated algorithm of Coppersmith and Winograd; the laser method corresponds to infinitely many invocations of the asymptotic sum inequality, and so is not subject to this limitation. The proof proceeds by showing how any identity (to which the asymptotic sum inequality can be applied) can be improved to another identity yielding a better bound on \omega.

(more…)