Monthly Archives: November 2010

Overview of Exponential Time Algorithms, Early 2009

This is an overview of the state of research in exponential time algorithms, which we submitted as the scientific motivation for a Dagstuhl seminar in April 2009. See my longer organiser’s report below.

The seminar, 10441, is now over, but I think the proposal is a competent snapshot of a rapidly evolving field.

A decade before NP-completeness became the lens through which Computer Science views computationally hard problems, beautiful algorithms were discovered that are much better than exhaustive search, for example Bellman’s 1962 dynamic programming treatment of the Traveling Salesman problem and Ryser’s 1963 inclusion–exclusion formula for the permanent.
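
To fix ideas, here is a minimal Python sketch (not from the proposal) of Bellman’s dynamic program, often called the Bellman–Held–Karp algorithm, which solves the Traveling Salesman problem in time O*(2^n) instead of the n! of exhaustive search. The function name and input format are my own choices for illustration.

```python
from itertools import combinations

def tsp_cost(dist):
    """Bellman-style O*(2^n) dynamic program for the Traveling Salesman
    problem. dist[i][j] is the cost of going from city i to city j;
    the tour starts and ends at city 0."""
    n = len(dist)
    # best[(S, j)]: cheapest path starting at 0, visiting exactly the
    # cities in frozenset S (0 not in S), and ending at j in S.
    best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                best[(S, j)] = min(best[(S - {j}, k)] + dist[k][j]
                                   for k in subset if k != j)
    cities = frozenset(range(1, n))
    return min(best[(cities, j)] + dist[j][0] for j in range(1, n))

# Example: a 4-city instance with a symmetric distance matrix.
example = [[0, 2, 9, 10],
           [2, 0, 6, 4],
           [9, 6, 0, 3],
           [10, 4, 3, 0]]
print(tsp_cost(example))  # 18: the tour 0 -> 1 -> 3 -> 2 -> 0
```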

Today we know that NP-hard problems are unlikely to admit polynomial-time algorithms, yet NP-hard problems must be solved, in some way, for everything from manufacturing and supply-chain optimization to railroad timetabling. Approaches include approximation algorithms, heuristics, average-case analysis, and exact exponential-time algorithms: all are essential. While all NP-complete problems are equivalent from the polynomial-time perspective, their exponential-time properties vary widely. Which problems are easiest or hardest? What are the promising algorithmic techniques? What are the connections with parameterized complexity? How fast an algorithm can we hope for? What about complexity lower bounds?

Work addressing such questions, from both the algorithmic and the complexity-theoretic sides, has become known as exact complexity. Despite significant progress, the area is still fairly new and many fundamental problems remain open. Where the approximation algorithms field, for example, has unifying algorithmic techniques such as LP rounding and semidefinite programming, and hardness techniques from probabilistically checkable proofs and the Unique Games Conjecture, much work on exact algorithms is still specific to a particular NP-complete problem: powerful unified techniques are only just emerging.

Exciting new results and directions have been established by scientists on several continents, with important contributions coming from young researchers such as Williams and Traxler. The purpose of this workshop is to accelerate developments in this late-blooming field. Below, we outline several new results and promising directions.

The Tutte polynomial of an n-vertex, m-edge graph can trivially be evaluated in time O*(2^m), but no vertex-parameterized algorithm is obvious. The Potts (q-coloring) partition function can trivially be evaluated in time O*(q^n), but it is not obvious whether the dependence on q can be removed. The Fortuin–Kasteleyn model from statistical physics generalizes both, and a breakthrough result of Björklund, Husfeldt, Kaski, and Koivisto [FOCS 2006, STOC 2007, FOCS 2008] shows how to evaluate it with the inclusion–exclusion method in time O*(2^n). It is an intriguing question how far these techniques can be extended.
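
To give a flavour of the technique, here is an illustrative Python sketch, restricted to the q-coloring special case; the Björklund–Husfeldt–Kaski–Koivisto result for the full Fortuin–Kasteleyn model is considerably more general than this, and the function name and input format are my own choices.

```python
def is_q_colorable(n, edges, q):
    """Decide q-colorability of an n-vertex graph in O*(2^n) time, in the
    spirit of the inclusion-exclusion approach. Vertices are 0..n-1,
    edges is a list of pairs, and vertex subsets are bitmasks."""
    nbr = [0] * n
    for u, v in edges:
        nbr[u] |= 1 << v
        nbr[v] |= 1 << u

    # s[X] = number of independent sets (including the empty set)
    # contained in the vertex subset X.
    s = [0] * (1 << n)
    s[0] = 1
    for X in range(1, 1 << n):
        v = (X & -X).bit_length() - 1   # lowest-index vertex in X
        # Either v is left out, or v is taken and its neighbours removed.
        s[X] = s[X & ~(1 << v)] + s[X & ~((1 << v) | nbr[v])]

    # Inclusion-exclusion: the number of ways to cover V with q
    # independent sets equals sum over X of (-1)^(n-|X|) * s[X]^q,
    # and the graph is q-colorable iff this count is positive.
    count = 0
    for X in range(1 << n):
        sign = -1 if (n - bin(X).count("1")) % 2 else 1
        count += sign * s[X] ** q
    return count > 0

print(is_q_colorable(3, [(0, 1), (1, 2), (0, 2)], 3))  # True: a triangle is 3-colorable
print(is_q_colorable(3, [(0, 1), (1, 2), (0, 2)], 2))  # False
```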

Recently, the color-coding technique of Alon, Yuster, and Zwick [JACM 1995] has been extended by introducing algebraic structures that yield faster fixed-parameter tractable algorithms. Koutis [ICALP 2008] uses “vector coding” to obtain a randomized O*(2^(3k/2)) algorithm for the k-Path problem, and Williams [IPL 2009] improves this to O*(2^k). Such group-algebraic algorithms are a promising direction for further exploration.
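
For contrast with the algebraic methods, here is a hedged Python sketch of the original color-coding idea of Alon, Yuster, and Zwick for detecting a path on k vertices. Per the usual analysis it runs in time roughly O*((2e)^k), not the O*(2^k) of Williams’s algebraic algorithm; the function name and input format are illustrative.

```python
import math
import random

def has_path_on_k_vertices(n, adj, k, trials=None):
    """Randomized color-coding test for a simple path on k vertices,
    in the spirit of Alon, Yuster, and Zwick. adj[v] lists the neighbors
    of vertex v (vertices are 0..n-1). Each trial succeeds with
    probability at least k!/k^k, so roughly e^k independent trials give
    constant error probability."""
    if trials is None:
        trials = int(math.e ** k) + 1
    for _ in range(trials):
        color = [random.randrange(k) for _ in range(n)]
        # reach[v]: bitmasks of color sets used by some colorful path
        # ending at v; initially the one-vertex paths.
        reach = [{1 << color[v]} for v in range(n)]
        for _ in range(k - 1):           # extend paths one vertex at a time
            nxt = [set() for _ in range(n)]
            for u in range(n):
                for mask in reach[u]:
                    for v in adj[u]:
                        if not mask & (1 << color[v]):
                            nxt[v].add(mask | (1 << color[v]))
            reach = nxt
        if any(reach[v] for v in range(n)):  # some colorful k-vertex path
            return True
    return False

# Example: a path graph on 6 vertices contains a path on 4 vertices.
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
print(has_path_on_k_vertices(6, adj, 4))  # True (with high probability)
```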

Branch-and-reduce is one of the most frequently used methods for solving NP-hard problems, but current analyses of such algorithms may be overly pessimistic. Fomin, Grandoni and Kratsch [ICALP 2005, SODA 2006] used a measure-and-conquer framework to establish simple and fast algorithms for the Minimum Dominating Set and Maximum Independent Set problems. By now, measure-and-conquer analysis has had an enormous impact. This and related methods, such as Eppstein’s quasiconvex analysis [SODA 2004] and Scott and Sorkin’s linear programming method [RANDOM 2005], have become indispensable, but there remains a need for further improvements.
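
As a toy illustration of the kind of recursion that measure-and-conquer analyses are designed to bound, here is a simple Python branch-and-reduce sketch for Maximum Independent Set. It is not the Fomin–Grandoni–Kratsch algorithm; the reduction and branching rules are deliberately minimal, and the interface is my own choice.

```python
def mis_size(adj):
    """Toy branch-and-reduce algorithm for Maximum Independent Set.
    adj maps each vertex to the set of its neighbors; returns the size
    of a maximum independent set."""
    if not adj:
        return 0
    # Reduction rule: a vertex of degree at most 1 belongs to some
    # maximum independent set, so take it and delete its closed neighborhood.
    for v, nb in adj.items():
        if len(nb) <= 1:
            removed = nb | {v}
            rest = {u: adj[u] - removed for u in adj if u not in removed}
            return 1 + mis_size(rest)
    # Branching rule: pick a vertex of maximum degree; either discard it,
    # or take it and discard its whole closed neighborhood.
    v = max(adj, key=lambda u: len(adj[u]))
    without_v = {u: adj[u] - {v} for u in adj if u != v}
    closed = adj[v] | {v}
    without_nbhd = {u: adj[u] - closed for u in adj if u not in closed}
    return max(mis_size(without_v), 1 + mis_size(without_nbhd))

# Example: a 5-cycle has maximum independent sets of size 2.
cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(mis_size(cycle5))  # 2
```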

Faster algorithms, notably for Maximum Independent Set, have resulted from computer-produced graphical reductions and case analysis. Can these automated techniques be put on a more general theoretical level, and improved? Can similar automation be applied to logic-based branching rules such as the “clause learning” of Kulikov and Kutzkov [CSR 2007]? What about lower bounds on such local methods?

Exponential-time and other approaches may be combined. Scott and Sorkin’s [CPC 2006] average-case analysis of an exponential-time algorithm shows that Max 2-CSP is solvable in expected linear time on random constraint graphs below the giant-component threshold. It would be interesting to obtain similar results for other problems. Cygan, Kowalik, Pilipczuk and Wykurz [2008] explore exponential-time approximation algorithms that trade off running time against approximation ratio for various problems, an approach well worth further investigation. Hybrid algorithms, introduced by Vassilevska, Williams and Woo [SODA 2006], are exemplified by Minimum Bandwidth: the fastest exact algorithm known takes time O*(5^n) and the best polynomial-time approximation ratio is roughly O(log^3 n), but there is a hybrid algorithm that, depending on the input, produces either the exact minimum in time roughly O*(4^n) or an O(log^2.5 n) approximation in polynomial time.

Some other approaches merit mention. Horowitz and Sahni’s O*(2^(n/2)) Knapsack algorithm [JACM 1974] is an early example of reducing a hard problem to one that is easy but exponentially large. Williams’ O*(1.74^n) algorithm for Max 2-CSP [ICALP 2004] is the only nontrivial algorithm known for dense instances. Koivisto’s ring extension of this algorithm [IPL 2006], the Tutte and k-Path work mentioned above, and Scott and Sorkin’s ring extension of other Max 2-CSP algorithms [TALG 2009] show the power of algebraic approaches.
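
Here is an illustrative Python sketch of the meet-in-the-middle idea behind Horowitz and Sahni’s algorithm, applied to 0/1 Knapsack: split the items into two halves, enumerate the 2^(n/2) subsets of each half, and combine the two lists by sorting and binary search. The interface is my own choice for illustration.

```python
from bisect import bisect_right
from itertools import combinations

def knapsack_mitm(items, capacity):
    """Meet-in-the-middle 0/1 knapsack in O*(2^(n/2)) time, in the spirit
    of Horowitz and Sahni. items is a list of (weight, value) pairs;
    returns the maximum value achievable within the weight capacity."""
    def enumerate_half(half):
        sums = []
        for r in range(len(half) + 1):
            for combo in combinations(half, r):
                w = sum(x[0] for x in combo)
                v = sum(x[1] for x in combo)
                if w <= capacity:
                    sums.append((w, v))
        return sums

    left = enumerate_half(items[: len(items) // 2])
    right = enumerate_half(items[len(items) // 2 :])

    # Sort the right half by weight and record running maxima, so that
    # best[i] is the best value among the i+1 lightest right subsets.
    right.sort()
    weights, best, top = [], [], 0
    for w, v in right:
        top = max(top, v)
        weights.append(w)
        best.append(top)

    # For each left subset, binary-search for the heaviest compatible
    # right subset; the running maxima give the best value among them.
    answer = 0
    for w, v in left:
        i = bisect_right(weights, capacity - w)
        if i > 0:
            answer = max(answer, v + best[i - 1])
    return answer

# Example: items as (weight, value) pairs, capacity 10.
items = [(5, 10), (4, 40), (6, 30), (3, 50)]
print(knapsack_mitm(items, 10))  # 90: take the items of weight 4 and 3
```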

Despite these algorithmic successes, there may be limits to what is possible. Impagliazzo, Paturi and Zane [FOCS 1998] initiate a complexity theory of exact algorithms, using subexponential-time Turing reductions. They introduce the Exponential-Time Hypothesis (ETH), viz., that the best exact algorithm for 3-SAT has exponential complexity 2^(cn) with c > 0. Assuming ETH, they prove that the sequence of best exponents c_k for k-SAT must be increasing [CCC 1999]. Using similar techniques, Traxler [IWPEC 2008] shows that for 2-CSPs over k-valued variables, assuming ETH, any algorithm’s running time must depend strongly on k; this is in sharp contrast to the earlier-mentioned result that k-coloring can be solved in time O*(2^n), independent of k. These results call for a vigorous investigation of the complexity-theoretic limitations of exact algorithms.

The authors are Dieter Kratsch, Ramamohan Paturi, Gregory Sorkin, and me.

Organising a Dagstuhl Seminar (Dagstuhl 10441)

Two years ago, at the Dagstuhl seminar 08431: Moderately Exponential Time Algorithms, I was asked to join Dieter Kratsch, Gregory Sorkin, and Ramamohan Paturi in proposing a new seminar for 2010.

Here’s how that works.

Dagstuhl expects a written proposal for the seminar; see Dagstuhl’s submission guidelines.

By February 2009, we had an early draft of the proposal ready, and we polished it over the following weeks. The most rewarding part was writing the scientific motivation, which turned out to be a really good overview of the field of exponential time computation as of Winter 2008. I made a separate blog post of it (see Overview of Exponential Time Algorithms, Early 2009, above).

[Figure: Total number of emails in my inbox in 2009 and 2010 in connection with organising Dagstuhl Seminar 10441, November 2010.]

The other big task is the list of people to invite. Dagstuhl seminars are by invitation, and a preliminary guest list is submitted by the organisers together with the proposal. Dagstuhl has room for 50 people or so, and we were asked to concoct a non-binding list of 80 potential invitees.

We submitted our proposal on time, in early April. I can count some 90 emails between the organisers in my inbox for that part of the process. After the summer of 2009, we got a positive reply from Dagstuhl, together with dates (November 2010).

Next task: assemble the actual list of 80 invitees, so that the Dagstuhl office could send out the invitations roughly one year in advance. This list also had to be partitioned into two rounds of 60 and 20 people, respectively. Dagstuhl then sent out the first 60 invitations, drawing on the remaining 20 little by little as refusals came back. All communication was handled conveniently by the very professional Dagstuhl staff, who regularly kept us up to date about the invitation status.

Our hope was that everybody on our list of 80 would eventually be invited, but this turned out to be far from the case. Apparently, the acceptance rate was very high, leaving many very good people uninvited.

Dagstuhl-related emails in late 2009: maybe 20. Letters of invitation arrived in people’s inboxes in January 2010. Yes, that’s physical mail. It’s like the Internet, but made out of trees.

In the first half of 2010, I count 52 emails between the organisers, mainly about individual guests, such as people who hadn’t replied even though they had told somebody over coffee that they’d surely come. By July, we seemed to have 49 acceptances, at which point Dagstuhl would not invite any more people.

Most of the remaining organisation tasks, again, were handled by Dagstuhl: Setting up a web page for Dagstuhl Seminar 10441, sending out travel information, etc.

Among the organisers, we started to discuss the seminar format: how many talks, how to select and schedule them, and so on, leading to a respectable 68 more emails. Most notably, the very last days before the meeting became hectic because of a number of last-minute cancellations.

The seminar itself was stress-free, and I really enjoyed myself. There was some extra organising work with respect to scheduling, arranging an excursion, etc., but for almost everything one can rely fully on the very experienced Dagstuhl environment. It feels extremely reassuring that every little situation that comes up has been handled countless times before, be it by the Dagstuhl staff, the local taxi company, or the restaurant owner arranging the wine tasting.

After the meeting, we wrote a brief thank-you note to the attendees. What remains now is some wrap-up work: a final report, list of open problems, etc.

Oh, and think about the next proposal.

The Deafening Silence of Truth

Ryan Williams’s breakthrough result that ACC^0 does not contain NEXP was announced on Richard Lipton’s blog on 8 November 2010.

This is exactly 3 months after a superficially similar announcement, Vinay Deolalikar’s attempt to show that P does not contain NP, which created unprecedented hubbub and excitement on the internet. (See my previous post P vs NP vs PR.)

Few researchers believed Deolalikar’s manuscript. By contrast, the result of Williams has been immediately welcomed with universal astonishment and respect. Our jaws collectively dropped. But they dropped in silence:

[Figure: Number of comments on Richard Lipton’s blog on Deolalikar’s and Williams’s manuscripts, three days after their respective announcements.]

Internet attention does not seem to be a good way of measuring validity or relevance. There may be another message here about the philosophy of science: Schopenhauer could hardly have been more wrong when he said,

All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident.

Now, back to reading Ryan’s paper. Silently.

Subexponential time run (Dagstuhl 10441)

Returning from the subexponential run, wet and happy, from left to right: Mike Saks, Edward Hirsch, Paolo Codenotti, me, Mohan Paturi, and Marek Cygan. Photo by Hans Bodlaender.

I am spending the week at the Dagstuhl meeting 10441: Exact Complexity of NP-hard Problems, which I am also organising.

The site, Schloss Dagstuhl (Dagstuhl @ Wikipedia), is a wonderful place with lots of small details aimed at computer scientists. For example, the street address is Konrad Zuse-Straße. (A full list of Konrad Zuse-Straßen in Germany is maintained by his son, Horst Zuse.)

More whimsically, the marked running trail around Dagstuhl is called n^2 [PDF]. As Dagstuhl attendees can attest, the route is pretty difficult despite its relatively short length of 5 kilometers. However, I felt that for a workshop on exponential time computation, polynomial “running times” are not ambitious enough, so I organised a longer run, based on the Bardenbacher-Fels-Weg (13–15 kilometers, depending on which map you consult). This hiking trail is currently labelled WA4 by the Wadern tourist authorities, but I am sure that if I get the entire Max-Planck-Gesellschaft behind me, I can get it renamed to exp(log^2 n), or something similarly attractive.

In spite of a steady drizzle and competition from attractive alternative hiking and biking outings, five attendees joined me for a very satisfying run. Maybe next time we’ll arrange a half-marathon. Or maybe not.