Overview of Exponential Time Algorithms, Early 2009

This is an overview of the state of research in exponential time algorithms, which we submitted as the motivation for a Dagstuhl seminar in April 2009. See also my longer organiser’s report below.

The seminar, 10441, is now over, but I think the proposal is a competent snapshot of a rapidly evolving field.

A decade before NP-completeness became the lens through which Computer Science views computationally hard problems, beautiful algorithms were discovered that are much better than exhaustive search, for example Bellman’s 1962 dynamic programming treatment of the Traveling Salesman problem and Ryser’s 1963 inclusion–exclusion formula for the permanent.
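
To make the contrast with exhaustive search concrete, here is a minimal Python sketch of the Bellman (Held–Karp) dynamic program, which solves the Traveling Salesman problem in time O*(2^n) instead of the roughly n! cost of trying all tours. The function name and input format are my own, for illustration only.

```python
from itertools import combinations

def held_karp(dist):
    """Cost of a shortest tour through all cities, in O(2^n n^2) time.

    dist[i][j] is the travel cost from city i to city j; the tour
    starts and ends at city 0.
    """
    n = len(dist)
    # best[(S, j)]: cheapest path from city 0 through exactly the
    # vertex set S (a bitmask over cities 1..n-1), ending at j in S.
    best = {(1 << (j - 1), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = sum(1 << (j - 1) for j in subset)
            for j in subset:
                best[(S, j)] = min(best[(S ^ (1 << (j - 1)), k)] + dist[k][j]
                                   for k in subset if k != j)
    full = (1 << (n - 1)) - 1  # every city except the start
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))

# A 4-city example; the optimal tour 0 -> 2 -> 3 -> 1 -> 0 costs 21.
d = [[0, 2, 9, 10],
     [1, 0, 6, 4],
     [15, 7, 0, 8],
     [6, 3, 12, 0]]
print(held_karp(d))  # 21
```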

Today we know that all NP-hard problems are unlikely to admit polynomial-time algorithms, yet NP-hard problems must be solved, in some way, for everything from manufacturing and supply-chain optimization to railroad timetabling. Approaches include approximation algorithms, heuristics, average-case analysis, and exact exponential-time algorithms: all are essential. While all NP-complete problems are equivalent from the polynomial-time perspective, their exponential-time properties vary widely. Which problems are easiest or hardest? What are the promising algorithmic techniques? What are the connections with parameterized complexity? How fast an algorithm can we hope to find? What about complexity lower bounds?

Work addressing such questions, from both the algorithmic and the complexity-theoretic side, has become known as exact complexity. Despite significant progress, the area is still fairly new and many fundamental problems remain open. Where the approximation algorithms field, for example, has unifying algorithmic techniques such as LP rounding and semidefinite programming, and hardness techniques from probabilistically checkable proofs and the Unique Games conjecture, much exact algorithms work is still specific to a particular NP-complete problem: powerful unified techniques are just emerging.

Exciting new results and directions have been established by scientists on several continents, with important contributions coming from young researchers such as Williams and Traxler. The purpose of this workshop is to accelerate developments in this late-blooming field. Below, we outline several new results and promising directions.

The Tutte polynomial of an n-vertex, m-edge graph can trivially be evaluated in time O*(2^m), but no vertex-parameterized algorithm is obvious. The Potts (q-coloring) partition function can trivially be evaluated in time O*(q^n), but it is not obvious whether one can remove the dependence on q. The Fortuin–Kasteleyn model from statistical physics generalizes both, and a breakthrough result of Björklund, Husfeldt, Kaski, and Koivisto [FOCS 2006, STOC 2007, FOCS 2008] shows how to evaluate it using the inclusion–exclusion method in time O*(2^n). It is an intriguing question how far these techniques can be extended.
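
A minimal sketch of the inclusion–exclusion idea at the heart of these results, specialized to k-colourability: the number of k-tuples of independent sets whose union is the whole vertex set equals the sum over all S ⊆ V of (-1)^{n-|S|} a(S)^k, where a(S) counts the independent sets contained in S, and the graph is k-colourable exactly when this sum is positive. The Python below is my own illustration, not the authors’ code; it computes all a(S) with a simple subset recurrence.

```python
def colorable(n, edges, k):
    """Decide k-colourability by inclusion-exclusion, in O*(2^n) time.

    Counts the k-tuples of independent sets covering all of V:
    sum over S of (-1)^(n-|S|) * a(S)^k, where a(S) is the number
    of independent sets inside S; positive iff k-colourable.
    """
    adj = [0] * n
    for u, v in edges:
        adj[u] |= 1 << v
        adj[v] |= 1 << u

    # a[S] = number of independent sets contained in the subset S.
    # Recurrence on the lowest vertex v of S: either v is not used,
    # or v is used and its neighbours are excluded.
    a = [1] * (1 << n)  # the empty set is independent
    for S in range(1, 1 << n):
        v = (S & -S).bit_length() - 1
        rest = S & ~(1 << v)
        a[S] = a[rest] + a[rest & ~adj[v]]

    total = sum((-1) ** (n - bin(S).count("1")) * a[S] ** k
                for S in range(1 << n))
    return total > 0

# The 5-cycle is 3-chromatic:
C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(colorable(5, C5, 2), colorable(5, C5, 3))  # False True
```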

Recently, the color-coding technique of Alon, Yuster, and Zwick [JACM 1995] has been extended by introducing algebraic structures that yield faster fixed-parameter tractable algorithms. Koutis [ICALP 2008] uses “vector coding” for a randomized O*(2^{3k/2}) algorithm for the k-Path problem, and Williams [IPL 2009] improves this to O*(2^k). Such algorithms from group algebras are a promising direction for further exploration.
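
The algebraic algorithms themselves require group-algebra machinery; what follows is a minimal Python sketch of the original color-coding idea they improve upon. Vertices are colored with k colors at random, and a dynamic program over (vertex, set of used colors) pairs searches for a “colorful” path; any fixed k-path is colorful with probability k!/k^k ≥ e^{-k} per trial. The function name and the trial count are my own choices.

```python
import random

def has_k_path(n, edges, k, trials=300):
    """Randomized color-coding test for a simple path on k vertices.

    A colorful walk uses k distinct colors, hence k distinct
    vertices, so finding one certifies a simple k-path.
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    for _ in range(trials):
        color = [random.randrange(k) for _ in range(n)]
        # reach[v]: color sets of colorful paths ending at vertex v
        reach = [{frozenset([color[v]])} for v in range(n)]
        for _ in range(k - 1):  # extend every path by one vertex
            reach = [{used | {color[v]}
                      for u in adj[v] for used in reach[u]
                      if color[v] not in used}
                     for v in range(n)]
        if any(reach):  # some colorful k-vertex path survived
            return True
    return False  # probably no k-path exists

# A path on 6 vertices has a 4-path but no 7-path:
P6 = [(i, i + 1) for i in range(5)]
print(has_k_path(6, P6, 4), has_k_path(6, P6, 7))  # True False
```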

Branch-and-reduce is one of the most frequently used methods for solving NP-hard problems, but current analyses of such algorithms may be overly pessimistic. Fomin, Grandoni and Kratsch [ICALP 2005, SODA 2006] used a measure-and-conquer framework to establish simple and fast algorithms for the Minimum Dominating Set and Maximum Independent Set problems. By now measure-and-conquer analysis has had an enormous impact. This and related methods, Eppstein’s quasiconvex analysis [SODA 2004] and Scott and Sorkin’s linear programming method [Random 2005], have become indispensable, but a need remains for further improvements.
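
For illustration, here is a toy branch-and-reduce algorithm for Maximum Independent Set in Python, with one reduction rule (a vertex of degree at most one is always in some optimum) and one branching rule (a maximum-degree vertex is either excluded, or included with its neighbourhood deleted). This is my own simplified sketch, not the Fomin–Grandoni–Kratsch algorithm; measure-and-conquer assigns non-uniform weights to the vertices to analyse exactly this kind of recursion more tightly.

```python
def mis_size(adj):
    """Size of a maximum independent set.

    adj maps each vertex to the set of its neighbours.
    """
    if not adj:
        return 0
    # Reduction: a vertex of degree <= 1 is in some optimum solution,
    # so take it and delete its closed neighbourhood.  No branching.
    for v in adj:
        if len(adj[v]) <= 1:
            return 1 + mis_size(delete(adj, {v} | adj[v]))
    # Branch on a maximum-degree vertex v: either v is out, or v is
    # in and all of its neighbours are out.
    v = max(adj, key=lambda u: len(adj[u]))
    return max(mis_size(delete(adj, {v})),
               1 + mis_size(delete(adj, {v} | adj[v])))

def delete(adj, gone):
    """The graph with the vertices in `gone` removed."""
    return {v: adj[v] - gone for v in adj if v not in gone}

# The 5-cycle has independence number 2:
C5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}
print(mis_size(C5))  # 2
```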

Faster algorithms, notably for Maximum Independent Set, have resulted from computer-produced graphical reductions and case analysis. Can these automated techniques be put on a more general theoretical level, and improved? Can similar automation be applied to logic-based branching rules such as the “clause learning” of Kulikov and Kutzkov [CSR 2007]? What about lower bounds on such local methods?

Exponential-time and other approaches may be combined. Scott and Sorkin’s [CPC 2006] average-case analysis of an exponential-time algorithm shows that Max 2-CSP is solvable in expected linear time on random constraint graphs below the giant-component threshold. It would be interesting to obtain similar results for other problems. Cygan, Kowalik, Pilipczuk and Wykurz [2008] explore exponential-time approximation algorithms trading off the running time and approximation ratio for various problems, an approach well worth further investigation. Hybrid algorithms, introduced by Vassilevska, Williams and Woo [SODA 2006], are exemplified for Minimum Bandwidth: the fastest exact algorithm known takes time O*(5^n) and the best polynomial-time approximation ratio is roughly O(log^3 n), but there is a hybrid algorithm that, depending on the input, produces either the exact minimum in time roughly O*(4^n) or an O(log^{2.5} n) approximation in polynomial time.

Some other approaches merit mention. Horowitz and Sahni’s O*(2^{n/2}) Knapsack algorithm [JACM 1974] is an early example of the approach of reducing a hard problem to one that is easy but exponentially large. Williams’ O*(1.74^n) algorithm for Max 2-CSP [ICALP 2004] is the only nontrivial algorithm known for dense instances. Koivisto’s ring extension of this algorithm [IPL 2006], the Tutte and k-Path work previously mentioned, and Scott and Sorkin’s ring extension of other Max 2-CSP algorithms [TALG 2009] show the power of algebraic approaches.
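
The Horowitz–Sahni splitting idea is easy to state in code. Here is a minimal sketch for the Subset Sum special case (Knapsack additionally tracks values, but the halving is the same): enumerate the 2^{n/2} subset sums of each half, sort one side, and binary-search for the best compatible partner. Names are mine.

```python
from bisect import bisect_right

def subset_sums(items):
    """All 2^len(items) subset sums of a list of numbers."""
    sums = [0]
    for x in items:
        sums += [s + x for s in sums]
    return sums

def best_fit(weights, capacity):
    """Largest subset sum not exceeding capacity, in O*(2^{n/2}) time."""
    half = len(weights) // 2
    left = subset_sums(weights[:half])
    right = sorted(subset_sums(weights[half:]))
    best = 0
    for s in left:
        if s <= capacity:
            i = bisect_right(right, capacity - s)  # right[i-1] fits
            best = max(best, s + right[i - 1])
    return best

print(best_fit([7, 9, 13, 5, 21, 4], 30))  # 30 = 9 + 21
```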

Despite these algorithmic successes, there may be limits to what is possible. Impagliazzo, Paturi and Zane [FOCS 1998] initiate a complexity theory of exact algorithms, using subexponential-time Turing reductions. They introduce the Exponential-Time Hypothesis (ETH), viz., that the best exact algorithm for 3-SAT has exponential complexity 2^{cn} for some c > 0. Assuming ETH, they prove that the sequence c_k of best exponents for k-SAT must be increasing [CCC 1999]. Using similar techniques, Traxler [IWPEC 2008] shows that for 2-CSPs over k-valued variables, assuming ETH, any algorithm’s running time must depend strongly on k; this is in sharp contrast to the earlier-mentioned result that k-coloring can be solved in time O*(2^n), independent of k. These results call for a vigorous investigation of the complexity-theoretic limitations of exact algorithms.

The authors are Dieter Kratsch, Ramamohan Paturi, Gregory Sorkin, and me.

Organising a Dagstuhl Seminar (Dagstuhl 10441)

Two years ago, at the Dagstuhl seminar 08431: Moderately Exponential Time Algorithms, I was asked to join Dieter Kratsch, Gregory Sorkin, and Ramamohan Paturi in proposing a new seminar for 2010.

Here’s how that works.

Dagstuhl expects a written proposal for the seminar; see Dagstuhl’s submission guidelines.

By February 2009, we had an early draft of the proposal ready, and polished it in the following weeks. The most rewarding part was writing the scientific motivation, which turned out to be a really good overview of the field of exponential time computation as of Winter 2008. I made a separate blog post of it, reproduced above.

Total number of emails in my inbox in 2009 and 2010 in connection with organising Dagstuhl Seminar 10441, November 2010

The other big task is the list of people to invite. Dagstuhl seminars are by invitation, and a preliminary guest list is submitted by the organisers together with the proposal. Dagstuhl has room for 50 people or so, and we were asked to concoct a non-binding list of 80 potential invitees.

We submitted our proposal on time, in early April. I count some 90 emails between the organisers in my inbox for that part of the process. After the summer of 2009, we got a positive reply from Dagstuhl, together with dates (November 2010).

Next task: assemble the true list of 80 invitees, so that the Dagstuhl office could send out the invitations roughly one year in advance. This list had to be partitioned into two rounds of 60 and 20 people, respectively. Dagstuhl then sent out the first 60 invitations, turning to the remaining names little by little as refusals came back. All communication was handled conveniently by the very professional Dagstuhl officers, who regularly kept us up to date about the invitation status.

Our hope was that everybody on our list of 80 would eventually be invited, but this turned out to be far from the case. Acceptance was apparently very high, leaving many very good people uninvited.

Dagstuhl-related emails in late 2009: maybe 20. Letters of invitation arrived in people’s inboxes in January 2010. Yes, that’s physical mail. It’s like the Internet, but made out of trees.

In the first half of 2010, I count 52 emails between the organisers, mainly about individual guests, such as people who hadn’t replied even though they had told somebody over coffee that they’d surely come. By July, we seemed to have 49 acceptances, at which point Dagstuhl would not invite more people.

Most of the remaining organisation tasks, again, were handled by Dagstuhl: Setting up a web page for Dagstuhl Seminar 10441, sending out travel information, etc.

Among the organisers, we started to talk about the seminar format: how many talks, how to select and schedule them, and so on, leading to a respectable 68 more emails. Most notably, the last days before the meeting became hectic because of a number of last-minute cancellations.

The seminar itself was stress-free, and I really enjoyed myself. There was some extra organising work with respect to scheduling, arranging an excursion, etc., but for almost everything one can rely fully on the very experienced Dagstuhl environment. It feels extremely reassuring that every little situation that comes up has been handled countless times before, be it by the Dagstuhl staff, the local taxi company, or the restaurant owner arranging the wine tasting.

After the meeting, we wrote a brief thank-you note to the attendees. What remains now is some wrap-up work: a final report, list of open problems, etc.

Oh, and think about the next proposal.

The Deafening Silence of Truth

Ryan Williams’s breakthrough result that ACC^0 does not contain NEXP was announced on Richard Lipton’s blog on 8 November 2010.

This is exactly 3 months after a superficially similar announcement, Vinay Deolalikar’s attempt to show that P does not contain NP, which created unprecedented hubbub and excitement on the internet. (See my previous post P vs NP vs PR.)

Few researchers believed Deolalikar’s manuscript. By contrast, the result of Williams has been immediately welcomed with universal astonishment and respect. Our jaws collectively dropped. But they dropped in silence:

Number of comments on Richard Lipton’s blog on Deolalikar’s and Williams’s manuscripts, three days after their respective announcements

Internet attention does not seem to be a good way of measuring validity or relevance. There may be another message here about the philosophy of science: Schopenhauer could hardly have been more wrong when he said:

All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident.

Now, back to reading Ryan’s paper. Silently.

Subexponential time run (Dagstuhl 10441)

Returning from the subexponential run wet and happy, from left to right: Mike Saks, Edward Hirsch, Paolo Codenotti, me, Mohan Paturi, and Marek Cygan. Photo by Hans Bodlaender.

I am spending the week at the Dagstuhl meeting 10441: Exact Complexity of NP-hard Problems, which I am also organising.

The site, Schloss Dagstuhl (Dagstuhl @ Wikipedia) is a wonderful place with lots of small details aimed at computer scientists. For example, the street address is Konrad Zuse-Straße. (A full list of Konrad Zuse-Straßen in Germany is maintained by his son, Horst Zuse.)

More whimsically, the marked running trail around Dagstuhl is called n^2 [PDF]. As Dagstuhl attendees can attest, the route is pretty difficult despite its relatively short length of 5 kilometers. However, I felt that for a workshop on exponential time computation, polynomial “running times” are not ambitious enough, so I organised a longer run, based on the Bardenbacher-Fels-Weg (13–15 kilometers, depending on which map you consult). This hiking trail is currently labelled WA4 by the Wadern tourist authorities, but I am sure that if I get the entire Max-Planck-Gesellschaft behind me, I can get it renamed to exp(log^2 n), or something similarly attractive.

In spite of a steady drizzle and the competition from attractive alternative hiking and biking outings, five attendees joined me for a very satisfying run. Maybe next time we’ll arrange a half-marathon. Or maybe not.

Alan Turing (talk)

A popular science talk about Alan Turing: father of Computer Science, intellectual giant of the 20th century, persecuted by his own government despite having helped win the war a few years earlier.

The talk describes the intellectual environment for Turing’s work, going back to the formalist endeavours of Russell and Hilbert and culminating in Turing’s definition of the “Universal Machine,” known today as the Turing machine, which marks the birth of the computer; the cryptanalytic work of Turing’s group at Bletchley Park in breaking the German cipher machine Enigma; and a brief appraisal of the role of Computer Science in today’s technological and scientific reality, establishing the importance of Turing’s work.


Golden Pointer 2010

I’m immensely proud to have received the 2010 pedagogical prize of D-sektionen, the computer engineering students’ association at Lund University.

Golden Pointer Trophy, Diploma

The award consists of a diploma and the trophy itself, a golden pointer mounted on a wooden board. The word “pointer” refers (see what I did there?) to a teacher’s pointing stick, not to the computing term, though I suspect the play on words is deliberate. Close inspection reveals that the pointer is in fact wooden, painted gold. The left part of the mounting board displays plaques with the names of previous recipients:

  • 2005: Görel Hedin
  • 2006: Jan Eric Larsson
  • 2007: Lars Bendix
  • 2009: Andrey Ghulchak
  • 2010: Thore Husfeldt

I’ve been teaching at the computer engineering programme for two years now. The course is EDAF05, an introduction to algorithm design that appears in the fourth term of the computer engineering programme, and in other programmes as well.

Here’s an excerpt from the nomination, translated from Swedish:

To us, Thore is the perfect teacher: he has the knowledge, he has the humour, and he has the ability to combine these gifts into good, pedagogical teaching. Through his burning interest in teaching and in his own subject, and through his commitment to making it interesting to the people around him, he transfers that interest to the students he teaches.

He talks openly and gladly about his research, during lectures, in breaks, and when a curious student comes up to him. He also has the ability to adjust the level at which he describes his research to suit the audience at hand. Connecting research to education the way Thore does is something we consider worth striving for, and showing that it can be done is, in our view, always timely.

Thore also includes many enjoyable elements in his teaching. To give one example, during a lecture he brought a number of students down to the front to act out how the Stable Marriage algorithm works, instead of drawing it on the board.

He also manages to get the students engaged in the lectures, which shows in that students dare to ask questions, and openly answer and speculate when Thore himself poses questions during the lectures. A practice appreciated by many students is that he implements the algorithms he covers live during lecture time, instead of using pre-written code.

Algorithms can be very abstract and hard to grasp, but by connecting the subject to practical, relevant, and fun examples, Thore manages to make it concrete in a very good way.

What can I say? I’m really happy about this; thanks to whoever nominated me. I have some ideas for improving the course further, all time-consuming, and now feel doubly inspired to implement them next time.

Teaching remains the part of my job I enjoy (and do) best, but it is unrewarding. Not in itself: the fact that somebody listens to me, and somebody else pays me, for talking about algorithms is an enormous privilege. But every other part of my job comes with external gratification: when I invest time and energy in research, I can hope to put a new publication on my CV; when I actually prepare for a departmental meeting, I might take home some administrative victory. For teaching, no such reward system exists. It is a task removed from meritocratic principles, with no external incentive for excellence.

Except, of course, for awards like this. Which is why they are not only a way to make people like me feel immensely good about themselves; they are also a valuable and influential instrument for improving the quality of higher education, provided they are handed out with care. With great power comes great responsibility, so please think long and hard about who the next recipient should be.

IPEC 2010 Accepts (with links)

Here are the papers accepted at the Fifth International Symposium on Parameterized and Exact Computation (IPEC 2010), December 13–15, 2010, in Chennai, India, with links to whatever electronic version I could find. Updates are welcome.

  • Robert Crowston, Gregory Gutin, Mark Jones and Anders Yeo. A New Lower Bound on the Maximum Number of Satisfied Clauses in Max-SAT and its Algorithmic Application [arxiv 1004.0526]
  • Frederic Havet and Leonardo Sampaio. On the Grundy number of a graph
  • Robert Ganian, Petr Hlineny, Joachim Kneis, Daniel Meister, Jan Obdrzalek, Peter Rossmanith and Somnath Sikdar. Are there any good digraph width measures? [arxiv 1004.1485]
  • Daniel Binkele-Raible and Henning Fernau. Enumerate & Measure: Improving Parameter Budget Management
  • Chris Calabro, Russell Impagliazzo and Ramamohan Paturi. On the Exact Complexity of Evaluating Quantified k-CNF
  • Thore Husfeldt and Nina Taslaman. The exponential time complexity of computing the probability that a graph is connected [arxiv 1009.2362]
  • Britta Dorn and Ildikó Schlotter. Multivariate Complexity Analysis of Swap Bribery [PDF at Schlotter’s web page]
  • Katarína Cechlárová and Ildikó Schlotter. Computing the deficiency of housing markets with duplicate houses [PDF at Schlotter’s web page]
  • Marek Cygan, Marcin Pilipczuk, Michal Pilipczuk and Jakub Wojtaszczyk. Improved FPT algorithm and quadratic kernel for Pathwidth One Vertex Deletion
  • Yixin Cao and Jianer Chen. Weighted Cluster Editing: Kernelization based on Edge-Cuts [arxiv 1008.4250]
  • Jesper Nederlof and Johan M. M. van Rooij. Inclusion/Exclusion Branching For Partial Dominating Set and Set Splitting
  • Nadja Betzler, Robert Bredereck and Rolf Niedermeier. Partial Kernelization for Rank Aggregation: Theory and Experiments [PDF at Betzler’s web page]
  • M. Praveen. Small Vertex Cover makes Petri Net Coverability and Boundedness Easier [arxiv 1009.2577]
  • Yngve Villanger. Proper Interval Vertex Deletion
  • Gregory Gutin, Eun Jung Kim, Arezou Soleimanfallah, Stefan Szeider and Anders Yeo. Parameterized Complexity Results for General Factors in Bipartite Graphs with an Application to Constraint Programming
  • Sylvain Guillemot, Christophe Paul and Anthony Perez. On the (non-)existence of polynomial kernels for Pl-free edge modification problems [arxiv 1007.4011]
  • Christian Hoffmann. Exponential Time Complexity of Weighted Counting of Independent Sets [arxiv 1007.1146]
  • Michael Fellows, Serge Gaspers and Frances Rosamond. Parameterizing by the Number of Numbers [arxiv 1007.2021]
  • M A Abhimanyu, B Radheshyam, Chintan Rao H, Venkata Koppula, Neeldhara Misra, Geevarghese Philip and M. S. Ramanujan. On the Kernelization Complexity of Colorful Motifs

ICALP 2010 Merchandise and Awards

The rest of the proof is on the back. Mugshot by Rasmus Pagh.

Merchandise

Usually I’m not a big fan of conference merchandise: university-branded ballpoint pens, T-shirts, bags, notepads… But boy, did I change my mind after ICALP 2010!

I saw little reason to unpack the coffee mug we got at the registration desk, and stuffed it in the conference bag. As did everybody else, I think. Only when I came home did I realise that the cup rattled. Was it broken? Intrigued, I opened the box, only to find a piece of chalk! The mug was a blackboard! We could have spent every coffee break eagerly scribbling proofs on our mugs.

Brilliant.

Presburger Award

The Presburger Award was presented for the first time at the Bordeaux ICALP. The award goes to a young scientist, and the 2010 recipient is Mikolaj Bojanczyk from Warsaw, for his work in automata theory, logic, and algebra.

The EATCS awards session at ICALP included this award and two more: the Gödel Prize and the EATCS Award. Each recipient gave a talk, and young Mikolaj faced the thankless task of appearing in a session with some of the best speakers in our field, Sanjeev Arora, Joe Mitchell, and Kurt Mehlhorn, all of whom gave splendid presentations.

Well, Mikolaj’s speech blew me away. Instead of explaining his research contributions, he devoted the talk to the life and work of fellow Varsovian Mojzesz Presburger (1904–1943).

Not only was this graceful, interesting, and moving, it was also extremely well presented. I say this as somebody who obsesses about presentations.

Mikolaj’s visual aids break the slides metaphor we are used to, no matter which medium—35 mm, overhead transparency, computer projector. You can see Mikolaj’s Presburger Award presentation at his website. Check out his other ICALP 2010 presentation as well: his “slides” pan and zoom about on a single, huge, dynamic canvas. I’ve tried to do something similar a few times, but this is better by orders of magnitude.