Our standard way of explaining computational hardness is cognitively contradictory (in wanting to establish a tool for hardness in a realistic model it introduces a concept of easiness in a fictional model), wastes attention by needlessly strengthening a perfectly good scientific hypothesis, and undermines the greatest intellectual triumph of our field by advertising it as an open problem. In short, NP does not belong to my algorithms design class.
Hear me out. I’m thinking aloud here, and may be wrong.
I’m not saying “NP is Wrong” in the sense that “Flat Earth Theory is Wrong.” Instead, I am saying it in the sense that “Pi is Wrong”: NP is a failure of communication, a bad explanation, a mistake of teaching. So what I’m saying is “we shouldn’t teach NP.”
This is a bold claim, and I’m stating it in the starkest way possible. I will defend the position that the main intellectual export of computer science, our crown jewel, the Theory of NP-Completeness, does not belong in the introductory algorithms design class.
There is probably a less brutal and inflammatory way of putting this, maybe “Reductions before completeness,” or “it’s in the hardness, stupid!” However, I will make no attempt at that kind of conciliatory rhetoric. My aim here is not to convince anybody, just to make my position clear, if only to myself.
The traditional introduction to computational hardness introduces the class NP, either via nondeterminism or certificate verification, establishes the completeness of Satisfiability for this class under polynomial-time reductions (the Cook–Levin theorem), and proceeds to show the NP-completeness of a number of other problems by establishing membership in NP and reduction from another NP-hard problem. The ordering of this introduction may vary, as may the adherence to mathematical formalism, but the concepts traditionally belong to the same lecture (or block of lectures), the same book chapter, the same exam question.
This is a pedagogical crime of the highest order. Not because of formalism or difficulty. I happen to like formalism, and difficulty is often necessary. I am also not saying “we shouldn’t teach computational hardness.” On the contrary! It’s the greatest story we have.
Instead, the mistakes of the NP-based narrative are that
- it conflates two opposing qualities: a notion of computational easiness (NP) and a notion of computational hardness (reduction from Satisfiability);
- it conflates a mathematical curiosity (the hard fact that NP is closed under reduction and Satisfiability is complete for this class) with a useful scientific hypothesis (the hypothesis that Satisfiability takes superpolynomial time);
- it conflates the greatest triumph and most useful tool of theoretical computer science (reduction to establish hardness) with a meek admission of epistemological uncertainty (the fact that the status of P vs. NP is an unresolved mathematical question).
And all of this we communicate in a week! (I can think of even more mistakes, but these are known: terminology, abbreviations, focus on worst-case, etc. These are valid criticisms, but not what I want to write about here.)
The first mistake (hard/easy cognitive dissonance)
The first mistake is cognitive. If we want to explain hardness, we shouldn’t spend time on a model for easiness. And certainly not a weird, novel, strange model! The cognitive load of understanding nondeterminism or verification-as-opposed-to-computation is enormous; it’s a completely new way of thinking about problem solving, extremely counter-intuitive.
If you pay attention, I can explain NP to you very well. But your brain will be busy absorbing this for at least a week. I can give you exercises about constructing a polynomial-time verifier. But I won’t have your attention for the story I really wanted to tell you.
Not only because NP is a hard class to understand. But because I really wanted to talk to you about computational hardness. I wanted to tell you about hard problems in a real and completely natural model of computation, namely polynomial, deterministic time. Why on Earth did I think it was a good idea to start out with easy problems in a fictional and contrived model of computation?
(I am not saying “Nondeterministic Turing machines are difficult and boring and we shouldn’t teach them!” I can teach NP just fine by appealing to oracles or verifiers or god-like computers, in a perfectly appealing manner.)
Membership in NP is a notion of easiness in an unnatural model. But I wanted to teach about hardness in a natural model. This discrepancy puts unnecessary cognitive load on my conversation partner. It also invites extreme misunderstandings. NP-hardness is easily conflated with “something about nondeterministic computation,” and legion are the physicists who refer to hard problems as “NP-problems.” That is not their mistake, it is mine.
How I ever thought this was a good idea is beyond me.
The second mistake (maths/science)
I posit that the Claim
“Satisfiability does not admit a worst-case polynomial-time algorithm”
is a hypothesis of the highest scientific dignity. It stands with pride next to “Vases fall down when I let go” or “information cannot be transferred back from protein to either protein or nucleic acid.” It is precise, relevant, falsifiable, consistent with observation, and incredibly useful as an explanation (in casu, my inability to find a polynomial-time algorithm for Hamiltonicity). It has enormous reach.
Unlike most other scientific hypotheses, it also has a rather pedestrian and accessible mathematical theory behind it, where its falsity would imply a collapse of something called “complexity classes”. If you are a formalist, a mathematician, or a Platonist, you can investigate that theory to strengthen your “belief” in the capital-T “truth” of the hypothesis. It’s quite cute. But it’s not really important for the design of algorithms.
The stance I just described is the scientific stance. “Scientific” in the tradition of Karl Popper. It is not the mathematical stance.
I claim that the scientific stance is vastly superior. (Unless, of course, you are explaining computational complexity or the theory of computation to an audience of mathematicians. Then the scientific stance would be a disastrous mistake. My point here is that the mathematical stance is a disastrous mistake of the same magnitude in all other settings.)
- First, the scientific stance suffices. Everything we want to communicate about computational hardness can be explained using the scientific stance. Elevate the Claim to scientific hypothesis, and you’re done. No need even to introduce NP. Don’t mention the class! (I mentioned it once, but I think I got away with it.)
- Second, the scientific stance is rhetorically stronger. The fact that the Claim has resisted all our attempts at disproving it speaks with clarity and authority. The fact that the Claim happens to be mathematically consistent with a strange concept in an obviously fictional model of computation is iffy.
- Third, the scientific stance is eager to explain. Here’s a suggestion for a modern hardness module for your algorithms class: Give a catchy name to the Claim (maybe NPTH, the Nonpolynomial Time Hypothesis). Prove 3-Coloring and Hamiltonicity hard under NPTH, and show that the O(tn) dynamic programming algorithm for Subset Sum we saw recently in class can’t be dramatically improved. Then introduce the obvious strengthening of the Claim, SETH (the Strong Exponential Time Hypothesis). Show that the quadratic-time dynamic programming algorithm for Regular Expression Matching from a few weeks ago is optimal under that hypothesis. Mind = blown. Maybe introduce ETH. And so on. See how exciting this is? So: Do teach hypotheses, and do teach reductions. Don’t teach verification and completeness.
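For concreteness, here is a minimal sketch of the O(tn) Subset Sum dynamic program the module refers to (the function name and encoding are my own choices):

```python
def subset_sum(values, t):
    """Decide whether some subset of `values` (positive ints) sums to exactly t.

    Runs in O(t * n) time for n = len(values): one Boolean table entry
    per achievable sum, updated once per input value.
    """
    reachable = [True] + [False] * t  # reachable[s]: can some subset sum to s?
    for v in values:
        # Sweep backwards so each value is used at most once.
        for s in range(t, v - 1, -1):
            reachable[s] = reachable[s] or reachable[s - v]
    return reachable[t]
```

The point of the lecture would then be that, under SETH-style hypotheses, this pedestrian-looking table cannot be dramatically improved.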
The Claim is a grade-A scientific hypothesis that explains stuff. This is a good story. In contrast, the mathematical theory of polynomial-time reduction and completeness is a footnote in that exciting story, at best. Scientific hypotheses do not derive their value from the mathematical formalism behind them. And since we like math, there is plenty of exciting math in the theory of algorithms that we should talk about instead. The opportunity cost of completeness is very high, and it adds zero value to the status of the Claim as a scientific hypothesis (in the Popperian sense).
If you are not a Popperian the above section will do nothing for you. Remember that I don’t write to convince you. I write to explain my position.
The third mistake (triumph/open question)
The third objection is the weakest. It’s about storytelling or advertising.
The theory of evolution by chance and selection, as laid out in Darwin’s On the Origin of Species, is rightly viewed as the best idea anyone has ever had.
Imagine if, in 1860, it were advertised as “one of the biggest open questions in biology,” just because it lacked a formal proof.
The fact that “P vs. NP” is an open mathematical question is the least interesting thing we can say about the theory of computational hardness. It does not belong in your algorithms class because it is an irrelevant footnote and a distraction. Worse, it sends the exact wrong message: as if we invite our audience to be wary of the Claim. As if it were in doubt. As if its epistemic status were somehow shaky.
For comparison, the Law of Gravity does not derive its importance from the fact that it can be articulated in mathematics, much less the existence of a formal proof of its truth. In fact, the Law of Gravity cannot be articulated in mathematics. Nor would we be able to prove it, since it is in fact false. Because Einstein. But that is not the most important thing I should tell you if my task was to convince you of the Law of Gravity. Because you are about to let go of a vase.
What I should have done
The Claim that “Satisfiability does not admit a worst-case polynomial-time algorithm” is a perfectly valid scientific hypothesis of enormous explanatory value and reach. It is both nice and Popperian; maybe we should call it the “Nice Popperian” hypothesis about computation, abbreviated to the NP-hypothesis. Or maybe the Nonpolynomial Time Hypothesis (NPTH), in symmetry with the Exponential Time Hypothesis (ETH) or the Strong Exponential Time Hypothesis (SETH). These hypotheses form a great basis for a modern, interesting, exciting introduction to hardness of computation. Such an introduction would be
- about hardness only. All reductions go from a known hard problem (to the left of the blackboard, to the left of the less-than symbol) to a problem whose hardness I want to establish. Various hypotheses describe our current best understanding of Satisfiability, the drosophila of the theory of computation, which always appears on the left. There is only a single model of computation (deterministic), and it’s the same model used in the rest of the class, so it needn’t be introduced.
- a sterling example of the scientific method. Not of engineering or mathematics – both of those are great disciplines or epistemological stances, and we have plenty of classes (software engineering, theory of computation) that adhere to those values.
- speaking with clarity, authority, and pride about one of the greatest and most useful intellectual tools of computer science: to argue about hardness via reduction.
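To make the left-to-right shape of such an argument concrete, here is a hedged sketch of one classical reduction in this style, from CNF-Satisfiability (on the left) to Independent Set (on the right); the construction is Karp’s, but the names and the literal encoding are my own choices:

```python
from itertools import combinations

def sat_to_independent_set(clauses):
    """Reduce a CNF formula to an Independent Set instance.

    `clauses` is a list of clauses; each clause is a tuple of literals,
    where a literal is a nonzero int and a negative sign means negation
    (DIMACS style). Returns (vertices, edges, k) such that the formula
    is satisfiable iff the graph has an independent set of size
    k = number of clauses.
    """
    # One vertex per literal occurrence, tagged with its clause index.
    vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
    edges = set()
    for u, v in combinations(vertices, 2):
        same_clause = u[0] == v[0]    # clique inside each clause: pick one literal
        complementary = u[1] == -v[1]  # x and not-x can never both be picked
        if same_clause or complementary:
            edges.add((u, v))
    return vertices, edges, len(clauses)

def has_independent_set(vertices, edges, k):
    """Brute-force check, for illustration only (exponential time)."""
    return any(
        all((u, v) not in edges and (v, u) not in edges
            for u, v in combinations(subset, 2))
        for subset in combinations(vertices, k))
```

The reduction itself runs in polynomial time; only the illustrative checker is brute force. A fast Independent Set solver plus this translation would yield a fast Satisfiability solver, contradicting NPTH, which is exactly the shape of argument the module teaches.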
At no point in this should I define the class NP, much less explain what it stands for, and at no point should I introduce nondeterminism or verification-of-certificates. The word completeness is banished from the discourse. The Cook–Levin theorem is never stated, much less proved. “P vs. NP” can’t even be formulated, and its epistemic status is out of scope.
Depending on the course or the audience, these topics might be nice-to-see for a later lecture, maybe mainly to open up the rich literature to the 10% of the students who care. But there are plenty of other things I’m also excited about, so maybe I prefer to speak about the Price of Anarchy or streaming algorithms or convolution using FFT instead. Time and attention are short, and the field of algorithms is full of exciting stuff.
Disclaimer: #notalltextbooks. Skiena’s The Algorithm Design Manual is an example.