
Drawing the Grötzsch Graph

The Grötzsch graph is the smallest graph without triangles that cannot be 3-coloured.

I’ve played around with various drawings of it and found the following, which I’m quite happy with:
[Figure: my drawing of the Grötzsch graph]

It’s not the obvious layout; the articles at Mathworld and Wikipedia contain drawings that are more standard and arguably do a better job at visualising the structure of the Grötzsch graph. Another drawing, due to David Eppstein, is at Wikimedia Commons.

“My” embedding is a projection onto the plane of an embedding of the Grötzsch graph on the unit sphere:

(I really need to make an animation of this.)

The central vertex of the 2D drawing sits at the top of the sphere; the pentagram-like group of 5 vertices sits slightly above the equator, and the last five vertices (placed on the outer rim in the 2D drawing) are on the southern hemisphere. It turns out that in this embedding, the unit vectors of adjacent vertices are at angles of at least 120 degrees (equivalently, their inner products are at most −1/2), so the embedding describes a vector 3-colouring in the sense of [David R. Karger, Rajeev Motwani, Madhu Sudan: Approximate Graph Coloring by Semidefinite Programming. J. ACM 45(2): 246–265 (1998)]. This shows that the Grötzsch graph (which is 4-chromatic) can be vector 3-coloured.
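
As a sanity check, here is a minimal sketch, not the code I used, that verifies the vector 3-colouring condition for a candidate assignment of vectors; it assumes the networkx and numpy packages and uses the fact that the Grötzsch graph is the fourth Mycielski graph:

import networkx as nx
import numpy as np

def is_vector_3_colouring(G, vec):
    # Adjacent vertices must receive unit vectors whose inner product is
    # at most -1/2, i.e., whose angle is at least 120 degrees.
    for u, v in G.edges():
        a = np.asarray(vec[u], dtype=float)
        b = np.asarray(vec[v], dtype=float)
        a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
        if np.dot(a, b) > -0.5 + 1e-9:   # small tolerance for numerics
            return False
    return True

G = nx.mycielski_graph(4)   # the Grötzsch graph on 11 vertices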

I find this quite neat.

I found the 3D embedding by actually running a semidefinite programming solver to find a solution to the vector colouring problem. The output of such a program is of course just a bunch of vectors (in 11 dimensions), but it turned out that the solver was friendly enough to use only the first 3 dimensions. After some head-scratching and bumbling attempts at visualising the vectors in various mathematical software packages that I understand only feebly, I managed to make sense of the embedding so that I can now draw it neatly.
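
For concreteness, here is one way to set up that semidefinite program today, a sketch using the cvxpy modelling package rather than whatever solver I used at the time: minimise t over positive semidefinite Gram matrices with unit diagonal whose entries on edges are at most t, then factor the optimal Gram matrix to recover the vectors.

import cvxpy as cp
import networkx as nx
import numpy as np

G = nx.mycielski_graph(4)
n = G.number_of_nodes()

M = cp.Variable((n, n), PSD=True)       # Gram matrix of the unit vectors
t = cp.Variable()
constraints = [cp.diag(M) == 1]         # every vector has length 1
constraints += [M[u, v] <= t for u, v in G.edges()]
cp.Problem(cp.Minimize(t), constraints).solve()

# Factor M to recover the vectors; for the Grötzsch graph the optimum
# is t = -1/2, witnessing a vector 3-colouring.
w, V = np.linalg.eigh(M.value)
vectors = V @ np.diag(np.sqrt(np.clip(w, 0, None)))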

(I did a similar thing for the Chvátal graph, but I can’t make any sense of the resulting vector 3-colouring.)

2012 in Review: Turing Year

I had an extremely active 2012 in terms of popular science. To a large extent that is because it was Alan Turing’s centenary.

Here’s a summary of what I did:

  1. The first event was in April 2012, as part of a yearly Danish science festival (forsk.dk). I gave a talk on representations of Alan Turing in contemporary art, Alan Turing i samtidskunst, at the art school ignatius.dk in Copenhagen. Most of the material was based on a previous lecture from 2011, Alan Turing: Man as Machine, at Malmö Konsthall in connection with an exhibition by Danish artist Henrik Olesen.
  2. In June, I was honoured to be invited by the good people in Bergen to give the opening lecture, on the occasion of Turing’s birthday, in a lecture series arranged by Bergen University for the Turing centenary: Let us Calculate! – From Leibniz to Turing. The nice Turing film “Codebreaker” (turingfilm.com) was shown right before my talk, so I focussed on Alan Turing’s contribution to our intellectual history instead of the biographical material and Bletchley Park. Other speakers in the series were Christos Papadimitriou, Kenneth W. Regan, and Luc Steels; illustrious company indeed!
  3. The Bergen people managed to put a full-page article about Turing in the local paper, and I quickly stole their best idea: explain the idea behind the halting problem in terms of phone applications. This led to a quick blog entry: The Freeze App does not Exist.
  4. A few days later, Danish radio broadcast a long montage (about 40 min) about Alan Turing based on long interviews with me and with Hans Hüttel (Aalborg University). Archived at Radio 24syv. I thought this came out really well.
  5. Lund has a long tradition of weekly public science lectures, the “Technology and Science Circle”. I’m on the steering committee, and we agreed to devote the fall to Turing: “Datorernas Darwin: Alan Turing 100 år”. This led to 10 public lectures with a total of 30 appearances (physics, CS, systems biology, gender studies, etc.). Pretty cool. I gave one myself, about the algorithmic lens on information curation (PageRank, the Filter Bubble).
  6. In December, Swedish radio broadcast Alan Turing ville bygga en hjärna (“Alan Turing wanted to build a brain”), another montage based on an interview with me, on Swedish Public Radio P1, Vetandets värld; the interview itself was conducted in early fall. This, too, came out well. (Though I find my Swedish toe-curling.) Both radio appearances were very pleasant experiences and were produced by excellent, well-prepared journalists.
  7. Just before year’s end, Danish science webzine videnskab.dk published my reviews of three books related to Turing.

At the very least, there were welcome incentives to read up on Turing. Collective thanks to everybody who turned up, tuned in, emailed, commented, or stayed for a chat or beer afterwards.

What’s next? Leibniz year 2016?

Democracy in the Digital Society (talk)

Talk given at the Association of Foreign Affairs, Lund. 22 November 2012.

Abstract

Information technology impacts the foundations of democracy and the open society. For instance, algorithmic selection and filtering of information in search engines (Google) changes the forces underlying access to and control over information. Automatic and personalised news curation (the Filter Bubble) removes the open society’s prerequisite of an agora, i.e., a public forum for discourse. Electronic voting (on the Internet, or even by mobile phone) changes the fundamental ritual of democratic institutions and is open to catastrophic abuse. Collaborative editing systems (Wikipedia, the open source movement) provide technological tools and social contracts for scalable and transparent conflict resolution.

Related information

At least some of the issues covered in the presentation can be pursued using the following references:

  • Wikipedia’s article about PageRank gives a good introduction to how Google’s original algorithm for ranking web pages works, and contains lots of good references for further reading. (A toy sketch of the algorithm follows this list.)
  • The most popular source for evangelism about algorithmic news curation is Eli Pariser’s TED talk Beware online “filter bubbles”, based on his book The Filter Bubble: What the Internet Is Hiding from You, Penguin Press (New York, May 2011), ISBN 978-1-59420-300-8.
  • The meme that “Code is Law” was coined by Lawrence Lessig; see for example Code is Law: On Liberty in Cyberspace in Harvard Magazine (2000) or Lessig’s homepage.
  • For more about the idea of programming as a civic virtue, see Douglas Rushkoff’s Program or Be Programmed: Ten Commands for a Digital Age, Soft Skull Press (September 6, 2011).
  • Clay Shirky’s TED talk How the Internet will (one day) transform government mentions many of the ideas about how to make the legislative process collaborative, scalable, transparent, and accountable by using ideas from the open source community.
  • Democratic models based on graph algorithms for spectral ranking are from Boldi et al., Viscous democracy for social networks, Communications of the ACM, Volume 54 Issue 6, June 2011, pages 129-137.
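
As promised above, here is a toy sketch, in Python, of the idea behind PageRank; it is illustrative only, not Google’s production algorithm. Rank mass is repeatedly redistributed along links and mixed with a uniform “teleportation” term:

import numpy as np

def pagerank(adj, damping=0.85, iters=100):
    # adj[i, j] = 1 if page i links to page j.
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    out[out == 0] = 1                  # crude handling of dangling pages
    P = adj / out                      # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (P.T @ r)
    return r

# Tiny example: three pages, where page 2 is linked to by the others.
adj = np.array([[0, 0, 1],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
print(pagerank(adj))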

Turing’s Cathedral (review)

Turing’s Cathedral: The Origins of the Digital Universe by George Dyson

The history of the universal electronic computer at the Institute for Advanced Study in Princeton, pioneered by the leading genius of his time, John von Neumann, and driven largely by the computational requirements of building a nuclear bomb, makes for a good book. George Dyson’s Turing’s Cathedral is not that book.

At his best, Dyson writes compelling, erudite, witty, and idiosyncratic prose with a gift for poetic analogies and elegant turns of phrase. The opening of chapter XVII, on the vast computational power we have today and in the future, is a good example:

Von Neumann made a deal with “the other party” in 1946. The scientists would get the computers, and the military would get their bombs. This seems to have turned out well enough so far, because, contrary to von Neumann’s expectations, it was the computers that exploded, not the bombs.

Alas, these morsels are thinly spread.

Worse, many of them are abject nonsense.

Dyson seems to have no internal error correction mechanisms that shield him from stretching his analogies far beyond what their flimsy fabric can endure. He combines this infatuation with his own literacy with digital, mathematical, and technological illiteracy: a cavalier attitude towards the details of the technology that he aims to describe and reason about.

Search engines and social networks are analog computers of unprecedented scale. Information is being encoded (and operated upon) as continuous (and noise-tolerant) variables such as frequencies (of connection or occurrence) and the topology of what connects where, with location being increasingly defined by a fault-tolerant template rather than by an unforgiving numerical address. Pulse-frequency coding for the Internet is one way to describe the working architecture of a search engine, and PageRank for neurons is one way to describe the working architecture of the brain. These computational structures use digital components, but the analog computing being performed by the system as a whole exceeds the complexity of the digital code on which it runs. [p. 280]

(The chapter ends with a bold one-liner, “Analog is back, and here to stay.”) It’s unfair to cherry-pick particularly egregious passages from a book about a complicated subject, but the book is full of stuff like that. With the first few instances I tried to adopt a charitable stance and wanted to understand what Dyson is trying to tell me behind the noise of half-understood technical inaccuracies. But after a few chapters I just became annoyed.

Generously put, the technical passages of this book are inspiring, in the sense that I was inspired to actually find out how the ENIAC and other machines worked. Using other sources, such as Wikipedia, because Dyson’s book does very little to tell me. Dyson is clearly out of his depth here, and his confused and confusing descriptions read like the account of a blind man explaining the concept of colour. The result is dense, conceited, and just plain annoying. As Goodreads reviewer Jenny Brown puts it, “this book is fatally marred by Dyson’s failure to understand computer architecture.” Other reviewers of this book, both professional and amateur, seem to be appropriately humbled and impressed by the opaque technology, and put their confusion down to their own cognitive inadequacy. Here’s an example from judy’s review: “I stand in awe of the geniuses who envisioned and constructed the digital universe–largely because I haven’t a prayer of understanding what they did. Although written in plain English, somehow my brain will simply not grasp the concepts.” Well, neither does Dyson’s.

Our message should be that computers are simple. Instead, we get yet another book that makes technology into magic, and its inventors into Byronic heroes.

Which leads us to the biographical sketches. The book gives us rich anecdotes about many of the leading figures associated with Princeton’s computer group in the 40s and 50s. Bigelow, Ulam, Gödel — all secondary to the book’s main character, the titanic John von Neumann. Many of these descriptions are entertaining and insightful; they are clearly the best part of the book. Dyson tells much of the story in the voice of others, by quoting at length from interviews and biographies. This works well. However, even these sketches remain disjointed, erratic, meandering, and quirky. Dyson clearly has had access to unique sources at Princeton’s Institute for Advanced Study, which make for the most interesting parts of the book. Examples include endlessly entertaining log messages from ENIAC programmers. On the other hand, I really don’t need to know that Barricelli had his $1,800 stipend renewed in 1954. These random facts are many, obviously motivated by the author’s access to their sources, and never play a role in the greater narrative.

Even Alan Turing, after whom Dyson’s book is named, makes an appearance in chapter XIII. Otherwise the book has nothing to do with Turing, and little to do with universal computing, so the book’s title remains a mystery.

Von Neumann’s Cathedral would have been a fine title for this book, or better, Von Neumann’s Bomb Bay. This is a book about von Neumann, not Turing, and about his monomaniacal dream of building a computer. Von Neumann’s motivation was mainly militaristic: to leverage computational power for the simulation of complex phenomena, such as the physics of nuclear bombs. As such, the early history of computing co-evolved between the computer project in Princeton and the Manhattan project in Los Alamos. This is a story worth telling, and from that perspective, Dyson’s book is a book worth reading. Just remember to read the technological passages like the techno-babble in a Star Trek episode: it’s window-dressing, not there in order to be understood.

The final prophetic chapters about the future of computation and mankind are worthless and incompetent.

In summary, a misleadingly-titled, meandering, technologically illiterate, annoying, beautifully written, confused, and sometimes entertaining book about an important topic.

Images for Karger’s algorithm

10 repetitions of the contraction procedure. The 5th repetition finds the minimum cut of size 3.

I presented Karger’s algorithm in class on Monday, and was a bit disappointed by the available illustrations of this beautiful algorithm, and graph contractions in general, in textbooks and on the web.

I think a lot of “free” intuition about an algorithm is gained from just seeing it run, and this particular algorithm is hard to do well on the blackboard. You need a big enough instance, which you need to update several times (requiring eraser work and covering you in dust), and then you’d have to repeat the whole thing several times.

So I spent some time drawing these illustrations myself. I implemented the algorithm in Python, using the nifty networkx library. The graphs are quite well laid out using neato from the graphviz package. For each contraction I pinned the other nodes, made the midpoint of the two contracted nodes the new starting point of the new, merged node, and let neato’s embedder move that new node around. (Though I ended up laying out the initial graph by hand, which is just so much nicer.)
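
The contraction procedure itself is only a few lines. Here is a minimal sketch, not my actual code, using networkx’s multigraphs and its contracted_nodes function:

import random
import networkx as nx

def karger_cut(G):
    # One run of the contraction procedure: repeatedly pick a uniformly
    # random edge and merge its endpoints.  Parallel edges are kept, so
    # heavily connected pairs are proportionally more likely to be merged.
    H = nx.MultiGraph(G)
    while H.number_of_nodes() > 2:
        u, v = random.choice(list(H.edges()))
        H = nx.contracted_nodes(H, u, v, self_loops=False)
    return H.number_of_edges()  # size of the cut between the two supernodes

# A single run finds a minimum cut with probability at least 2/(n(n-1)),
# so we repeat and keep the best result.
G = nx.petersen_graph()
print(min(karger_cut(G) for _ in range(100)))  # the Petersen graph has min cut 3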

Networkx lets you display graphs in matplotlib, which works moderately well. I even had an animated run for the lecture. Sexy stuff. (But I’ve become somewhat soured on animations. Good, static “animations” of all the steps in an algorithm are much more instructive, a lesson I’ve learned from looking at the amazing illustrations in the textbooks of Robert Sedgewick and Kevin Wayne.)

Matplotlib doesn’t do curved edges, though. So in the end I wrote a bit more code that transforms a networkx graph (with layout) to a TikZ picture. The result is run through LaTeX, converted to SVG, and Bob’s your uncle. Or Dave, in this case. Phew.
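
That conversion is mostly string formatting. A bare-bones sketch, without the curved edges and styling of my real code, might look like this:

import networkx as nx

def to_tikz(G, pos, scale=4.0):
    # Emit a minimal TikZ picture for a graph with given node positions.
    lines = [r"\begin{tikzpicture}"]
    for v, (x, y) in pos.items():
        lines.append(rf"  \node[circle, draw] (n{v}) at ({scale * x:.2f}, {scale * y:.2f}) {{}};")
    for u, v in G.edges():
        lines.append(rf"  \draw (n{u}) -- (n{v});")
    lines.append(r"\end{tikzpicture}")
    return "\n".join(lines)

G = nx.petersen_graph()
print(to_tikz(G, nx.spring_layout(G, seed=1)))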

Uploaded to Wikimedia Commons, free to use and modify with attribution.

By the way, the Wikipedia entry for Karger’s algorithm is not in good shape. (But it does contain the images now…)

The Freeze App Does Not Exist

Turing’s proof of the undecidability of the Halting problem, explained in terms of phone applications (“apps”), 100 years after his birth.

Some apps can freeze your phone

There are apps that will freeze your phone, out of either malice or negligence on the part of the app developer. For example, the app might try to perform some undefined mathematical operation, such as dividing 5 by 0. Your maths teacher will have told you that “5/0” does not make sense, and when a modern computer encounters this expression it will indeed crash, in the worst case taking the rest of the operating system with it. Your phone will lock down or “freeze”, and there is nothing you can do about it, short of restarting the device. After that you are well advised to remove the app.

Why does it have to get that far? Why do such bad apps exist in the first place? Can’t one test for such behaviour beforehand?

The Freeze app


Imagine a really clever software company did just that. Imagine there was an app, Freeze, that would test a suspicious app and determine if it will freeze your phone or not.

Freeze might work by performing experiments on the suspicious app in a simulated environment, without actually freezing your phone, or it might deduce the app’s behaviour from inspecting its source code — it’s not important for us just how Freeze accomplishes what it promises.

Freeze is really simple to use. First, you select the suspicious app from a list of installed apps on your phone:

Then Freeze does its analysis, which might take a few minutes, and presents you with the result. Nifty.

The Paradox app


As soon as Freeze hits the store, I will build a new app, which I will call Paradox. It will take me all of 3 minutes to develop, because my own programming effort will be tiny. Paradox mainly works by calling Freeze in a nefarious fashion.

Here is the source code for Paradox (in English, not in a real programming language):

  1. Run Freeze and ask it to inspect Paradox.
  2. If Freeze returned “OK” then freeze the phone, for example by computing 5/0.
  3. If Freeze returned “Not OK” then announce “Freeze detected that Paradox freezes” and terminate gracefully.

In a real programming language, the code would look more like this:

tell application "Freeze"
    set Result to check application "Paradox"
end tell
if Result is "OK" then
    display 5 / 0
else
    display "Freeze detected that Paradox freezes"
end if
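
For the programmers, here is the same self-referential construction as a Python sketch; the stub freeze stands in for the hypothetical app which, as we are about to see, nobody can implement:

def freeze(app):
    # The hypothetical checker: returns "OK" if running app() never
    # freezes the phone, and "Not OK" otherwise.  Turing's argument
    # shows that no correct implementation can exist, hence the stub.
    raise NotImplementedError("there's no app for that")

def paradox():
    if freeze(paradox) == "OK":
        while True:      # deliberately freeze
            pass
    else:
        print("Freeze detected that Paradox freezes")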

That’s it. I’ll publish my Paradox app and invite you to download it to your phone. Go ahead, nothing bad can happen—after all, you have the Freeze app, don’t you?

Why the Freeze app can’t exist

Is the Paradox app malicious?

There are two possibilities. Either Paradox is a malicious app that freezes the phone, or it is not. We will show that each of these cases leads to a logical impossibility.

Assume Paradox does freeze the phone. Freeze will have detected this; thus, in Paradox’s code, computation proceeds from line 1 to line 3, and Paradox will announce “Freeze detected that Paradox freezes” and terminate gracefully. The phone did not in fact freeze, contradicting our assumption. Thus, this case is absurd and we can reject it.

So assume that Paradox does not freeze the phone. Freeze will determine this, so Paradox’s computation will proceed to line 2 and freeze the phone. This, again, contradicts our assumption.

Since both of these cases lead to contradictions, we conclude that our initial assumption, that an app like Freeze exists, was wrong. (This is called a proof by contradiction.) Thus, an app like Freeze cannot exist.

“There’s no app for that.”

This is Turing’s main result, from his famous 1936 paper. Instead of a phone he used a fictional computational device that we today call a Turing machine. But phones are just special cases of Turing machines.

Acknowledgements

I think this is a great way of explaining Turing’s proof to a general audience. It’s not mine. I saw it two days ago in a Norwegian newspaper, in a splendid article about Alan Turing for his centenary, written by Pål Grønås Drange and Jan Arne Telle from the CS department of Bergen University. This is a great piece of popular science writing; kudos for including a proof of the undecidability of the Halting problem!

Shouryya Ray Closing Remarks

This is a brief update on the Shouryya Ray affair mentioned in the previous post on this blog.

As of 5 June 2012, the Wikipedia article for Ray has now been deleted. If you care about how this process works, the discussion is preserved for posterity.

Two professors at TU Dresden have released a statement about the affair.

This splendid document contains a careful and readable explanation of Ray’s work, accessible to readers with a basic knowledge of differential equations. Recommended. The document includes an assessment of the quality of Ray’s contribution:

The work is without doubt exceptional for a high school student and it merits the attention that it received in a national science competition for high school students. […] Given the level of prerequisites that [Ray] had, he made great progress. Nevertheless all his steps are basically known to experts, and we emphasize that he did not solve an open problem posed by Newton.

Of course, none of the involved journalists cares about this anymore, or would be able to absorb the result. Lucky for them, a fresh variant of the meme just hit the anglosphere: this time, a 10-year-old solved a fundamental mystery of chemistry. (The background is that Sven Hovmöller, a chemistry professor at Stockholm University, has added his son as a co-author of a paper accepted to Philosophical Transactions of the Royal Society A. [Paper at PubMed].)

Shouryya Ray and the Press

Ah, the joys of editing Wikipedia.

Two days ago, I stumbled over interesting news on the web that a German high schooler, Shouryya Ray, had solved a physics problem posed by Newton that had baffled scientists for centuries. These things interest me, so I read the largely uninformative article and began hunting for more information in the amazingly chaotic font of wisdom that is the internet. What was the problem? Why had it been difficult? What was the solution?

The Wikipedia entry, normally the trusted source for that kind of question, repeated the rather vague descriptions found in the media articles. Mostly human-interest stuff about Shouryya Ray’s background, early life, ambitions, and views on mathematics. All nicely sourced to the same media articles, which were all obviously copied from each other, with slight increases in hyperbole. By now (Monday 29 May), the story has reached the so-called Science and Technology web section of Danish tabloid Ekstrabladet. It has not improved.

What did he do?

A thread at Reddit/mathematics and a thread at StackExchange mathematics seem to have mostly deduced, over the past few days, what Shouryya Ray’s result is: a pretty neat (implicit) solution to a differential equation for calculating the trajectory of a particle under certain conditions. Follow the links if you’re interested. This is awesome for a 16-year-old high school student, far above what I could have done at that age. Clearly university-level calculus.
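
If I read the threads correctly, the model is a projectile under gravity with drag quadratic in the speed. To get a feeling for the equation, here is a sketch that integrates it numerically with scipy, using illustrative constants; Ray’s point, of course, was an analytic solution, not a numerical one.

import numpy as np
from scipy.integrate import solve_ivp

g, c = 9.81, 0.1   # gravity and drag coefficient; illustrative values

def rhs(t, state):
    # state = (x, y, vx, vy); the drag force is proportional to the
    # speed squared and directed against the velocity.
    x, y, vx, vy = state
    speed = np.hypot(vx, vy)
    return [vx, vy, -c * vx * speed, -g - c * vy * speed]

sol = solve_ivp(rhs, (0, 3), [0, 0, 20, 20], max_step=0.01)
# sol.y[0] and sol.y[1] now trace out the trajectory.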

He submitted this solution to the annual German youth science contest Jugend forscht, won his regional finals for Saxony, and placed 2nd in the national finals for the category Mathematics and Computer Science. Well done.

What the media got out of it

The source of the rampant media story that followed, as far as my Google skills get me, is an article in the online version of Welt (a normally reputable German paper): 16-jähriges Mathegenie löst uraltes Zahlenrätsel. This appears under Vermischtes, the Misc/Human Interest section of the paper. The article is actually still modest on the hyperbole, claiming only that the original problem was posed by Newton (which, as far as I understand, is true) and “seither ungezählte Mathematiker zur Verzweiflung gebracht haben dürfte”. This translates to “has probably driven untold numbers of mathematicians to despair since then”. Note the subjunctive dürfte. This is carefully expressed and factually correct: the number of mathematicians confounded by the resulting differential equations is indeed ungezählt (uncounted, or untold).

The headline, probably set by somebody other than the journalist Céline Lauer, is “16 year old mathematical genius solves ancient maths problem.” This is where the story goes off into la-la land and probably where the excitement starts.

Next stop: the story hits the anglosphere, possibly in the Daily Mail online: Schoolboy ‘genius’ solves puzzles posed by Sir Isaac Newton that have baffled mathematicians for 350 years. The subjunctive, together with any restraint, has gone the way of the dodo. This is a good story, and since nobody cares about the original project, it spreads like wildfire among nonmathematical people. Many of the online sources, those that permit comments, contain isolated genuine questions from interested readers about what the result actually was, but nobody gets any wiser.

Wikipedia

What to do? After some soul-searching I did the Right Thing and put the Wikipedia article for Shouryya Ray (created three days ago, 27 May) on my watch list, whittling it down to correct, verifiable information. Wikipedia has well-thought-out policies about these kinds of things, such as guidelines for Notability of Academics and People notable for only one event.

Of course, once you start doing that, you need to see it through. I’ve been reverting edits from decent, intelligent, friendly, well-intended contributors ever since. All they want is to tell the world about this young genius. And I revert them. It puts me in a foul mood.

At least currently, the page is correct. (Who knows what happened while I typed this.) The comment thread at the Ekstrabladet article, Teenager løste 350 år gammel Newton-gåde (“Teenager solved 350-year-old Newton riddle”), gives me some consolation:

Commenter: that [newspaper] article didn’t tell me much….

Commenter: and why does it say on wikipedia that he got 2nd place???

Commenter: Wikipedia knows EVERYTHING!

[…]

Commenter: Wikipedia does know a bit more than Ekstra-bladet, wouldn’t you agree on that?

What now?

We’ll see where this ends. What I’m worried about is how this may taint the reputation of Shouryya Ray. He did absolutely nothing wrong. He’s clearly very bright and has a promising career as an academic in front of him. (We can still hope he switches to Theoretical Computer Science, can’t we?)

He fell victim to wanton publicity, to scientifically illiterate journalists, and to people who decide what’s newsworthy with their heart instead of their brain. I dearly wish this does not hurt or discourage him.

We need more people like him. And we need fewer bad journalists. And we need more people like the nerds at Reddit. Another good day for internet forums, another bad day for online versions of old media.

Now back to the life-killing task of reverting Wikipedia edits.

Update (29 May, 19:16)

A journalist worth his salt actually wrote a good description of this: 16-year-old’s equations set off buzz over 325-year-old physics puzzler, by Alan Boyle at MSNBC. He actually contacted physicists, all of whom are naturally reluctant to comment on this. He cites the Reddit thread, too! Good job, Alan. More of that.

Alas, my fears for Shouryya Ray and Jugend forscht are quite valid:

But a falling body with air resistance (however modeled) is hardly a ‘fundamental unsolved problem,’ as [Ray] seems to think.

These comments, and similar ones, pop up all over the net now. As if Ray, or his teachers, or the jurors of Jugend forscht had made any such hyperbolic claims. As if they were deluded and didn’t do their job right.

But I can’t find any questionable claims from Ray or his teachers or jurors anywhere; on the contrary, in the interviews Ray is consistently described as very modest. The rot sets in with the Welt article described above, with the dürfte and the sensationalist headline.

This is a story about journalists. Not about Ray, and not about trajectories.

Except for the downward trajectory of the Daily Mail, maybe. Cheap shot, I know.

Maths on the web solved!

Why didn’t anybody tell me?

I remember staring at the first drafts of the MathML definition almost two decades ago and obsessively clicking “reload”, waiting for the next browser version to implement it. After the web had facilitated communication around the globe in various human languages and alphabets, I eagerly awaited support for the single, universal language that made it all work: mathematics.

Alas, it didn’t happen. Since then, every few years I have revisited the current standards for maths on the web and the state of their adoption. The situation was dismal.

For years, everybody has been able to read, write, and communicate “square root of two” with all kinds of funny squiggles: “roten ur två”, “квадратный корень из двух”, “השורש הריבועי של שְׁתַּ֫יִם”, “二的平方根”. But try to put the symbol “2” under the radical √ on the web? No way. As the information age progressed, we became unable to communicate in its underlying language, because the web suddenly became the playground of nonmathematical people.

Turns out: instead of bitching about it, somebody solved it. Just like that.

The technology is called MathJax, written in JavaScript. It produces beautiful, scaling, copyable maths on the web. And it’s been around for a while, apparently. I just didn’t notice it—I seem to work too much and spend too little time procrastinating on the web.

It recently started working on Wikipedia, too! As of May 2012: log in, go to “my preferences”, choose the “appearance” tab, and at the bottom select MathJax as your default rendering engine for maths. Then go to some page like Tutte Polynomial and feast your eyes. Increase the font size just because you can!

On the MathJax page you can see which other websites support it. This looks like a very healthy system.

My only grumble: since it requires JavaScript, the solution doesn’t work with my current blogging platform. The easiest solution seems to be to host a wordpress.org blog on a separate server.

Impagliazzo’s Five Worlds on Swedish TV

In March 2012 Swedish TV recorded a popular science lecture of mine. It is now online.

I am an eager public speaker and have several crowd-pleasing talks that I can take off the shelf, say about how Google works. In March, I had the opportunity to record a lecture for Swedish TV’s educational channel Utbildningsradion. That audience is as “general” as it gets, so I wanted it to be broad and accessible. Something everybody immediately relates to. Something concrete and tangible.

What better choice than separation results for nondeterministic polynomial-time computation? That’s right! In a fit of misguided ambition I decided to make a 20-minute talk about honest-to-Karp theory of computation. P versus NP. Also, I did it in my fourth language: Swedish.

Here it is:

I haven’t had the guts to watch it yet, maybe I never will. But as far as I remember, it came out well.

Some comments:

  1. P and NP are the worst complexity class names ever invented, maybe the worst named concepts in all of science. I found it much easier to talk about Impagliazzo’s five worlds and focus on the dichotomy between Cryptomania and Algorithmica. These are meaty, sexy, fuzzy concepts rich with association and meaning. This is what we should talk about to the public. P and NP are just very cold, different, specific, nerdy instantiations of the same basic question.
  2. Puzzles are good when explaining algorithms and their absence. I have a bunch of these talks, using paper folding, sudoku, or (as in this case) tiling puzzles. It gives you something tangible in a talk, and when you talk about something as abstract as computational complexity, you need all the tangibility you can get.
  3. Science Fiction is our friend. I used various tropes from SF to ground the talk in a shared cultural space, and as an underlying theme. Here: the existence of hard artificial intelligence and the Singularity! (I always say Singularity! in a booming voice. Hence the italics and exclamation point.)
  4. The Singularity focus (sorry, Singularity! focus) was largely inspired by some transhumanistic alarmism on the blogs of Chalmers maths professor Olle Häggström (Okunnigt om artificiell intelligens och mänsklighetens framtid) and fantasy author R. Scott Bakker (The Posthuman Paradox). For some reason, the Singularity! exerts a strange attraction (ha!) on brilliant people like Olle and Scott. I simply can’t get myself worked up about it, and assume it’s simply the result of nonalgorithmic thinking that assails people who work in formal, possibly even quantitative, but nonalgorithmic settings. (Scott is a philosopher, Olle a statistician.) Alternatively, they could be right and I could be wrong. Still, it’s certainly fun to think about, and a great conduit for evangelising about algorithms to the public. The point of the talk is that the Singularity! is morally equivalent to P = NP. I think I manage to convey this point pretty well, given the constraints of the setting. In some sense, this is the maximally ambitious core of the talk. Neither Olle nor Scott will be convinced by the argument, but the production values sure outshine their puny blog entries! Proof by intimidation.
  5. I think I made the rhetorically correct decision of ending the talk by bringing the dystopias down to Earth, with a bit of “algorithms are everywhere” internet evangelism.

In summary, the Singularity! is far. But then, the time-travelling nanobots paid me to say that, promising me a simulated eternity filled with cybervixens and Nutella.

Edit to add: I have been informed that I shouldn’t say “NP-hårt” in Swedish, but “NP-svårt.” Thanks, these things are beyond my linguistic intuition.

Related: Views of Impagliazzo’s Five Worlds.