Monthly Archives: July 2014

I sent letters about Cersei’s first sexual relationship! (6)

A cryptic crossword themed after George R. R. Martin’s epic fantasy series A Song of Ice and Fire. This is about as good and polished a cryptic as I can make it. Earlier drafts were run by the Facebook group Cryptic Crossword Society, which generated useful feedback. Enjoy!

Three entries are unclued; together they form the puzzle’s solution.

[Crossword grid]

[Whole thing as a single PDF for printing]. Solution.

Across
1 Heathen ironborn plundered plowed field. (7)
5 Behaviour of a cruel and terrifying person—I therefore left leaders of Sorrowful Men. (7)
9 Sellsword captain carried goods, having a change of heart towards early retirement. (5)
10 Rev. Spooner escorted laggard to a guardsman sworn to House Lannister. (3,6)
11 Short moments filled with endless lay for relatives. (6,4)
12 Wants to sell daughter’s maidenhead and partially redeem maester. (4)
14 Ned heard two manoeuvres in despair. (11)
17 Reward tenth sworn brother of the Night’s Watch. (6,5)
19 Leave out central promise to Targaryen leader. (4)
20 A few groats and pennies holding unit back before first ride for Ser Duncan at Ashford Meadow, say. (10)
23 Dwarf returned cup-shaped metal object to unknown artisan. (9)
24 Eyrie facing the French is situated in a sheltered position. (7)
25 Hedge knight’s coat of ice is surrounded by expression of surprise. (7)

Down
1 I sent letters about Cersei’s first sexual relationship! (6)
2 Initially festive, until Gwyneth and Cletus Yronwood’s banishment. (6)
3 Return command to Eddard before soldiers become priests. (7,3)
4 Solar remodelled for knight of the Kingsguard. (5)
5 Honor-clad, destroyed seat of a noble house in the Vale of Arryn. (3,6)
6 Noble house: A stag rampant. (4)
7 At home, Tangletongue consumed close friend. (8)
8 Sam mired in a mess, attending the Drowned God. (8)
13 Atop Reek, he jerked Jaime’s junk. (10)
15 Fails to disclose how an area can be fortified? (9)
16 Maesters and the King’s Hand, for example, cover the face after short announcement. (8)
18 House Mallister’s heraldic beast, beheaded in the morning, catches sunlight. (6)
21 Staunch and kingly, after change of allegiance. (5)
22 Bard sounds talented. (4)

Personal reality (Gödel Prize 2014)

This is a quick English translation of a draft I wrote about the Threshold Algorithm of Fagin, Lotem, and Naor, which received the Gödel Prize at ICALP 2014. I’ve run this through Google Translate and ironed out the worst grammatical quirks by hand, but the result is far from idiomatic English. My apologies. A slightly abridged version of the Danish text was published in the Danish weekly broadsheet Weekendavisen 2014 #27 from July 4th 2014, in the Science section Idéer, p. 12.

Algorithms. On July 9, the Gödel Prize in theoretical computer science will be awarded at a conference in Copenhagen. This year, the prize goes to an algorithm that, among other things, enables personalized information retrieval. The existence of such efficient algorithms determines how digital reality appears to us.


Information technology has led to two quite radical changes in the way we access information. In the “old days” of the mid-nineties, information was organized like a library. The web service Yahoo! had categorized the web’s pages by topic. If you wanted to find anything about Paris, you had to navigate through a hierarchy—Geography, Europe, France, Cities, Paris—just like finding the right shelf in the public library. Google’s page ranking algorithm PageRank from 1998 was a fundamental rethinking of this centuries-old tradition. Now you could write “Paris” into a search field and immediately get a list of all web pages about Paris, ranked by quality—a little like walking up to the librarian, muttering a single word, and letting her do all the lookup work for you. Categories and hierarchies were gone. Google’s algorithm was so fundamental that today it seems inconceivable to most people to search the web in any other way than by ranked free-text search.

The next change came in the late noughties: personalized search results. The term “Paris Hilton” means something completely different to Jürgen, a German tourist in a European city, than to Ashley, an American teenager. Therefore, search engines like Google try to adjust the ranked list of answers to the individual user, based on the digital traces we have left behind. The user’s age and place of residence are enough to recommend the right hotel to Jürgen and the right celebrity to Ashley. Google’s vision is to answer the question before the user has even asked it. To stay with the library metaphor, this is like walking up to the librarian and being handed a book before you have opened your mouth.

Personalization is the information reality we live in right now. Google personalises web searches, Amazon personalises book recommendations, Netflix personalises movies, and Spotify personalises music. Dating services have always been personalised, and the large web-based dating sites have of course jumped on the personalisation bandwagon as well. Personalization has a number of interesting potential consequences, such as the creation of so-called filter bubbles in information dissemination, which WA has covered earlier [It’s all about you, WA 20, 2011].

The algorithms involved in making personalisation work thus have a direct and significant impact on how we perceive the world—for better or worse. One of them, the Threshold Algorithm for top-k queries by Ron Fagin, Amnon Lotem, and Moni Naor, will receive theoretical computer science’s most distinguished honour, the Gödel Prize, at a conference in Copenhagen.


While algorithms make more and more choices for us and know more and more about us, the public knows precious little about algorithms. However, an algorithm is neither artificial intelligence, nor black magic, nor “what a programmer says when he does not want to explain what happens.” Algorithms are problem-solving methods, expressed in terms so precise that they can be carried out by a computer—which you can think of as an infinitely patient, obedient, and annoyingly literal-minded twelve-year-old.

Before the personalisation wave from 2010, a search service could content itself with sorting information by just one parameter. For example, a bestseller list ranks books by how many copies were sold. But personalisation opens up the possibility of ranking and categorizing books by price, age, number of pages, language, genre, and so on. Whether a book was written by a Danish author may be very relevant to Mrs Hansen but totally irrelevant to Mrs Jensen. And Mrs Hansen may hate crime novels while Mrs Jensen loves them. Personalisation is therefore implemented as a weighting of the product’s many parameters, with the weighting tailored to precisely Mrs Jensen.
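To make the weighting concrete, here is a minimal Python sketch of such a personalised score as a weighted sum of a book’s parameters. The attribute names and weights are invented for illustration; they are not taken from the article or the prize-winning paper.

```python
def personal_score(book, weights):
    """Weighted sum of the book's numeric attributes (illustrative only)."""
    return sum(weights.get(attr, 0.0) * value for attr, value in book.items())

# Mrs Jensen loves crime novels, is indifferent to Danish authors,
# and mildly prefers recent books.
jensen_weights = {"is_crime": 2.0, "is_danish": 0.0, "recency": 1.0}
book = {"is_crime": 1.0, "is_danish": 1.0, "recency": 0.7}
print(personal_score(book, jensen_weights))  # 2.7
```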

But how do we keep our books sorted when each user has his or her own view of what is relevant? In principle, this is very easy to solve: run through all books, calculate Mrs Jensen’s score for each of them, sort the results by score, and present her with the top 20.
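As a sketch, again with an invented attribute layout rather than anything from the paper, the naive approach is just a full scan of the catalogue followed by a sort, repeated for every single query:

```python
def naive_top_k(books, weights, k=20):
    """Score every book for this particular user, sort by score,
    and return the k best. Every query touches and sorts the whole
    catalogue, which is exactly the step that becomes too slow."""
    def score(book):
        return sum(weights.get(attr, 0.0) * value for attr, value in book.items())
    return sorted(books, key=score, reverse=True)[:k]
```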

Unfortunately, this takes too long.

This bears repeating, because it is contrary to our intuition: Sorting takes too long.

How can this be? Aren’t computers becoming faster and faster? Yes, they are. Technological development has increased processor speeds dramatically since Princeton mathematician John von Neumann’s ENIAC from the end of World War II, which weighed 30 tons and consisted of 17,468 vacuum tubes. But at the same time, data volumes have grown bigger and bigger. In fact, computer memory and problem sizes have grown even faster than processor speed. You could say that, over time, computers have become slower and slower relative to the computational problems we expose them to. As a rule of thumb, “it takes a computer a second to touch all the data” has held then as now. The algorithmic challenge is thus one aspect of big data: we barely have time to answer even the simplest questions about very large data volumes.

Of course, one might say that a second is perhaps not such a long time anyway. Mrs Jensen could reasonably be expected to wait a second while we reorganise the entire library according to her preferences. But the problem is that our hypothetical web service hopes to have more customers than just Mrs Jensen; we would like to serve more than one user per second. In fact, we hope for a few million per second. And we would rather not invest in a few million computers to serve these customers.
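A back-of-envelope calculation, with round numbers of my own choosing rather than figures from the article, shows why the naive approach does not scale:

```python
# Illustrative numbers only: assume a catalogue of a billion items and a
# machine performing roughly a billion simple operations per second.
items_per_query = 10**9       # size of the catalogue (assumed)
ops_per_second = 10**9        # throughput of one machine (assumed)
queries_per_second = 10**6    # "a few million per second", from the text

seconds_per_query = items_per_query / ops_per_second      # about 1 second
machines_needed = seconds_per_query * queries_per_second  # about 1,000,000
print(seconds_per_query, machines_needed)
```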

On the other hand, we need to perform the calculation immediately, because there are too many different customers for us to prepare a personalised copy of the entire library for each of them. That approach would require a copy of the entire data set for each user, and we don’t have that much storage space.

In summary, the algorithmic problem occurs because the data set is too large to sort it straight away, and there are too many customers to sort everything in advance.


Without such an algorithmic solution to this problem there would be no personalisation. In fact, there are many computational problems where we are less lucky: algorithms research has failed to find a clever solution. Certain desirable algorithms simply do not exist, and sometimes this nonexistence can even be proved. The Austrian logician Kurt Gödel’s own research in the 1930s, long before computer science was a discipline of its own, focused precisely on rigorous mathematical arguments for the absence of algorithms for specific problems.

But it turns out that for the problem at hand, an efficient algorithm does exist: the Threshold Algorithm. It is short and elegant, and you can amuse yourself by trying to come up with it on your own, or by googling the answer. The elegant solution requires an ingenious data structure in which the entire database has been sorted in advance by each parameter separately, together with some additional bookkeeping and cross-references between the lists. In these sorted lists, a few lookups suffice to find the answer. The algorithm itself is easy to follow and easy to implement as program code, whereas the analysis of its correctness and complexity requires a bit more computer science background.
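The following Python sketch shows one way the Threshold Algorithm can look for a weighted-sum scoring function. The data layout (one pre-sorted list per attribute plus a table for random access) and all names are my own choices for illustration, not the notation of Fagin, Lotem, and Naor.

```python
import heapq

def threshold_top_k(sorted_lists, grade_table, weights, k=20):
    """Sketch of the Threshold Algorithm for top-k queries with a
    weighted-sum aggregation (weights assumed nonnegative, so the
    aggregation is monotone).

    sorted_lists : one list per attribute, each a list of (object_id, grade)
                   pairs sorted by grade in decreasing order; all lists are
                   assumed to cover the same objects (the precomputed index).
    grade_table  : grade_table[attr][object_id] returns a grade directly,
                   i.e. supports random access.
    weights      : this user's weight for each attribute.
    """
    def aggregate(grades):
        return sum(w * g for w, g in zip(weights, grades))

    seen = {}                        # object_id -> aggregate score
    for depth in range(len(sorted_lists[0])):
        last_grades = []
        for lst in sorted_lists:     # one sorted access in each list
            obj, grade = lst[depth]
            last_grades.append(grade)
            if obj not in seen:      # random access for the remaining grades
                seen[obj] = aggregate(
                    [grade_table[attr][obj] for attr in range(len(sorted_lists))])
        # No unseen object can score higher than this threshold.
        threshold = aggregate(last_grades)
        best = heapq.nlargest(k, seen.values())
        if len(best) >= k and best[-1] >= threshold:
            break                    # k seen objects already beat every unseen one
    return heapq.nlargest(k, seen.items(), key=lambda item: item[1])
```

The stopping rule is the whole point: the weighted sum of the grades last seen under sorted access is an upper bound on the score of any object not yet encountered, so the scan can stop long before the whole database has been touched.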

Theoretical computer science is a mathematical discipline, so the algorithm, as a research result, was published together with a rigorous analysis of its efficiency and a formal proof of its optimality. The Threshold Algorithm was discovered and published in 2001 by three Israeli and American researchers, who therefore receive the Gödel Prize this year.
The Gödel Prize is an award in theoretical computer science, so it was probably also decisive that the mathematical aspects of the result meet mathematics’ aesthetic demands for simplicity. The Threshold Algorithm itself can be described in 10 lines, and the authors were originally worried that the manuscript would be rejected because it looked too easy. That did not happen. The article was immediately accepted at the ACM Symposium on Principles of Database Systems in 2001, won the best paper award, won the symposium’s Test-of-Time Award 10 years later, and now receives the Gödel Prize.

Many algorithms in the computer science literature are pure basic research and never find their way into a product, but the Threshold Algorithm is a good example of one of the by now many pieces of theoretical computer science that leave a direct mark on our reality, as that reality becomes more and more algorithmic. They choose our next book, our next news story, and perhaps our next relationship.

The Gödel Prize has been awarded annually since 1993 for outstanding achievements in theoretical computer science. The award ceremony alternates between North America and Europe.

Fagin, Lotem, Naor, Optimal aggregation algorithms for middleware, J. Comput. Syst. Sci. 66 (2003) 614–656. [PDF]

Den personlige virkelighed

En lettere forkortet udgave af denne tekst er publiceret i Weekendavisen 2014 #27 fra d. 4. juli 2014 i sektionen Idéer, s. 12.

Algoritmer. Den 9. juli uddeles Gödelprisen for teoretisk datalogi ved en konference i København. I år går prisen til en algoritme som blandt andet muliggør personaliseret informationssøgning. Eksistensen af den slags effektive algoritmer bestemmer, hvordan den digitale virkelighed ser ud for os.


Informationsteknologien har medført to helt gennemgribende ændringer i vores måde at tilgå information. I »gamle dage« i midten af halvfemserne var information organiseret som et bibliotek. Tjenesten Yahoo! havde kategoriseret webbens sider efter emne. Ville man finde noget om Paris, så var man nødt til at klikke sig igennem et hierarki – geografi, Europa, Frankrig, byer, Paris – ganske som at finde den rette hylde i kommunebiblioteket. Googles siderangeringsalgoritme »Pagerank« fra 1998 indebar en fundamental nytænkning af denne århundreder gamle tradition. Nu kunne man skrive »Paris« ind i et søgefelt, og straks fik vi en liste med alle websider om Paris, sorteret efter kvalitet – lidt som at gå op til bibliotekaren, mumle et enkelt ord, og lade hende gøre alt opslagsarbejdet. Kategorierne var borte og Googles algoritme blev så fundamental, at det i dag for de fleste virker utænkeligt at gennemsøge webben på nogen anden måde end ved rangeret fritekstsøgning.

Den næste ændring kom først i slutningen af nullerne: personaliserede søgeresultater. Opslaget »Hilton Paris« betyder nemlig noget helt andet for Jürgen, en tysk turist i en europæisk storby, end for Ashley, en amerikansk teenager. Derfor forsøger søgemaskiner som Google at tilpasse den sorterede liste af svar til den individuelle bruger, baseret på de digitale spor, vi har efterladt. Brugerens alder og geografiske opholdssted er nok til at anbefale det rigtige hotel til Jürgen og den rigtige kendis til Ashley. Googles vision er at besvare spørgsmålet inden brugeren har stillet det. For at blive i biblioteksmetaforen svarer det til at gå op til bibliotekaren og få stukket en bog i hånden, før man åbner munden.

Personalisering er den informationsvirkelighed, vi lever i lige nu. Google personaliserer websøgninger, Amazon personaliserer boganbefalinger, Netflix film og Spotify musik. Partnerformidling har altid været personaliseret, og de store elektroniske dating sites er også selvfølgeligt med på den vogn. Personalisering har en hel række interessante potentielle konsekvenser, fx skabelsen af såkaldte Filterbobler i informationsformidling, som WA tidligere har belyst [Det hele handler om dig, WA 20, 2011].

De algoritmer, der indgår i at få personalisering til at virke, har altså direkte og mærkbare konsekvenser for, hvordan vi opfatter verden – på godt og ondt. En af dem, tærskelalgoritmen for top-k-søgninger af Ron Fagin, Amnon Lotem og Moni Naor modtager i disse dage den teoretiske datalogis fornemste udmærkelse, Gödelprisen, ved en konference i København.


Samtidigt med at algoritmerne træffer en masse valg for os og véd mere og mere om os, véd vi temmelig lidt om algoritmer. En algoritme er nu hverken kunstig intelligens, sort magi eller »hvad en programmør siger, når han ikke har lyst til at forklare, hvad der sker.« Algoritmer er problemløsningsmetoder, udtrykt så præcist, at de kan udføres af en computer – som man kan tænke på som en uendelig tålmodig, lydig og irriterende ordret tolvårig.

Inden personaliseringsbølgen fra 2010 kunne en søgetjeneste nøjes med at sortere information efter bare én parameter. En bestsellerliste kunne fx rangordne bøger efter mest solgte. Men personaliseringen åbner muligheden at rangere og kategorisere bøger efter pris, alder, antal sider, sprog, genre, osv.
Om en bestemt bog er skrevet af en dansk forfatter kan være meget relevant for fru Hansen men helt irrelevant for fru Jensen. Og fru Hansen hader måske krimier, mens fru Jensen elsker dem. Personalisering er altså implementeret som en vægtning af produktets mange parametre, og denne vægtning er skræddersyet til netop fru Jensen.

Men hvordan holder vi nu vores bøger sorteret, når hver bruger har sin egen opfattelse af, hvad der er relevant? I princippet er det meget nemt at løse: Gennemløb alle bogtitler, beregn fru Jensens score for hver af dem, sortér resultatet efter score, og præsentér de 20 bedste resultater for hende.

Desværre tager det for lang tid.

Det skal man måske lige gentage, fordi det strider imod vores intuition: Det tager for lang tid at sortere.

Hvordan kan det være? Bliver datamater ikke hurtigere og hurtigere? Jo, det gør de skam. Den teknologiske udvikling har selvfølgeligt gjort processorhastigheder meget hurtigere siden Princeton-matematikeren John von Neumanns »ENIAC« fra slutningen af Anden Verdenskrig, som vejede 30 tons og bestod af 17468 radiorør. Men samtidigt er datamængder også blevet større og større. Maskinens hukommelse og problemernes størrelse er vokset endnu hurtigere end processorhastigheden. Vil man sætte det på en spids, kan man sige, at computere bliver langsommere og langsommere i forhold til de beregningsproblemer, vi udsætter dem for. Men som en tommelfingerregel gælder »det tager en computer et sekund at berøre hele datamængden« før som nu. Den udfordring er altså et aspekt ved det vi kalder massedata, big data: at vi knapt nok har tid til at besvare endog de mest enkle spørgsmål til meget store datamængder.

Nu kunne man måske sige, at et sekund måske ikke er så voldsomt lang tid alligevel. Fru Jensen vil nok godt kunne vente, til vi har omorganiseret hele biblioteket, så det passer til netop hendes præferencer. Problemet er bare, at vi også håber, at der er flere kunder i butikken end fru Jensen; vi vil gerne servicere mere end én bruger per sekund. Helst et par millioner per sekund. Og vi vil nødigt investere i et par millioner computere til at betjene disse kunder.

Vi skal gennemføre beregningen her og nu, fordi der er for mange forskellige kunder til, at vi kan forberede en kopi af hele biblioteket til hver af dem. Det ville kræve en kopi af hele datamængden for hver bruger, og så meget lagerplads har vi ikke.

Det algoritmiske problem opstår altså fordi vi skal håndtere for store datamængder til at sortere alting på stående fod og for mange kunder til at sortere alting på forhånd.


Uden sådan en algoritmisk løsning til dette problem ingen personalisering. Der findes forresten rigtig mange beregningsproblemer, hvor vi er mindre heldige, og hvor det ikke er lykkedes algoritmeforskningen at finde en snedig løsning. Visse ønskværdige algoritmer findes ganske enkelt ikke. Det kan man så også bevise.
Den østrigske logiker Kurt Gödels egen forskning i 1930erne, længe inden datalogi var blevet en egen disciplin, drejede sig netop om at bevise fraværet af algoritmer for bestemte problemer.

Men det viser sig altså, at en sådan algoritme faktisk findes, nemlig tærskelalgoritmen. Den er kort og elegant, og man kan more sig med selv at komme på den eller google efter svaret på nettet. Den elegante løsning kræver en sindrig datastruktur, hvor man i forvejen har sorteret hele databasen for hver enkelt parameter for sig, samt nogle ekstra indholdsfortegnelser med krydshenvisninger mellem listerne. I disse sorterede lister kan man så nøjes med ganske få opslag. Selve algoritmen er nem at følge og let at implementere som programkode, hvorimod analysen for korrekthed og effektivitet kræver lidt mere datalogisk baggrundsviden.

Teoretisk datalogi er en matematisk disciplin, så algoritmen som forskningsresultat publiceres sammen med en stringent analyse af dens effektivitet og et formelt bevis af dens optimalitet. Tærskelalgoritmen blev opdaget og publiceret i 2001 af tre israelske og amerikanske forskere, som derfor modtager Gödelprisen i år.
Gödelprisen er en udmærkelse inden for teoretisk datalogi, så det har sikkert også været udslagsgivende, at de matematiske aspekter af resultatet opfylder matematikkens æstetiske krav til enkelhed. Selve tærskelalgoritmen kan beskrives på 10 linjer, og forfatterne bag resultatet var i sin tid alvorligt bekymrede for at få manuskriptet afvist, fordi det så for let ud. Sådan gik det dog ikke. Artiklen blev straks optaget på ACM Symposium on Principles of Database Systems i 2001, vandt prisen for bedste artikel, vandt symposiets Test-of-Time Award 10 år senere og får nu altså Gödelprisen.

Mange algoritmer i den datalogiske litteratur er ren grundforskning og finder aldrig vej til et produkt, men tærskelalgoritmen er et godt eksempel på et af de efterhånden mange stykker teoretisk datalogi som sætter et umiddelbart præg på vores virkelighed, efterhånden som den virkelighed bliver mere og mere algoritmisk. Den vælger vores næste bog, vores næste nyhed og måske vores næste parforhold.

Gödelprisen uddeles årligt siden 1993 for fremragende resultater i teoretisk datalogi. Prisoverrækkelsen finder sted på skift mellem Nordamerika og Europa.

Fagin, Lotem, Naor, Optimal aggregation algorithms for middleware, J. Comput. Syst. Sci. 66 (2003) 614–656. PDF

Summary

This is a Danish popular science introduction to the threshold algorithm of Fagin, Lotem, and Naor, which received the Gödel prize at ICALP 2014.
A shortened form of this manuscript appeared in the Danish weekly broadsheet Weekendavisen.