
Superintelligence in SF. Part III: Aftermaths

Part III of a 3-part summary of a 2018 workshop on Superintelligence in SF. See also [Part I: Pathways] and [Part II: Failures].

The third and final part of this apocalyptic taxonomy describes the outcome, or aftermath, of the emergence and liberation of an artificial superintelligence. The list of scenarios is taken from Tegmark (2018). In my introductory slide I tried to order these scenarios roughly along two axes, according to the capability of the superintelligence and the degree of control.

 

[Slide: the aftermath scenarios, arranged by capability of the superintelligence and degree of control]

These scenarios are not clearly delineated, nor are they comprehensive. There is a longer description in Chapter 5 of Tegmark (2018). Another summary is at AI Aftermath Scenarios at the Future of Life Institute blog, where you can also find the results of a survey about which scenario to prefer.


Collapse

Intelligent life on Earth becomes extinct before a superintelligence is ever developed because civilisation brings about its own demise by other means than the AI apocalypse.

Reversion

Society has chosen deliberate technological regression, so as to forever forestall the development of superintelligence. In particular, people have abandoned and outlawed research and development in the relevant technologies, including many discoveries of the industrial and digital age, possibly even the scientific method. This decision may be a reaction to a previous near-catastrophic experience with such technology.

Turing Police

Superintelligence has not been developed, and societies have strict control mechanisms that prevent research and development into relevant technologies. This may be enforced by a totalitarian state using the police or universal surveillance.

Tegmark’s own label for this scenario is “1984,” which was universally rejected by the workshop.

Egalitarian Utopia

Society includes humans, some of whom are technologically modified, and uploads. The potential conflicts arising from productivity differentials between these groups are avoided by abolishing property rights.

Libertarian Utopia

Society includes humans, some of whom may be technologically modified, and uploads. Biological life and machine life have segregated into different zones. The economy is almost entirely driven by the fantastically more efficient uploads. Biological humans coexist peacefully with the machine zones and benefit from trading with them; the economic, technological, and scientific output of humans is irrelevant.

Gatekeeper AI

A single superintelligence has been designed. The value alignment problem has been resolved in the direction that the superintelligence has one single goal: to prevent the construction of a second superintelligence, and otherwise to interfere as little as possible with human affairs. This scenario differs from the Turing Police scenario in the number of superintelligences actually constructed (zero versus one) and need not be a police state.

Descendants

The superintelligence has come about by gradual modification of modern humans. Thus there is no conflict between the factions of “existing biological humans” and “the superintelligence” – the latter is simply the descendant life form of the former. “They” are “we”, or rather, “our children.” Twenty-first-century Homo sapiens is long extinct, voluntarily, just as each generation of parents faces extinction.

Enslaved God

The remaining scenarios all assume a superintelligence of vastly superhuman intellect. They differ in how much humans are “in control.”

In the Enslaved God scenario, the safety problems of developing superintelligence (control, value alignment) have been solved. The superintelligence is a willing, benevolent, and competent servant to its human masters.

Protector God

The superintelligence wields significant power, but remains friendly and discreet, nudging humanity unnoticeably in the right direction without being too obvious about it. Humans retain an illusion of control; their lives remain challenging and feel meaningful.

Benevolent Dictator

The superintelligence is in control, and openly so. The value alignment problem is solved in humanity’s favour, and the superintelligence ensures human flourishing. People are content and entertained. Their lives are free of hardship or even challenge.

Zookeeper

The omnipotent superintelligence ensures that humans are fed and safe, maybe even healthy. Human lives are comparable to those of zoo animals: they feel unfree, may be enslaved, and are significantly less happy than modern humans.

Conquerors

The superintelligence has not kept humans around. Humanity is extinct and has left no trace.


Oz

Workshop participants quickly observed the large empty space in the lower left corner! In that corner, no superintelligence has been developed, yet the (imagined) superintelligence would be in control.


Other fictional AI tropes are out of scope, in particular the development of indentured mundane artificial intelligences, which may outperform humans in specific cognitive tasks (such as C-3PO’s language facility or many spaceship computers) without otherwise exhibiting superior reasoning skills.

 

NP is Wrong

Our standard way of explaining computational hardness is cognitively contradictory (in wanting to establish a tool for hardness in a realistic model, it introduces a concept of easiness in a fictional model), wastes attention by needlessly strengthening a perfectly good scientific hypothesis, and undermines the greatest intellectual triumph of our field by advertising it as an open problem. In short, NP does not belong in my algorithms design class.

Hear me out. I’m thinking aloud here, and may be wrong.

[Photo: Karl Popper]

Not Plato (NP).

I’m not saying “NP is Wrong” in the sense that “Flat Earth Theory is Wrong.” Instead, I am saying it in the sense that “Pi is Wrong”: NP is a failure of communication, a bad explanation, a mistake of teaching. So what I’m saying is “we shouldn’t teach NP.”

This is a bold claim, and I’m stating it in the starkest way possible. I will defend the position that the main intellectual export of computer science, our crown jewel, the Theory of NP-Completeness, does not belong in the introductory algorithms design class.

There is probably a less brutal and inflammatory way of putting this, maybe “Reductions before completeness,” or “it’s in the hardness, stupid!” However, I will make no attempt at that kind of conciliatory rhetoric. My aim here is not to convince anybody, just to make my position clear, if only to myself.

Our mistakes

The traditional introduction to computational hardness introduces the class NP, either via nondeterminism or certificate verification, establishes the completeness of Satisfiability for this class under polynomial-time reductions (the Cook–Levin theorem), and proceeds to show the NP-completeness of a number of other problems by establishing membership in NP and reduction from another NP-hard problem. The ordering of this introduction may change, and the adherence to mathematical formalism may vary, but the concepts traditionally belong to the same lecture (or block of lectures), the same book chapter, the same exam question.

This is a pedagogical crime of the highest order. Not because of formalism or difficulty. I happen to like formalism, and difficulty is often necessary. I am also not saying “we shouldn’t teach computational hardness.” On the contrary! It’s the greatest story we have.

Instead, the mistakes of the NP-based narrative are that

  1. it conflates two opposing qualities: a notion of computational easiness (NP) and a notion of computational hardness (reduction from Satisfiability);
  2. it conflates a mathematical curiosity (the hard fact that NP is closed under reduction and Satisfiability is complete for this class) with a useful scientific hypothesis (the hypothesis that Satisfiability takes superpolynomial time);
  3. it conflates the greatest triumph and most useful tool of theoretical computer science (reduction to establish hardness) with a meek admission of epistemological uncertainty (the fact that the status of P vs. NP is an unresolved mathematical question).

And all of this we communicate in a week! (I can think of even more mistakes, but these are well known: terminology, abbreviations, the focus on worst case, etc. These are valid criticisms, but not what I want to write about here.)

The first mistake (hard/easy cognitive dissonance)

The first mistake is cognitive. If we want to explain hardness, we shouldn’t spend time on a model for easiness. And certainly not a weird, novel, strange model! The cognitive load of understanding nondeterminism or verification-as-opposed-to-computation is enormous; it’s a completely new way of thinking about problem solving, extremely counter-intuitive.
If you pay attention, I can explain NP to you very well. But your brain will be busy absorbing this for at least a week. I can give you exercises about constructing a polynomial-time verifier. But I won’t have your attention for the story I really wanted to tell you.

Not only because NP is a hard class to understand. But because I really wanted to talk to you about computational hardness. I wanted to tell you about hard problems in a real and completely natural model of computation, namely polynomial, deterministic time. Why on Earth did I think it was a good idea to start out with easy problems  in a fictional and contrived model of computation?

(I am not saying “Nondeterministic Turing machines are difficult and boring and we shouldn’t teach them!” I can teach NP just fine by appealing to oracles or verifiers or god-like computers, in a perfectly appealing manner.)

Membership in NP is a notion of easiness in a nonnatural model. But I wanted to teach about hardness in a natural model. This discrepancy puts unnecessary cognitive load on my conversation partner. It also invites extreme misunderstandings. NP-hardness is easily conflated with “something about nondeterministic computation” and legion are the physicists who refer to hard problems as “NP-problems.” That is not their mistake, it is mine.

How I ever thought this was a good idea is beyond me.

The second mistake (maths/science)

I posit that the Claim

“Satisfiability does not admit a worst-case polynomial-time algorithm”

is a hypothesis of the highest scientific dignity. It stands with pride next to “Vases fall down when I let go” or “information cannot be transferred back from protein to either protein or nucleic acid.” It is precise, relevant, falsifiable, consistent with observation, and incredibly useful as an explanation  (in casu, my inability to find a polynomial-time algorithm for Hamiltonicity). It has enormous reach.

Unlike most other scientific hypotheses, it also has a rather pedestrian and accessible mathematical theory behind it, where its falsity would imply a collapse of something called “complexity classes”. If you are a formalist, a mathematician, or a Platonist, you can investigate that theory to strengthen your “belief” in the capital-T “truth” of the hypothesis. It’s quite cute. But it’s not really important for the design of algorithms.

The stance I just described is the scientific stance: “scientific” in the tradition of Karl Popper. It is not the mathematical stance.

I claim that the scientific stance is vastly superior. (Unless, of course, you are explaining computational complexity or the theory of computation to an audience of mathematicians. Then the scientific stance would be a disastrous mistake. My point here is that the mathematical stance is a disastrous mistake of the same magnitude in all other settings.)

  1. First, the scientific stance suffices. Everything we want to communicate about computational hardness can be explained using the scientific stance. Elevate the Claim to a scientific hypothesis, and you’re done. No need even to introduce NP. Don’t mention the class! (I mentioned it once, but I think I got away with it.)
  2. Second, the scientific stance is rhetorically stronger. The fact that the Claim has resisted all our attempts at disproving it speaks with clarity and authority. The fact that the Claim happens to be mathematically consistent with a strange concept in an obviously fictional model of computation is iffy.
  3. Third, the scientific stance is eager to explain. Here’s a suggestion for a modern hardness module for your algorithms class: Give a catchy name to the Claim (maybe NPTH, the non-polynomial time hypothesis). Prove 3-Coloring and Hamiltonicity hard under NPTH, and show that the O(tn) dynamic programming algorithm for Subset Sum we saw recently in class can’t be dramatically improved (a sketch of that algorithm appears after this list). Then introduce the obvious strengthening of the Claim, SETH (the Strong Exponential Time Hypothesis). Show that the quadratic-time dynamic programming algorithm for Regular Expression Matching from a few weeks ago is optimal under that hypothesis. Mind = blown. Maybe introduce ETH. And so on. See how exciting this is? So: do teach hypotheses, and do teach reductions. Don’t teach verification and completeness.
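For concreteness, here is a minimal sketch of the O(tn) dynamic programming algorithm for Subset Sum mentioned in the third point, in Perl. The sketch is mine, not from any particular textbook, and the variable names and the example instance are made up.

#!/usr/bin/perl
# Sketch of the O(tn) dynamic program for Subset Sum: $reachable[$s] is true
# exactly if some subset of the numbers seen so far sums to s.
use strict; use warnings;

my ($t, @numbers) = (13, 3, 5, 8, 11);   # target t and the input numbers (example data)
my @reachable = (1, (0) x $t);           # initially only the empty sum 0 is reachable
for my $x (@numbers) {
    for (my $s = $t; $s >= $x; $s--) {   # downwards, so each number is used at most once
        $reachable[$s] ||= $reachable[$s - $x];
    }
}
print $reachable[$t] ? "yes\n" : "no\n"; # prints "yes", since 5 + 8 = 13

For reference, my paraphrases of the standard hypotheses named above: ETH asserts that 3-Satisfiability on n variables requires time 2^Ω(n), and SETH asserts that for every ε > 0 there is a clause width k such that k-Satisfiability cannot be solved in time O(2^((1−ε)n)).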

The Claim is a grade-A scientific hypothesis that explains stuff. This is a good story. In contrast, the mathematical theory of polynomial-time reduction and completeness is a footnote in that exciting story, at best. Scientific hypotheses do not derive their value from the mathematical formalism behind them. And since we like math, there is plenty of exciting math in the theory of algorithms that we should talk about instead. The opportunity cost of completeness is very high, and it adds zero value to the status of the Claim as a scientific hypothesis (in the Popperian sense).

If you are not a Popperian, the above section will do nothing for you. Remember that I don’t write to convince you. I write to explain my position.

The third mistake (triumph/open question)

The third objection is the weakest. It’s about storytelling or advertising.

The theory of evolution by chance and selection, as laid out in Darwin’s The Origin of Species, is rightly viewed as the best idea anyone has ever had.

Imagine if, in 1860, it had been advertised as “one of the biggest open questions in biology,” just because it lacked a formal proof.

The fact that “P vs. NP” is an open mathematical question is the least interesting thing we can say about the theory of computational hardness. It does not belong in your algorithms class because it is an irrelevant footnote and a distraction. Worse, it sends the exact wrong message: as if we invite our audience to be wary of the Claim. As if it were in doubt. As if its epistemic status were somehow shaky.

For comparison, the Law of Gravity does not derive its importance from the fact that it can be articulated in mathematics, much less the existence of a formal proof of its truth. In fact, the Law of Gravity cannot be articulated in mathematics. Nor would we be able to prove it, since it is in fact false. Because Einstein. But that is not the most important thing I should tell you if my task was to convince you of the Law of Gravity. Because you are about to let go of a vase.

What I should have done

The Claim that “Satisfiability does not admit a worst-case polynomial-time algorithm” is a perfectly valid scientific hypothesis of enormous explanatory value and reach. It is both nice and Popperian, so maybe we should call it the “Nice Popperian” hypothesis about computation, abbreviated to the NP-hypothesis. Or maybe the Non-Polynomial Time Hypothesis (NPTH), in symmetry with the Exponential Time Hypothesis (ETH) and the Strong Exponential Time Hypothesis (SETH). These hypotheses form a great basis for a modern, interesting, exciting introduction to the hardness of computation. Such an introduction would be

  1. about hardness only. All reductions go from a known hard problem (to the left of the blackboard, to the left of the less-than symbol) to a problem whose hardness I want to establish. Various hypotheses describe our current best understanding of Satisfiability, the drosophila of the theory of computation, which always appears on the left. There is only a single model of computation (deterministic), and it’s the same model used in the rest of the class, so it needn’t be introduced.
  2. a sterling example of the scientific method. Not of engineering or mathematics – both of those are great disciplines or epistemological stances, and we have plenty of classes (software engineering, theory of computation) that adhere to those values.
  3. speaking with clarity, authority, and pride about one of the greatest and most useful intellectual tools of computer science: to argue about hardness via reduction.

At no point in this should I define the class NP, much less explain what it stands for, and at no point should I introduce nondeterminism or verification-of-certificates. The word completeness is banished from the discourse. The Cook–Levin theorem is never stated, much less proved. “P vs. NP” can’t even be formulated, and its epistemic status is out of scope.

Depending on the course or the audience, these topics might be nice to see in a later lecture, maybe mainly to open up the rich literature to the 10% of the students who care. But there are plenty of other things I’m also excited about, so maybe I’d rather speak about the Price of Anarchy or streaming algorithms or convolution using FFT instead. Time and attention are short, and the field of algorithms is full of exciting stuff.

Disclaimer: #notalltextbooks. Skiena’s Algorithm Design Manual is an example.

EATCS Council Election 2017 Statement

I am delighted to be nominated for the EATCS Council.

My main focus is the EATCS’s flagship conference, ICALP. I bring to the table some experience from interacting with the council as the General Chair of ICALP 2014 (in Copenhagen). I have also served on the ICALP PC, served as a keynote speaker, and received a Best Paper award.

My main ambition is to increase the quality of ICALP, and the main instrument would be to

recognise that the goals of ICALP and the goals of the EATCS are not always aligned.

In my experience, these misaligned goals lead to value conflicts that the EATCS Council has difficulty confronting.

If elected to the council, I would like to investigate the possibilities to

give ICALP a Steering Committee.

Outside of ICALP-related matters, my main area of interest is how the mathematical discipline of theoretical computer science can interact with its surroundings from a stance of intellectual honesty, curiosity, and confidence. This context includes intellectually challenging questions from empirical considerations, the other sciences, industry, society, and the public; these questions inform my position on outreach and science communication.

Full list of nominees, brilliant and decent people all: http://eatcs.org/index.php/component/content/article/1-news/2534-nomination-of-candidates-for-the-eatcs-council-election-2017

Vote at http://eatcs.org before October 10, 2017.

Secret Sartre update: Bret Weinstein, Jordan Peterson now playable

The long-awaited update for Secret Sartre, the deconstructive social deduction game of postmodernism on campus, is finally here! Reality has managed to unlock Jordan Peterson and Bret Weinstein as fully playable characters.

To use them, remove any of Darwin, Dennett, Dawkins, Russell, or Chomsky (but not Curie, who is token) from the original Science decks, and add the Peterson and Weinstein cards.

You can now recreate a faculty meeting at Evergreen State College in the safe confines of your home!


Orthogonal Vectors to Perl Regex

The Orthogonal Vectors problem is, given two sets of d-dimensional 01-vectors, to decide if there is an orthogonal pair: two vectors, one from each set, with zero dot product. Obviously, the problem can be decided in time O(n²d) by trying all pairs.
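For concreteness, a minimal brute-force sketch (mine, not part of the reduction below) that reads the two sets in the same two-line format as the script below and tries all pairs might look like this:

#!/usr/bin/perl
# Brute force for Orthogonal Vectors: try all pairs, O(n^2 d) time.
use strict; use warnings;

chomp(my $line1 = <STDIN>);
chomp(my $line2 = <STDIN>);
my @A = split / /, $line1;
my @B = split / /, $line2;

my $found = 0;
for my $u (@A) {
    for my $v (@B) {
        my @a = split //, $u;
        my @b = split //, $v;
        my $dot = 0;
        $dot += $a[$_] * $b[$_] for 0 .. $#a;
        $found = 1 if $dot == 0;   # zero dot product: an orthogonal pair
    }
}
print $found, "\n";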

Here’s a complete reduction from the Orthogonal Vectors problem to the evaluation of Perl regular expressions.

chomp($_ = <STDIN>);
s/0/\\d/g;
tr/1 /0|/;
print !!(<STDIN> =~ /$_/)

This is fully functional Perl code. Save the script as ov.pl and run it. The process expects two lines on standard input, each describing a set of vectors (as 01-sequences, separated by space). The script returns “1” if and only if the two sets contain an orthogonal pair of vectors.

>perl ov.pl
1000 0100 0010 1001
1110 1101 1111 1100^D
1

This construction shows that matching an n-character string against an n-character regular expression requires quadratic time under the Strong Exponential Time Hypothesis. So, the obvious quadratic-time dynamic programming algorithm for regex matching is in some sense optimal.

As to the Perl code, the first line eats the trailing newline of standard input. The next two lines contain the main part of the reduction, replacing “0” by “\d” (any digit), “1” by “0”, and “ ” by “|”, to build a regex from the first set of vectors. The last line of the code attempts to match the second line from standard input against the regex. Oh, the double exclamation points? They are a perlism to enforce Boolean context, so that the output is “1” (true) exactly if the regex matches. The reduction is so obviously linear that it hurts.
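Concretely, on the example run above, the first input line is turned into the following regex; each vector from the first set becomes one alternative, with a 1 forcing a 0 and a 0 matching any digit:

1000 0100 0010 1001   becomes   0\d\d\d|\d0\d\d|\d\d0\d|0\d\d0

The third alternative, \d\d0\d (built from 0010), matches the vector 1101 on the second line – exactly the orthogonal pair – hence the output 1.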

Tell me this isn’t cute.

The core of the construction underlies the celebrated recent results of Backurs and Indyk, Bringmann and Künnemann, and Abboud et al. that show similar bounds for various string edit distances. This particular example is just a very concrete version of the NFA-acceptance reduction that appears on Karl Bringmann and Sebastian Krinninger’s slides for an MPI course on Complexity Theory of Polynomial-Time Problems (Summer 2016), there attributed to Russell Impagliazzo. I find that reduction very teachable. The step from NFA acceptance to a concrete Perl script is minimal, but makes it even more concrete (executable code!).

  • Amir Abboud, Arturs Backurs, Virginia Vassilevska Williams: Tight Hardness Results for LCS and Other Sequence Similarity Measures. IEEE 56th Annual Symposium on Foundations of Computer Science, FOCS 2015, Berkeley, CA, USA, 17-20 October, 2015, pp. 59-78.
  • Arturs Backurs, Piotr Indyk: Edit Distance Cannot Be Computed in Strongly Subquadratic Time (unless SETH is false). Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, STOC 2015, Portland, OR, USA, June 14-17, 2015. pp. 51-58.
  • Karl Bringmann, Marvin Künnemann: Quadratic Conditional Lower Bounds for String Problems and Dynamic Time Warping. IEEE 56th Annual Symposium on Foundations of Computer Science, FOCS 2015, Berkeley, CA, USA, 17-20 October, 2015. pp. 79-97.

Hard Problems and How To Figure Them Out

Explained using only the ten hundred words people use most often.

What is a good way of figuring out a problem?

First: it must be so simple and clear that we can explain it to a computer. (Or a Human. That’s not so different.)

Second: it must be fast enough so that it is done before lunch. (Or before the sun burns out. That’s not so different.)

Since people started using computers, lots of people get paid to think about ways to figure out problems. That’s what I’m doing. Today we understand that some problems are hard, and others are pretty easy.

I want to help us become better at talking about what makes a problem hard.

So far, most of the people who think about this stuff look only at the size of the problem.

But some of us feel that you can think more clearly about problems when you not only look at their size, but at some other things as well. Sometimes, this way of thinking even helps us to come up with better ways to figure out problems!

 


Two other people and myself have come up with a way to finish the game for all five-letter words, as long as you are forced to visit only around ten words. Or maybe twenty. So we found out that the number of words all-in-all was not so important. Instead, what makes the problem hard is how many you are forced to visit. We can even find the shortest such path!


Read more here: Another man, myself, and a woman, ‘Shortest path through stuff you didn’t choose’, place where people who care about this kind of question meet each year (the name sounds like a soft drink),  year twenty-hundred and ten and two.


Appendix

The above is a summary of a poster I made for a research sharing day at ITU.

hard-problems-poster (draft) in pdf (designed for printing in A0)

I loathe poster sessions, partly because of the unclear concept of who the audience is, and mainly because the social interaction exhausts me.

But to motivate me I gave myself the challenge to design a poster in the style of  Randall Munroe’s Up-Goer Five: Use only the 1000 most used words of English, be concrete, whimsical, accessible, and informative. Munroe has since made a whole book in this style: Thing Explainer, which is quite lovely.

I did the whole thing in OmniGraffle Professional 5, which is hard pressed to handle a document with almost 6000 elements. Drawings are free-hand in Keynote (basically because I don’t have a good way of digitising my free-hand drawings — I am a competent artist, but Munroe’s style is sufficiently simple). The font is xkcd-font from the IPython project, licensed under Creative Commons Attribution-NonCommercial 3.0.

As a starting point I used a 2012 paper by Andreas, Nina, and myself that presents an efficient FPT algorithm for shortest Steiner cycle. That’s as accessible a result as I have. The big drawing at the end shows a path from TEARS to SMILE passing through seven intervening emotions (WORRY, SPITE, etc.). This is a highly nontrivial result and a cute story that I’m insanely happy with, but I haven’t found a good way of expressing this fact in only 1000 words.

Andreas Björklund, Thore Husfeldt, and Nina Taslaman, Shortest cycle through specified elements, Proceedings of the twenty-third annual ACM-SIAM symposium on Discrete Algorithms (SODA 2012),  January 17–19, 2012, Kyoto, Japan, SIAM, pages 1747-1753. [PDF].

Solution to No. 32

Solution to my Danish cryptic crossword no. 32.

Definitions are underlined.
* indicates an anagram, < indicates a reversal.

Vandret (Across)

2 Retsforfølge og lede efter vrøvl igen. (7)
SAGSØGE. SØGE (=lede) efter GAS (vrøvl) retur (=igen)

7 Lyd fra endetarm i avlssvin. (4)
MIAV. Gemt i endetarMIAVlssvin

10 Anret sumak-arrangement fra vestafrikansk land. (10)
MAURETANSK. ANRETSUMAK*

11 To kilo i vinen gør beruset. (7)
DRUKKET. KK (to kilo) i DRUEN

13 Afsætning af glas går tilbage. (4)
SALG. GLAS<

14 Propagere for EU? Bed DR om skrivelse. (7)
UDBREDE. EUBEDDR*

15 Stagnation i Liechtenstein er tidsbegrænset. (6)
INERTI. liechtensteINERTIdsbegrænset

16 Lige fra regnvejr til robåd. (4)
ENER. Hvert andet bogstav fra REGNVEJR

17 Svovl-ledning har forsinket hvad der blev afsat seks dage til. (9)
SKABELSEN. S(vovl) + KABEL + SEN

21 Vi pirater til søs, hvor offentlighed er uønsket. (9)
PIRATERI. VIPIRATER*

25 Storme fra gale aser. (4)
RASE. ASER*.

27 Skinger tuba afbryder stilhed uden at holde tempoet. (6)
RUBATO. TUBA* i (afbryder) RO (stilhed).

29 Gøre hastværk når man restaurerer voliere. (7)
OVERILE. VOLIERE*

30 Dyr fadæse leverer indhold. (4)
ÆSEL. fadÆSELeverer

31 Var fx Finland under den kolde krig et område med god akustik? (7)
LYDLAND. Kryptisk definition, indikeret ved “?”.

32 Besvær i knoglerne, de er slemt betændte! (10)
LEDSMERTER. DEERSLEMT*

33 Klokken er forkert. (4)
URET. Dobbelt definition.

34 Dekanen, efter omorganisation, er den, der genoptager sagen. (7)
ANKENDE. DEKANEN*.

Lodret (Down)

1 Gid prishop skaber en vred kunde, fx. (10)
HIDSIGPROP. GIDPRISHOP*

2 Svagt fra sangerinder fra sjællandsk akademi. (7)
SORANER. SOPRANER uden P (“svagt”).

3 At mule over magisk smykke. (6)
AMULET. ATMULE*.

4 Overførselsindkomst til husdyr i form af tørret frugt. (5)
SUKAT. SU + KAT.

5 Finde på nyt navn til agenda: køber flås. (7)
GENDØBE. aGENDa kØBEr flået (dvs. uden det yderste lag.)

6 Opret selvstændighedsbevægelse og Belgien griner. (7)
ETABLER. ETA + B + LER.

7 Malker forbløffende fisk. (6)
MAKREL. MALKER*

8 Medrivende og ophidsende tennis. (6)
INTENS. TENNIS*.

9 Gav kærtegn, omkring et sekund, men anstrengte sig. (5)
ASEDE. AEDE omkring S.

12 Dejlig forvirret om skingert horn er lavet stål, fx. (10)
JERNHOLDIG. DEJLIG* omkring HORN*

18 Hinduistisk princip om kulde og udstråling. (7)
KARISMA. KARMA om IS.

19 Lot vendte sig godt en halv meter omkring for at se øen. (7)
ATOLLEN. ALEN omkring LOT<

20 To gange en tønde efter aftale. (7)
ENTENTE. EN T(ønde) (to gange) E(efter)

22 Denise, efter et par øl, forbliver meget kølig. (6)
ISENDE. DENISE*

23 Vitser om Puccini, fx. (6)
VERIST. VITSER*.

24 Hård på den gamle måde, indeholder lille risiko. (6)
HASARD. HAARD omkring S (small)

26 Fiskerihavn erhverver delvist skaller. (5)
AVNER. fiskerihAVNERhverver

28 Syntes at nogen fortjente grundtemaets kerne. (5)
UNDTE. grUNDTEmaets.