Tag Archives: technology

The Open Society and Its Digital Enemies I: Knowledge (Lex conference on trustworthy knowledge, Copenhagen 2025)

On 23 April 2025 I had the privilege of speaking at the conference Trustworthy Knowledge in the AI Age, organised by lex.dk, the organisation behind the Danish National Encyclopaedia. (I serve on the Scientific Advisory Board of that organisation.)

Both professionally and privately I spend a lot of time thinking about these issues – information, knowledge, trust, and (artificial) intelligence – far more than can be usefully covered in the 12-minute slot the conference format allowed. As a challenge to myself, I decided to frame my presentation using concepts developed by Karl Popper and David Deutsch, drawn mainly from the former’s essay

Popper, Karl R. (1962). On the Sources of Knowledge and of Ignorance. Philosophy and Phenomenological Research 23 (2):292-293.

English summary

Lightly edited version of YouTube’s automatic captioning (in Danish), auto-translated into English.

This year, I’ve taken on a rhetorical task: to frame my thoughts about the digital society and the relationship between computation and technology within Karl Popper’s famous work The Open Society and Its Enemies. There are at least nine chapters I could talk about (censorship, surveillance, democracy, and so on), but today I’ll focus on only one: knowledge.

So, this will be an introduction to Popper’s concept of knowledge – or his epistemology, to use a fancy word – filtered through my own views. It’s more opinionated than I usually allow myself to be in public talks.

Let’s start with a crash course on networking, internet architecture, and the Internet Protocol (IP).

On your left, we see Ane, the sender. She wonders: What is trustworthy knowledge? And she wants to ask Mads, the recipient, that question. She uses the internet. The IP protocol first breaks Ane’s question into several smaller packets—let’s say four. These packets are then sent through the digital infrastructure: cables, routers, antennas, towers, and so on.

Mads is on a train with a mobile phone. The train is moving. These packets take different paths at different speeds. Some are lost, some are altered, but they finally arrive at Mads. The internet protocol suite ensures this delivery, and it has done so since 1981 – before the web, before mobile phones. It has worked for over a generation and ensures the digital infrastructure does what it should.

Now, what does a packet contain? Besides the actual message fragment (the “payload”), it contains so-called metadata, or “control” data. At the bottom, you can see the sender and receiver addresses. A cell tower or satellite reads the recipient address and sends the packet in the right direction, hoping some other part of the infrastructure forwards it.
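The fragmentation step can be sketched in a few lines of Python. This is a toy model, not the real IPv4 header layout: the field names (src, dst, seq, total) and the fragment helper are illustrative only.

```python
from dataclasses import dataclass

# Toy model of a packet: a payload fragment plus control metadata.
# Field names are illustrative, not the actual IPv4 header layout.
@dataclass
class Packet:
    src: str      # sender address ("Ane")
    dst: str      # receiver address ("Mads")
    seq: int      # position of this fragment in the message
    total: int    # total number of fragments
    payload: str  # the actual message fragment

def fragment(message: str, src: str, dst: str, size: int) -> list:
    """Split a message into fixed-size packets, as IP splits Ane's question."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [Packet(src, dst, i, len(chunks), c) for i, c in enumerate(chunks)]

packets = fragment("What is trustworthy knowledge?", "Ane", "Mads", 8)
# 30 characters in chunks of 8 -> 4 packets, reassembled by sorting on seq
```

With an 8-character fragment size, Ane’s 30-character question splits into exactly four packets, matching the example above.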

That ends the technical part of this lecture. So now, after that much heavy information, it’s time for a joke. On April 1st, 2003, a member of the working group that defines these protocols proposed including the “evil bit” in the protocol metadata. Since there was some extra space in these packets, the idea was to mark packets containing evil information—hack tools, malware, etc. The internet could then prevent their transmission.

If the bit is set to zero, the packet has no evil intent. If set to one, it does.

This is a joke. It’s an April Fools’ prank. Still, it reflects our moral intuition: wouldn’t it be nice if the internet could tell us what information is good or bad, safe or unsafe, trustworthy or not?
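The joke is easy to sketch in code. The flag layout below is a toy of my own (RFC 3514 actually proposed using the one unused bit of the IPv4 header); only the punchline matters:

```python
EVIL_BIT = 0x1  # toy flags field: lowest bit marks "evil intent"

def mark_evil(flags: int) -> int:
    """A law-abiding attacker dutifully sets the evil bit."""
    return flags | EVIL_BIT

def is_evil(flags: int) -> bool:
    """A router 'filters' malicious traffic by inspecting a single bit."""
    return bool(flags & EVIL_BIT)

# The punchline, of course: real attackers simply leave the bit at 0.
```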

My aim is to convince you that this is the wrong question. It is an epistemologically uninformed question with no good answer. Mind you, it’s psychologically hard for us to accept that it’s the wrong question—because it feels like a good one. I feel it too. But asking the question is like falling for the April Fool’s joke.

To illustrate that bad questions can lead us astray, let’s consider another example, from biology: What is the purpose of biodiversity? Many smart people thought hard about this for 2,000 years – Aristotle, Lamarck – and came to the wrong conclusions.

Why? Because it was the wrong question. Darwin’s insight was that there is no plan, no purpose, no “good species”. Instead, today we know how to ask: what is the mechanism? And the answer is: mutation and selection. Or in Popper’s philosophy: conjecture and refutation. So again: smart people spent centuries on the wrong question until someone asked the right one—and that changed our understanding of biology for the better.

Similarly, in Popper’s The Open Society (1945), he addresses another famous wrong question: Who should rule?

This question had occupied moral thinkers for 2,000 years: their answers included the wise philosopher-king, various gods, the proletariat, the sans-culottes, priests, the intellectuals. All these answers were wrong, for the same reason: it’s the wrong question.

Popper’s key insight is that this question leads to tyranny. If someone has unique authority to rule, then protecting that authority becomes virtuous, and criticism therefore becomes immoral. The right question is instead: How can we remove bad rulers? Because mistakes are inevitable, and we need mechanisms to replace those in power.

The answers? Freedom of speech and free elections. That’s the foundation of the open society.

Now, applying this to knowledge: What is the foundation of knowledge? – is also the wrong question. People have tried to answer this for millennia. Plato said knowledge is justified true belief. Rationalists said it’s based on reason. Empiricists said it’s based on experience. And so on. All of these answers conflict with each other. And again, Popper says: they’re all wrong, because the question is wrong.

For knowledge is not based on anything. Instead, knowledge is a guess that has survived criticism. It’s the best explanation we currently have—until it’s refuted.

So: guesses-and-criticism, hypotheses-and-falsification, mutation-and-selection – it’s all the same idea. It’s a very practical philosophy, and it tells us what to do: we need the open society to find correct information, just as we need democracy to replace bad leaders.

Let me summarise: many traditional views of knowledge—like Plato’s or those of rationalists and empiricists—are not just wrong but wrong for the same reason: they assume knowledge must have a foundation.

Popper’s view is that knowledge is not grounded in anything. It is provisional, yet objective.

The two sides are fundamentally opposed:

  • One side says: true knowledge must be protected from misinformation.
  • Popper’s side says: true knowledge must be exposed to criticism.

These lead to very different practical conclusions for how to build an information society.

So let’s return to the internet protocol. If you paid attention to slide 2, you saw that some packets were lost or altered. Mads notices he’s missing the third packet and that the fourth may be corrupted. He sends two messages back: “I didn’t get packet three, and I don’t trust packet four – can you resend them?”

Ane resends them. This happens all the time during internet transfer, whenever you send a message. Even now, for those watching this talk online. Packet loss, message corruption, and error correction are constant and happen in milliseconds. They’re part of the internet protocol.
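This detect-and-resend loop can be sketched minimally, assuming a simple hash as the error-detection metadata (real IP carries a 16-bit header checksum, and retransmission is handled by TCP on top of it; names here are illustrative):

```python
import hashlib

def checksum(payload: str) -> str:
    """Error-detection metadata sent alongside each payload fragment."""
    return hashlib.sha256(payload.encode()).hexdigest()[:8]

def packets_to_resend(sent: dict, received: dict) -> list:
    """Sequence numbers Mads must ask Ane to resend: fragments that
    never arrived, or whose checksum no longer matches the payload."""
    bad = []
    for seq in sent:
        if seq not in received:
            bad.append(seq)                      # lost in transit
        else:
            payload, check = received[seq]
            if checksum(payload) != check:
                bad.append(seq)                  # corrupted in transit
    return sorted(bad)

# Ane sends four fragments, each paired with its checksum.
sent = {i: (p, checksum(p)) for i, p in
        enumerate(["What is ", "trustwor", "thy know", "ledge?"])}
received = dict(sent)
del received[2]                         # packet three never arrives
received[3] = ("ledge!", sent[3][1])    # packet four is altered en route
missing = packets_to_resend(sent, received)   # -> [2, 3]
```

Mads then asks for fragments 2 and 3 (zero-indexed here, i.e. packets three and four) to be resent, and the cycle repeats until the message reassembles cleanly.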

The essential metadata of IP therefore isn’t just the sender and receiver addresses – those are familiar from paper-based postal services. Equally important are the error-detection mechanisms.

Thus, error detection and correction are an indispensable part of the protocol, while the “evil bit” is a joke. If you understand that distinction, I have succeeded in this talk.

To summarize:

There are two opposing conclusions about trustworthiness and information:

  1. The traditional view says trustworthy information is based on good arguments and must be protected.
  2. The Popperian view—which I believe—is that trustworthy information is what has withstood criticism. It’s defined by the absence of counterarguments, not the presence of supporting evidence.

So the ethical obligation becomes: We must preserve methods of error detection.