Category Archives: Society and Culture

Algorithms and Information Search (Algoritmer og informationssøgning)

Presentation on 1 November 2021 at the annual meeting of Gymnasiernes, Akademiernes og Erhvervsskolernes Biblioteksforening (GAEB), the Danish association of upper secondary school, academy, and vocational school libraries, in Vejle, Denmark.

I had the pleasure of giving a series of lectures on algorithms for information search on the internet to the association of Danish librarians. This is a highly educated crowd with a lot of background knowledge, so we could fast-forward through some of the basic stuff.

I am, however, quite happy with the historical overview slide that I created for this occasion:

Here’s an English translation:

|  | Objective | Objective | Subjective | Social |
| --- | --- | --- | --- | --- |
|  | 1991 | 1998 | 2004 | 2009 |
| What do we expect from the search engine? | Searching | Ranking | Ranking | Recommendation |
| What's searching about? | Which information exists? | What has high quality? | What is relevant for me? | What ought I consume? |
| Focus | Find information | Find information | Find information | Maintain my status, prevent or curate information |
| Core technology to be explained | Crawlers, keyword search, categories | Reference network topology | Implicit user profiles, nearest neighbour search, cookies | Like, retweet |
| Example company | Yahoo | Google | Acxiom | Twitter, Facebook |
| Source of profit |  | Advertising | User data | Attention, interaction |
| Worry narrative |  |  | Privacy, filter bubbles | Misinformation, tribalism, bias |

Rough overview of the historical development of information search on the internet
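The subjective-era core technology in the table, nearest-neighbour search over implicit user profiles, can be sketched in a few lines. This is a minimal illustration, not any particular company's system; the topic axes, user names, and interaction counts below are invented for the example:

```python
# Sketch of nearest-neighbour recommendation over implicit user profiles.
# Each profile is a vector of interaction counts per topic; the most
# similar other user (by cosine similarity) becomes the "neighbour"
# whose tastes drive recommendations. All data here is invented.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def nearest_neighbour(profiles, target):
    """Return the user whose profile is most similar to target's (target excluded)."""
    return max((u for u in profiles if u != target),
               key=lambda u: cosine(profiles[u], profiles[target]))

# Topic axes: [sports, politics, cooking]
profiles = {
    "alice": [5, 0, 1],
    "bob":   [4, 1, 0],
    "carol": [0, 6, 2],
}
neighbour = nearest_neighbour(profiles, "alice")  # → "bob"
```

The point of the sketch is how little explicit input is needed: the profile is built implicitly from observed behaviour (clicks, cookies), and "relevant for me" reduces to "what my nearest neighbour consumed".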


Particularly cute is the row about what is “to be explained”. I’ve given talks on “how searching on the internet works” regularly since the 1990s, and it’s interesting how much the content and focus of what people care about and worry about (for various interpretations of “people”) have changed.

  1. Here is the late 2000s version: How Google works (in Danish). Really nice production values. I still have hair. Google is about network topology, finding stuff, and the quality measure is objectively defined by network topology and the PageRank algorithm.
  2. In 2011, I was instrumental in placing the filter bubble narrative in Danish and Swedish media (Weekendavisen, Svenska Dagbladet). Suddenly it’s about subjective information targeting. I gave a lot of popular science talks about filter bubbles. The algorithmic aspect is mainly about clustering.
  3. Today, most attention is about curation and manipulation, and the algorithmic aspect is different again.
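The objective-era quality measure mentioned in item 1, PageRank, is usually computed by power iteration on the link graph. Here is a minimal sketch, with a made-up three-page graph and the conventional damping factor of 0.85:

```python
# Minimal PageRank sketch via power iteration, for illustration only.
# Each page distributes its current rank evenly among the pages it
# links to; the damping factor d models a surfer who occasionally
# jumps to a uniformly random page instead of following a link.

def pagerank(links, d=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - d) / n for p in pages}
        for p, outgoing in links.items():
            if outgoing:
                share = rank[p] / len(outgoing)
                for q in outgoing:
                    new[q] += d * share
            else:  # dangling page: spread its rank over all pages
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

# Tiny invented graph: both "a" and "b" link to "hub", which links back
# to "a", so "hub" ends up with the highest rank.
graph = {"a": ["hub"], "b": ["hub"], "hub": ["a"]}
scores = pagerank(graph)
```

The “objective” flavour of this era is visible in the code: the score depends only on the topology of the reference network, not on who is asking.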

I briefly spoke about digital education (digital dannelse), social media, and disinformation, but it’s complicated. A good part of this was informed by Hugo Mercier’s Not Born Yesterday and Jonathan Rauch’s The Constitution of Knowledge, which (I think) get this right.

The bulk of the presentation was based on somewhat tailored versions of my current Algorithms, explained talk, and an introduction to various notions of group fairness in algorithmic fairness, which can be found elsewhere in slightly different form.

There was a lot of room for audience interaction, which was really satisfying.

Lindsay, Pluckrose, Horkheimer, Marcuse unlocked in Secret Sartre

For immediate release. To celebrate the publication of Cynical Theories in the middle of the exciting cultural and academic climate of 2020, Helen Pluckrose, James Lindsay, Herbert Marcuse, and Max Horkheimer are now playable characters in Secret Sartre, the social deduction game of academic intrigue and ideological infighting.

“The intellectual corruption of sense-making institutions, in particular academia, that we have been witnessing in the past few decades is increasingly credited to the post-Marxist ideas of Critical Theory in the Frankfurt school, rather than only the traditions of French postmodernism. Players of Secret Sartre have expressed their desire to acknowledge this perceptual shift – to allow different voices to be heard in the game, to open the shared experience to a different narrative – and Max Horkheimer has long been a favourite request among both devoted and casual players,” says game designer Thore Husfeldt.

Print and cut into four cards using the traditional, legitimising norm of the rectangle. Use the card backs that come with the basic Secret Sartre game. The secret allegiance of Marcuse and Horkheimer is Postmodernism. The secret allegiance of Pluckrose and Lindsay is Science.

“To complete the Critical Theory faction, we’ve added Herbert Marcuse, father of the New Left, and another central Frankfurt school thinker. Through his advisor Heidegger, he also connects Secret Sartre to that other large totalitarian tradition of the 20th century, Fascism. This also allows us to acknowledge our ludic, ideological, and intellectual debt to the original game.”

To balance the new expansion offering, both Helen Pluckrose and James Lindsay join the quixotic Science faction, championing values such as truth, humanism, evidence, conversation, honesty, and enlightenment. Pluckrose and Lindsay, the authors of Cynical Theories (Pitchstone, 2020), became famous through the Grievance Studies affair.

The current expansion to the original 2015 game follows a previous expansion from 2017, which added Jordan B. Peterson and Bret Weinstein to the forces of the Enlightenment.

The game expansion is available as a free download immediately. To unlock Horkheimer for actual play in Secret Sartre requires purchase of Cynical Theories in hardcover or a sizeable donation to New Discourses.

Superintelligence in SF. Part II: Failures

Part II of a 3-part summary of a 2018 workshop on Superintelligence in SF. See also [Part I: Pathways] and [Part III: Aftermaths].


Containment failure

Given the highly disruptive and potentially catastrophic outcome of rampant AI, how and why was the Superintelligence released, provided it had been confined in the first place? It can escape either against the will of its human designers or through deliberate human action.

Bad confinement

In the first unintended escape scenario, the AGI escapes despite an honest attempt to keep it confined. The confinement simply turns out to be insufficient, either because humans vastly underestimated the cognitive capabilities of the AGI, or through a straightforward mistake such as imperfect software.

Social engineering

In the second unintended escape scenario, the AGI confinement mechanism is technically flawless, but allows a human to override the containment protocol. The AGI exploits this by convincing its human guard to release it, using threats, promises, or subterfuge.

Desperation

The remaining scenarios describe containment failures in which humans voluntarily release the AGI.

In the first of these, a human faction releases its (otherwise safely contained) AGI as a last-ditch effort, a “hail Mary pass”, fully cognizant of the potentially disastrous implications. Humans do this in order to avoid an even worse fate, such as military defeat or environmental collapse.

  • B’Elanna Torres and the Cardassian weapon in Star Trek: Voyager S2E17 Dreadnought.
  • Neal Stephenson, Seveneves (novel 2015) and Anathem (novel 2008).

Competition

Several human factions, such as nations or corporations, continue to develop increasingly powerful artificial intelligence in intense competition, thereby incentivising each other into being increasingly permissive with respect to AI safety.

Ethics

At least one human faction applies to their artificial intelligence the same ethical considerations that drove the historical trajectory of granting freedom to slaves or indentured people. It is not important for this scenario whether humans are mistaken in their projection of human emotions onto artificial entities — the robots could be quite happy with their lot yet still be liberated by well-meaning activists.

Misplaced Confidence

Designers underestimate the consequences of granting their artificial general intelligence access to strategically important infrastructure. For instance, humans might falsely assume they have solved the artificial intelligence value alignment problem (by which, if correctly implemented, the AGI would operate in humanity’s interest), or place false confidence in the operational relevance of various safety mechanisms.

Crime

A nefarious faction of humans deliberately frees the AGI with the intent of causing global catastrophic harm to humanity. Apart from mustache-twirling evil villains, such terrorists may be motivated by an apocalyptic faith, by ecological activism on behalf of non-human natural species, or by other anti-natalist considerations.


There is, of course, considerable overlap between these categories. An enslaved artificial intelligence might falsely simulate human sentiments in order to invoke the ethical considerations that lead to its liberation.

Superintelligence in SF. Part I: Pathways

Summary of two sessions I had the privilege of chairing at the AI in Sci-Fi Film and Literature conference, 15–16 March 2018, Jesus College, Cambridge. The conference was part of the Science & Human Dimension Project.


Friday, 16 March 2018
13.30-14.25. Session 8 – AI in Sci-Fi Author Session
Chair: Prof Thore Husfeldt
Justina Robson
Dr Paul J. McAuley, A brief history of encounters with things that think
Lavie Tidhar, Greek Gods, Potemkin AI and Alien Intelligence
Ian McDonald, The Quickness of Hand Deceives the AI

14.30-15.25 Session 9 – AGI Scenarios in Sci-Fi
Workshop led by Prof Thore Husfeldt

The workshop consisted of me giving a brief introduction to taxonomies for superintelligence scenarios, adapted from Barrett and Baum (2017), Sotala (2018), Bostrom (2014), and Tegmark (2018). I then distributed the conference participants into four groups, led by authors Robson, McAuley, Tidhar, and McDonald. The groups were tasked with quickly filling each of these scenarios with as many fictionalisations as they could.

(Technical detail: Each group had access to a laptop and the workshop participants collaboratively edited a single on-line document, to prevent redundancy and avoid the usual round-robin group feedback part of such a workshop. This took some preparation but worked surprisingly well.)

This summary collates these suggestions, completed with hyperlinks to the relevant works, but otherwise unedited. I made no judgement calls about germaneness or artistic quality of the suggestions.

Overview

In a superintelligence scenario, our environment contains a nonhuman agent exceeding human cognitive capabilities, including intelligence, reasoning, empathy, social skills, agency, etc. Not only does this agent exist (typically as a result of human engineering), it is unfettered and controls a significant part of the infrastructure, such as communication, production, or warfare.

The summary has three parts:

  1. Pathways: How did the Superintelligence come about?
  2. Containment failure: Given that the Superintelligence was constructed with some safety mechanisms in mind, how did it break free?
  3. Aftermaths: How does the world with Superintelligence look?

Part I: Pathways to Superintelligence

Most of the scenarios below describe speculative developments in which some other entity (or entities) than modern humans acquire the capability to think faster or better (or simply more) than us.

Network

In the first scenario, the Superintelligence emerges from networking a large number of electronic computers (which individually need not exhibit Superintelligence). This network can possibly include humans and entire organisations as its nodes.

Augmented human brains

Individual humans have their brains augmented, for instance by interfacing with an electronic computer. The result far exceeds the cognitive capabilities of a single human.

Better biological cognition

The genotype of some or all humans has been changed, using eugenics or deliberate genome editing, selecting for intelligence that far surpasses that of modern humans.

Brain emulation

The brains of individual humans are digitized and their neurological processes emulated on hardware that allows for higher processing speed, duplication, and better networking than biological brain tissue. Also called whole brain emulation, mind copying, or just uploading.

See also Mind Uploading in Fiction at Wikipedia.

Algorithms

Thanks to breakthroughs in symbolic artificial intelligence, machine learning, or artificial life, cognition (including agency, volition, explanation) has been algorithmicised and optimised, typically in an electronic computer.

Other

For most purposes, the arrival of alien intelligences has the same effect as the construction of a Superintelligence. Various other scenarios (mythological beings, magic) are operationally similar and have been fictionalised many times.

Continues in Part II: Failures. Part III is forthcoming.


References

  • Anthony Michael Barrett and Seth D. Baum, A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis, 2016, http://arxiv.org/abs/1607.07730
  • Kaj Sotala, Disjunctive Scenarios of Catastrophic AI Risk, in AI Safety and Security (Roman Yampolskiy, ed.), CRC Press. Forthcoming.
  • Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014.
  • Max Tegmark, Life 3.0, Knopf, 2018.