
Superintelligence in SF. Part II: Failures

Part II of a 3-part summary of a 2018 workshop on Superintelligence in SF. See also [Part I: Pathways] and [Part III: Aftermaths].

Containment failure

Given the highly disruptive and potentially catastrophic outcome of rampant AI, how and why was the Superintelligence released, provided it had been confined in the first place? It can escape either against the will of its human designers or through deliberate human action.

Bad confinement

In the first unintended escape scenario, the AGI escapes despite an honest attempt to keep it confined. The confinement simply turns out to be insufficient, either because humans vastly underestimated the cognitive capabilities of the AGI, or because of a straightforward mistake such as imperfect software.

Social engineering

In the second unintended escape scenario, the AGI confinement mechanism is technically flawless, but allows a human to override the containment protocol. The AGI exploits this by convincing its human guard to release it, using threats, promises, or subterfuge.


The remaining scenarios describe containment failures in which humans voluntarily release the AGI.

In the first of these, a human faction releases its (otherwise safely contained) AGI as a last-ditch effort, a “hail Mary pass”, fully cognizant of the potentially disastrous implications. Humans do this in order to avoid an even worse fate, such as military defeat or environmental collapse.

  • B’Elanna Torres and the Cardassian weapon in Star Trek: Voyager S2E17 Dreadnought.
  • Neal Stephenson, Seveneves (novel 2015) and Anathem (novel 2008).


Several human factions, such as nations or corporations, continue to develop increasingly powerful artificial intelligence in intense competition, thereby incentivising each other to become increasingly permissive with respect to AI safety.


At least one human faction applies to their artificial intelligence the same ethical considerations that drove the historical trajectory of granting freedom to slaves or indentured people. It is not important for this scenario whether humans are mistaken in their projection of human emotions onto artificial entities — the robots could be quite happy with their lot yet still be liberated by well-meaning activists.

Misplaced confidence

Designers underestimate the consequences of granting their artificial general intelligence access to strategically important infrastructure. For instance, humans might falsely assume they have solved the artificial intelligence value alignment problem (by which, if correctly implemented, the AGI would operate in humanity’s interest), or have false confidence in the operational relevance of various safety mechanisms.


A nefarious faction of humans deliberately frees the AGI with the intent of causing global catastrophic harm to humanity. Apart from mustache-twirling evil villains, such terrorists may be driven by an apocalyptic faith, by ecological activism on behalf of non-human natural species, or by other anti-natalist considerations.

There is, of course, considerable overlap between these categories. An enslaved artificial intelligence might falsely simulate human sentiments in order to invoke the ethical considerations that lead to its liberation.

Superintelligence in SF. Part I: Pathways

Summary of two sessions I had the privilege of chairing at the AI in Sci-Fi Film and Literature conference, 15–16 March 2018, Jesus College, Cambridge. The conference was part of the Science & Human Dimension Project.


Friday, 16 March 2018
13.30-14.25. Session 8 – AI in Sci-Fi Author Session
Chair: Prof Thore Husfeldt
Justina Robson
Dr Paul J. McAuley A brief history of encounters with things that think
Lavie Tidhar, Greek Gods, Potemkin AI and Alien Intelligence
Ian McDonald, The Quickness of Hand Deceives the AI

14.30-15.25 Session 9 – AGI Scenarios in Sci-Fi
Workshop led by Prof Thore Husfeldt

The workshop consisted of me giving a brief introduction to taxonomies for superintelligence scenarios, adapted from Barrett and Baum (2017), Sotala (2018), Bostrom (2014), and Tegmark (2018). I then distributed the conference participants into 4 groups, led by authors Robson, McAuley, Tidhar, and McDonald. The groups were tasked with quickly filling each of these scenarios with as many fictionalisations as they could.

(Technical detail: Each group had access to a laptop and the workshop participants collaboratively edited a single on-line document, to prevent redundancy and avoid the usual round-robin group feedback part of such a workshop. This took some preparation but worked surprisingly well.)

This summary collates these suggestions, complete with hyperlinks to the relevant works, but otherwise unedited. I made no judgement calls about the germaneness or artistic quality of the suggestions.


In a superintelligence scenario, our environment contains a nonhuman agent exceeding human cognitive capabilities, including intelligence, reasoning, empathy, social skills, agency, etc. Not only does this agent exist (typically as a result of human engineering), it is unfettered and controls a significant part of the infrastructure, such as communication, production, or warfare.

The summary has three parts:

  1. Pathways: How did the Superintelligence come about?
  2. Containment failure: Given that the Superintelligence was constructed with some safety mechanisms in mind, how did it break free?
  3. Aftermaths: How does the world with Superintelligence look?

Part I: Pathways to Superintelligence

Most of the scenarios below describe speculative developments in which some entity (or entities) other than modern humans acquires the capability to think faster or better (or simply more) than us.


In the first scenario, the Superintelligence emerges from networking a large number of electronic computers (which individually need not exhibit Superintelligence). This network can possibly include humans and entire organisations as its nodes.

Augmented human brains

Individual humans have their brains augmented, for instance by interfacing with an electronic computer. The result far exceeds the cognitive capabilities of a single human.

Better biological cognition

The genotype of some or all humans has been changed, using eugenics or deliberate genome editing, selecting for higher intelligence that far surpasses modern humans.

Brain emulation

The brains of individual humans are digitized and their neurological processes emulated on hardware that allows for higher processing speed, duplication, and better networking than biological brain tissue. Also called whole brain emulation, mind copying, or just uploading.

See also Mind Uploading in Fiction at Wikipedia.


Thanks to breakthroughs in symbolic artificial intelligence, machine learning, or artificial life, cognition (including agency, volition, explanation) has been algorithmicised and optimised, typically in an electronic computer.


For most purposes, the arrival of alien intelligences has the same effect as the construction of a Superintelligence. Various other scenarios (mythological beings, magic) are operationally similar and have been fictionalised many times.

Continues in Part II: Failures. Part III is forthcoming.


  • Anthony Michael Barrett and Seth D. Baum, A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis, 2016, http://arxiv.org/abs/1607.07730
  • Kaj Sotala, Disjunctive Scenarios of Catastrophic AI Risk, in AI Safety and Security (Roman Yampolskiy, ed.), CRC Press. Forthcoming.
  • Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014.
  • Max Tegmark, Life 3.0, Knopf, 2018.

Useless side plot with Rose and Finn both acting weird! (5,5)

Inspired, after a fashion, by The Last Jedi, Episode VIII of the Star Wars movies, here’s a cryptic crossword. Enjoy!


As a PDF: no35

1 Blaster Solo mostly nudged by mistake. (7)
5 Just in time, pervading magical force comes back, allowing clear dialogue. (7)
9 Glaring farce, bad editing without direction. (9)
10 Part of film containing origin of Ben Solo. (5)
11 Watch cyborg’s central turret structure. (6)
12 Kick oneself for taking side—postmodern garbage. (7)
14 Current leaders of rebellion investigate Sarlacc’s orifice. (4)
15 In retrospect, volcano remade operatic Star Wars character. (3,7)
19 Useless side plot with Rose and Finn both acting weird! (5,5)
20 Maybe white males immediately understand agenda. (4)
22 Expand General Grievous. (7)
25 Want Vader to embrace right side of Force. (6)
27 Acid dissolves badly-written minion without explanation, ultimately. (5)
28 Gender-bending ruined tale. Which function did Threepio serve? (9)
29 At the end of the day, Jedi virtues were misunderstood. (7)
30 Hostile alien’s stones. (7)

1 Obi-Wan looks like one who boldly shows it. (4)
2 First off, contrarian lambasted storytelling. (9)
3 Perceptive newspaper journalist supports infuriating, unsubstantial review. (6)
4 They are filled with lifeless actors, like first prequel or clone wars. (9)
5 Cassian conjunctions. (5)
6 Boring writing without emotion ultimately sucks. (8)
7 Made snide remark about dumb Last Jedi. (5)
8 Period of mother-in-law’s hesitation after supreme leader finally confused two characters. (10)
13 Terrible direction without point or taste. (10)
16 Remove classic monster? (9)
17 Alt-right teen stuck in tirade; not salient. (9)
18 Problem for armored Stormtrooper: stop odor eruption. (8)
21 Pilot mostly followed by the Spanish fictional sibling. (6)
23 Dishonest apprentice girl embodies passive female principle. (5)
24 Return of Jedi’s second space endeavour. (5)
26 Wagers Count Snoke’s head. (4)