
Will Code for Drinks @ ScrollBar 2018

Homemade poster for the event

I am foremost a teacher, and I care a lot about introductory programming, computational thinking, algorithms, and the social environment of a university.

As an experiment in “social coding,” we ran an event called “Will Code for Drinks @ ScrollBar” in the Friday bar at the IT University of Copenhagen on 23 November 2018. The basic idea is to get beginning programmers—this includes first-semester students and professors—together for a few hours, solve some well-defined programming exercises, and get a drink for each solved exercise.

Preparation

The idea was born over a couple of lunches with colleagues, and thanks to the enthusiasm of Martin Aumüller and Troels Lund it quickly developed momentum.

The platform we used for this is Kattis (open.kattis.com), a well-established system developed for programming competitions. Kattis comes with thousands of extremely well-crafted exercises, a reliable server with an accessible web interface, and very simple procedures for registering individuals, forming teams, and hosting contests.

Preparation included:

  1. Establishing rapport with the ITU Friday bar—a student-run organisation that hosts a social event every Friday afternoon during the semester. They loved the idea, and we found a free date on which no other ScrollBar event or theme (Halloween) was planned, and no other ITU event (Board Game Night!) was scheduled elsewhere in the house by one of the many other social committees.
  2. Learning Kattis and solving dozens of exercises there in order to find a selection of problems for the event. The idea was that students with very little programming experience should be able to solve at least one exercise, possibly with help, during the event. The selection we ended up with was Baby Bites, Spavanac, the lovely Trik, and the somewhat harder Bank Queue. We figured that three drinks are enough for anybody, and wanted a more challenging problem to keep everybody engaged during the event.
  3. Spreading the word among first-year programming teachers and their students. During the weeks leading up to the event, our enthusiasm infected several students, who registered on Kattis and started grinding in preparation.
  4. Decisions, decisions… should the event be called “Will Code for Beer” instead? Drink or drinks? How competitive should we make it?
  5. Creating a visual identity. My original plan was to use a lot of images of various people holding cardboard “Will Code for Drinks” signs. In the end, it was too much work (after talking to the communications department about GDPR), and I cobbled together a clean visual identity on my computer in a couple of minutes. Another reason to reject the hobo theme is that it might repel students who are concerned about appearances.
  6. Finding some way to pay for the resulting bar tab that would be acceptable to the Accounting and Finances section.
  7. Designing and ordering caps so that the assistants (Martin, Troels, and myself) would be visible during the contest. Alas, the caps were not delivered on time.
An early experiment in developing a visual identity for the event. In the end, I rejected this direction, despite the great resonance among students and colleagues, for purely aesthetic reasons.

During the event

The event was “just” a contest on the open Kattis server: https://open.kattis.com/contests/f4ktq9

The moment the contest started at 15:30, we had most of the students in the same room adjacent to the bar, so we could help with Kattis registration, logging in, reading from standard input, etc. After that, participants slowly moved into ScrollBar and the ITU Atrium. We kept ourselves visible and available, and helped with programming and problem solving.
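
For readers curious about the level of entry, here is a minimal Python solution, sketched from memory of the easiest of the four problems, Baby Bites; the exact input format and output strings should be double-checked against the problem statement on Kattis.

    import sys

    def main():
        # Baby Bites, paraphrased: n words should spell out 1, 2, ..., n,
        # except that any word may have been replaced by "mumble".
        data = sys.stdin.read().split()
        n = int(data[0])
        words = data[1:n + 1]
        ok = all(w == "mumble" or w == str(i)
                 for i, w in enumerate(words, start=1))
        print("makes sense" if ok else "something is fishy")

    if __name__ == "__main__":
        main()

Kattis runs each submission against hidden test files and compares the program’s standard output with the expected answer, which is why even a beginner-friendly problem forces students to get input parsing and output formatting exactly right.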

After

I spent a few hours writing emails to individual groups that I had talked to during the contest, explaining other approaches to specific tasks. Then I sent a brief thank-you note to all participants I could identify, inviting feedback and suggestions for improvement. This was quite tedious: I had to find participants who had registered under their own name on Kattis and whose name I could uniquely match in the ITU student roster.

Evaluation

This was supposed to be a test run, and I had hoped for 5 teams of students. In reality, slightly over 50 teams registered, with 130 participants. Stunning success!

Will Code for Drinks @ ScrollBar 2018 in full activity. Photo by Troels Lund.

Of the participants I was able to identify, 48 were first-semester students, the intended target group. More than half of the students came from degree programmes hosted by the Computer Science department, but all of ITU’s student populations were represented. 45 teams solved at least one problem, and 35 teams solved three. 10 teams solved all four problems; this includes the teams consisting of faculty members and Ph.D. students. Phew!

In the end, the “damage” was 183 beers, 80 cocktails, and 8 soft drinks. In total, students solved 132 programming exercises in 2.5 hours, and fun was had. As a teacher, I couldn’t be happier.

After just a few weeks on the platform, ITU is now the second-largest and second-ranked Danish university on Kattis. Aarhus is still way ahead.

Future

I would love to make this event even more social and less competitive. An idea that came up during the contest was to have the scoreboard ranked by “most recent solve” rather than “number of solves”. That way, every team gets to be at the top at least once. Removing the scoreboard entirely is another option, but that removes the shared digital forum – in effect, all the teams would exist in their own little bubble.
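
The re-ranking itself is a one-line change: sort by the time of each team’s most recent accepted submission instead of by the number of solves. A toy sketch in Python (the team names and the data shape are invented for illustration):

    # Each entry: (team name, number of solves, minute of most recent solve).
    standings = [
        ("Team A", 3, 95),
        ("Team B", 1, 140),
        ("Team C", 4, 120),
    ]

    # Classic ranking: most solves first.
    by_solves = sorted(standings, key=lambda t: t[1], reverse=True)

    # "Most recent solve" ranking: whoever solved something last is on top.
    by_recency = sorted(standings, key=lambda t: t[2], reverse=True)

    print([t[0] for t in by_solves])   # ['Team C', 'Team A', 'Team B']
    print([t[0] for t in by_recency])  # ['Team B', 'Team C', 'Team A']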

The best idea we’ve come up with in this vein is to couple the teams with music playlists. Then the current leader (i.e., the team that most recently solved a problem) would decide which music is played in the bar. “Will Code for Drinks and Music” or “Will Code for Drinks and Rick Roll” or something. To make this work, we need a more advanced registration system, and we’d need to scrape the standings off the Kattis server. All doable.
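
For the scraping part, here is a hedged sketch: as far as I know, Kattis has no public standings API, so the code below assumes the scoreboard is served as an ordinary HTML table. The URL path and the selectors are assumptions that would have to be checked against the actual contest page.

    import requests
    from bs4 import BeautifulSoup

    # Hypothetical URL; the real standings path would need to be verified.
    STANDINGS_URL = "https://open.kattis.com/contests/f4ktq9/standings"

    def scoreboard():
        """Return the scoreboard as a list of rows of cell texts."""
        html = requests.get(STANDINGS_URL, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        rows = []
        for tr in soup.select("table tr"):  # assumed layout: one <tr> per team
            cells = [td.get_text(strip=True) for td in tr.find_all("td")]
            if cells:
                rows.append(cells)
        return rows

    if __name__ == "__main__":
        for row in scoreboard():
            print(row)

Polling this every minute and cross-referencing the leading team against a registration table of playlists would be enough to drive the bar’s speakers.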

Another improvement would be to have our own, ITU- or ScrollBar-branded problems instead of relying on (often well-known) problems from the Kattis pool. We could switch to a system other than Kattis (or build our own), but that is a lot of work, and there is intrinsic value in incentivising students to register on Kattis.

No matter the form, we will certainly do this again in Spring 2019!

A Glimpse of Algorithmic Fairness

Workshop presentation at Ethical, legal & social consequences of artificial intelligence, Network for Artificial Intelligence and Machine Learning at Lund University (AIML@LU), 22 November 2018.

Abstract

Several recent results in algorithms address questions of algorithmic fairness — how can fairness be axiomatised and measured, to what extent can bias in data capture or decision making be identified and remedied, how can different conceptualisations of fairness be aligned, and which ones can be simultaneously satisfied? What can be done, and what are the logical and computational limits?

I give a very brief overview of some recent results in the field, aimed at an audience assumed to be innocent of algorithmic thinking. The presentation includes a brief description of the place of the field of algorithms among other disciplines, and of the mindset of algorithmic or computational thinking. The talk includes pretty shapes that move about in order to communicate some intuition about the results, but is otherwise unapologetic about the fact that the arguments are ultimately formal and precise, which is important for addressing fairness in a transparent and accountable fashion.

References

Toon Calders, Sicco Verwer: Three naive Bayes approaches for discrimination-free classification. Data Min. Knowl. Discov. 21(2): 277-292 (2010). [PDF at author web page]

Alexandra Chouldechova: Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. [arXiv:1703.00056]

Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, Richard S. Zemel: Fairness through awareness. Innovations in Theoretical Computer Science 2012: 214-226. [arXiv:1104.3913]

Michael Feldman, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, Suresh Venkatasubramanian: Certifying and Removing Disparate Impact. Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, NSW, Australia, August 10-13, 2015. [arXiv:1412.3756]

Sorelle A. Friedler, Carlos Scheidegger, Suresh Venkatasubramanian: On the (im)possibility of fairness. [arXiv:1609.07236]

Úrsula Hébert-Johnson, Michael P. Kim, Omer Reingold, Guy N. Rothblum: Multicalibration: Calibration for the (Computationally-Identifiable) Masses. Int. Conf. Machine Learning 2018: 1944-1953. [Proceedings PDF]

Jon M. Kleinberg, Sendhil Mullainathan, Manish Raghavan: Inherent Trade-Offs in the Fair Determination of Risk Scores. Innovations in Theoretical Computer Science 2017: 43:1-43:23. [arXiv:1609.05807]


(The image at the top, the title slide of my presentation, shows a masterpiece of the early Renaissance, Fra Angelico’s The Last Judgement (ca. 1430), illustrating a binary classifier with perfect data access and unlimited computational power.)

Excellence in Teaching, ITU 2018

I’m really proud of having received ITU’s Award for Excellence in Teaching for 2018. I am first and foremost a teacher, and view education as my most meaningful task. (It’s also the only thing that I am really good at.) Having my work recognised is immensely satisfying.

 


ITU Vice Chancellor Mads Tofte presents the award. 23 September 2018.

Here is the laudation from Vice Chancellor Mads Tofte:

Award for excellence in teaching

Every year, ITU awards a few teachers its Award for Excellence in Teaching. We do so based on what students have said about teachers in their evaluations. It is very difficult to choose, because students say so many positive things about so many different teachers. But we have arrived at the following, one from each of our three departments:

Associate Professor Thore Husfeldt of the Department of Computer Science

During the past year, Thore has been teaching “Algorithm Design”, “Algorithms and Data Structures” and “Foundations of Computing – Algorithms and Data Structures”. Here are some of the things that students write about Thore:

  • Absolute rock star
  • Best lecturer on ITU, hands down
  • Thore makes things, that seemed very complicated when reading the book, seem easy peasy when he explains them at the lectures.
  • I really like the part of the teaching where we vote by a show of hands on which answer to a question we think is correct. Typically, this makes Thore grasp what it is he needs to elaborate on.
  • The black board exercises in the lectures are a brilliant break in between the coding and explaining.
  • Very engaged! Obviously very capable and passionate about teaching this subject
  • It is the one class I don’t miss, even if it’s at 8am and that’s because of the quality of the teacher

Superintelligence in SF. Part III: Aftermaths

Part III of a 3-part summary of a 2018 workshop on Superintelligence in SF. See also [Part I: Pathways] and [Part II: Failures].

The third and final part of this apocalyptic taxonomy describes the outcome, or aftermath, of the emergence and liberation of an artificial superintelligence. The list of scenarios is taken from Tegmark (2018). In my introductory slide, I tried to roughly order these scenarios along two axes: the capability of the superintelligence and the degree of human control.

 


These scenarios are not clearly delineated, nor are they comprehensive. There is a longer description in Chapter 5 of Tegmark (2018). Another summary is at AI Aftermath Scenarios at the Future of Life Institute blog, where you can also find the results of a survey about which scenario to prefer.


Collapse

Intelligent life on Earth becomes extinct before a superintelligence is ever developed, because civilisation brings about its own demise by means other than an AI apocalypse.

Reversion

Society has chosen deliberate technological regression, so as to forever forestall the development of superintelligence. In particular, people have abandoned and outlawed research and development in relevant technologies, including many discoveries of the industrial and digital ages, possibly even the scientific method. This decision may be a reaction to a previous near-catastrophic experience with such technology.

Turing Police

Superintelligence has not been developed, and societies have strict control mechanisms that prevent research and development into relevant technologies. This may be enforced by a totalitarian state using the police or universal surveillance.

Tegmark’s own label for this scenario is “1984,” which was universally rejected by the workshop.

Egalitarian Utopia

Society includes humans, some of whom are technologically modified, and uploads. The potential conflicts arising from productivity differentials between these groups are avoided by abolishing property rights.

Libertarian Utopia

Society includes humans, some of whom may be technologically modified, and uploads. Biological life and machine life have segregated into different zones. The economy is almost entirely driven by the fantastically more efficient uploads. Biological humans peacefully coexist alongside the machine zones and benefit from trading with them; the economic, technological, and scientific output of humans is irrelevant.

Gatekeeper AI

A single superintelligence has been designed. The value alignment problem has been resolved in the direction that the superintelligence has one single goal: to prevent the emergence of a second superintelligence, and to interfere as little as possible with human affairs. This scenario differs from the Turing Police scenario in the number of superintelligences actually constructed (one rather than zero) and need not involve a police state.

Descendants

The superintelligence has come about by a gradual modification of modern humans. Thus, there is no conflict between the factions of “existing biological humans” and “the superintelligence” – the latter is simply the descendant life form of the former. “They” are “we”, or rather “our children”. Twenty-first-century Homo sapiens is long extinct, voluntarily, just as each generation of parents faces extinction.

Enslaved God

The remaining scenarios all assume a superintelligence of vastly superhuman intellect. They differ in how much humans are “in control.”

In the Enslaved God scenario, the safety problems of developing superintelligence (control, value alignment) have been solved. The superintelligence is a willing, benevolent, and competent servant to its human masters.

Protector God

The superintelligence wields significant power, but remains friendly and discreet, nudging humanity in the right direction without being too obvious about it. Humans retain an illusion of control; their lives remain challenging and feel meaningful.

Benevolent Dictator

The superintelligence is in control, and openly so. The value alignment problem is solved in humanity’s favour, and the superintelligence ensures human flourishing. People are content and entertained. Their lives are free of hardship or even challenge.

Zookeeper

The omnipotent superintelligence ensures that humans are fed and safe, maybe even healthy. Human lives are comparable to those of zoo animals: they feel unfree, may be enslaved, and are significantly less happy than modern humans.

Conquerors

The superintelligence has not kept humans around. Humanity is extinct and has left no trace.


Oz

Workshop participants quickly observed the large empty space in the lower left corner of the diagram! In that corner, no superintelligence has been developed, yet the (imagined) superintelligence would be in control.


Other fictional AI tropes are out of scope, in particular the development of indentured, mundane artificial intelligences that may outperform humans in specific cognitive tasks (such as C-3PO’s language facility or many spaceship computers) without otherwise exhibiting superior reasoning skills.

 

Superintelligence in SF. Part II: Failures

Part II of a 3-part summary of a 2018 workshop on Superintelligence in SF. See also [Part I: Pathways] and [Part III: Aftermaths].


Containment failure

Given the highly disruptive and potentially catastrophic outcome of rampant AI, how and why was the Superintelligence released, provided it had been confined in the first place? It either escapes against the will of its human designers or is released by deliberate human action.

Bad confinement

In the first unintended escape scenario, the AGI escapes despite an honest attempt to keep it confined. The confinement simply turns out to be insufficient, either because humans vastly underestimated the cognitive capabilities of the AGI, or because of a straightforward mistake such as imperfect software.

Social engineering

In the second unintended escape scenario, the AGI confinement mechanism is technically flawless, but allows a human to override the containment protocol. The AGI exploits this by convincing its human guard to release it, using threats, promises, or subterfuge.

Desperation

The remaining scenarios describe containment failures in which humans voluntarily release the AGI.

In the first of these, a human faction releases its (otherwise safely contained) AGI as a last-ditch effort, a “Hail Mary pass”, fully cognizant of the potentially disastrous implications. Humans do this in order to avoid an even worse fate, such as military defeat or environmental collapse.

  • B’Elanna Torres and the Cardassian weapon in Star Trek: Voyager S2E17 Dreadnought.
  • Neal Stephenson, Seveneves (novel 2015) and Anathem (novel 2008).

Competition

Several human factions, such as nations or corporations, continue to develop increasingly powerful artificial intelligence in intense competition, thereby incentivising each other into being increasingly permissive with respect to AI safety.

Ethics

At least one human faction applies to their artificial intelligence the same ethical considerations that drove the historical trajectory of granting freedom to slaves or indentured people. It is not important for this scenario whether humans are mistaken in their projection of human emotions onto artificial entities — the robots could be quite happy with their lot yet still be liberated by well-meaning activists.

Misplaced Confidence

Designers underestimate the consequences of granting their artificial general intelligence access to strategically important infrastructure. For instance, humans might falsely assume they have solved the value alignment problem (by which, if correctly implemented, the AGI would operate in humanity’s interest), or place false confidence in various safety mechanisms.

Crime

A nefarious faction of humans deliberately frees the AGI with the intent of causing global catastrophic harm to humanity. Apart from mustache-twirling evil villains, such terrorists may be motivated by apocalyptic faith, by ecological activism on behalf of non-human natural species, or by other anti-natalist considerations.


There is, of course, considerable overlap between these categories. An enslaved artificial intelligence might falsely simulate human sentiments in order to invoke the ethical considerations that lead to its liberation.

Superintelligence in SF. Part I: Pathways

Summary of two sessions I had the privilege of chairing at the AI in Sci-Fi Film and Literature conference, 15–16 March 2018, Jesus College, Cambridge. The conference was part of the Science & Human Dimension Project.


Friday, 16 March 2018
13.30-14.25. Session 8 – AI in Sci-Fi Author Session
Chair: Prof Thore Husfeldt
Justina Robson
Dr Paul J. McAuley, A brief history of encounters with things that think
Lavie Tidhar, Greek Gods, Potemkin AI and Alien Intelligence
Ian McDonald, The Quickness of Hand Deceives the AI

14.30-15.25. Session 9 – AGI Scenarios in Sci-Fi
Workshop led by Prof Thore Husfeldt

The workshop consisted of me giving a brief introduction to taxonomies for superintelligence scenarios, adapted from Barrett and Baum (2017), Sotala (2018), Bostrom (2014), and Tegmark (2018). I then distributed the conference participants into four groups, led by authors Robson, McAuley, Tidhar, and McDonald. The groups were tasked with quickly filling each of these scenarios with as many fictionalisations as they could.

(Technical detail: Each group had access to a laptop and the workshop participants collaboratively edited a single on-line document, to prevent redundancy and avoid the usual round-robin group feedback part of such a workshop. This took some preparation but worked surprisingly well.)

This summary collates these suggestions, complete with hyperlinks to the relevant works, but otherwise unedited. I made no judgement calls about the germaneness or artistic quality of the suggestions.

Overview

In a superintelligence scenario, our environment contains a nonhuman agent exceeding human cognitive capabilities, including intelligence, reasoning, empathy, social skills, agency, etc. Not only does this agent exist (typically as a result of human engineering), it is unfettered and controls a significant part of the infrastructure, such as communication, production, or warfare.

The summary has three parts:

  1. Pathways: How did the Superintelligence come about?
  2. Containment failure: Given that the Superintelligence was constructed with some safety mechanisms in mind, how did it break free?
  3. Aftermaths: How does the world with Superintelligence look?

Part I: Pathways to Superintelligence

Most of the scenarios below describe speculative developments in which some entity (or entities) other than modern humans acquires the capability to think faster or better (or simply more) than us.

Network

In the first scenario, the Superintelligence emerges from networking a large number of electronic computers (which individually need not exhibit Superintelligence). This network can possibly include humans and entire organisations as its nodes.

Augmented human brains

Individual humans have their brains augmented, for instance by interfacing with an electronic computer. The result far exceeds the cognitive capabilities of a single human.

Better biological cognition

The genotype of some or all humans has been changed, using eugenics or deliberate genome editing, selecting for intelligence that far surpasses that of modern humans.

Brain emulation

The brains of individual humans are digitized and their neurological processes emulated on hardware that allows for higher processing speed, duplication, and better networking than biological brain tissue. Also called whole brain emulation, mind copying, or just uploading.

See also Mind Uploading in Fiction at Wikipedia.

Algorithms

Thanks to breakthroughs in symbolic artificial intelligence, machine learning, or artificial life, cognition (including agency, volition, explanation) has been algorithmicised and optimised, typically in an electronic computer.

Other

For most purposes, the arrival of alien intelligences has the same effect as the construction of a Superintelligence. Various other scenarios (mythological beings, magic) are operationally similar and have been fictionalised many times.

Continues in Part II: Failures and Part III: Aftermaths.


References

  • Anthony Michael Barrett, Seth D. Baum: A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis, 2016. http://arxiv.org/abs/1607.07730
  • Kaj Sotala (2018): Disjunctive Scenarios of Catastrophic AI Risk. In AI Safety and Security (Roman Yampolskiy, ed.), CRC Press. Forthcoming.
  • Nick Bostrom: Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014.
  • Max Tegmark: Life 3.0, Knopf, 2018.

Useless side plot with Rose and Finn both acting weird! (5,5)

Inspired, after a fashion, by The Last Jedi, Episode VIII of the Star Wars movies, here’s a cryptic crossword. Enjoy!


As a PDF: no35

Across
1 Blaster Solo mostly nudged by mistake. (7)
5 Just in time, pervading magical force comes back, allowing clear dialogue. (7)
9 Glaring farce, bad editing without direction. (9)
10 Part of film containing origin of Ben Solo. (5)
11 Watch cyborg’s central turret structure. (6)
12 Kick oneself for taking side—postmodern garbage. (7)
14 Current leaders of rebellion investigate Sarlacc’s orifice. (4)
15 In retrospect, volcano remade operatic Star Wars character. (3,7)
19 Useless side plot with Rose and Finn both acting weird! (5,5)
20 Maybe white males immediately understand agenda. (4)
22 Expand General Grievous. (7)
25 Want Vader to embrace right side of Force. (6)
27 Acid dissolves badly-written minion without explanation, ultimately. (5)
28 Gender-bending ruined tale. Which function did Threepio serve? (9)
29 At the end of the day, Jedi virtues were misunderstood. (7)
30 Hostile alien’s stones. (7)

Down
1 Obi-Wan looks like one who boldly shows it. (4)
2 First off, contrarian lambasted storytelling. (9)
3 Perceptive newspaper journalist supports infuriating, unsubstantial review. (6)
4 They are filled with lifeless actors, like first prequel or clone wars. (9)
5 Cassian conjunctions. (5)
6 Boring writing without emotion ultimately sucks. (8)
7 Made snide remark about dumb Last Jedi. (5)
8 Period of mother-in-law’s hesitation after supreme leader finally confused two characters. (10)
13 Terrible direction without point or taste. (10)
16 Remove classic monster? (9)
17 Alt-right teen stuck in tirade; not salient. (9)
18 Problem for armored Stormtrooper: stop odor eruption. (8)
21 Pilot mostly followed by the Spanish fictional sibling. (6)
23 Dishonest apprentice girl embodies passive female principle. (5)
24 Return of Jedi’s second space endeavour. (5)
26 Wagers Count Snoke’s head. (4)