A Glimpse of Algorithmic Fairness

Workshop presentation at Ethical, legal & social consequences of artificial intelligence, Network for Artificial Intelligence and Machine Learning at Lund University (AIML@LU), Lund University, 22 November 2018.


Several recent results in algorithms address questions of algorithmic fairness. How can fairness be axiomatised and measured? To what extent can bias in data capture or decision making be identified and remedied? How can different conceptualisations of fairness be aligned, and which of them can be satisfied simultaneously? What can be done, and what are the logical and computational limits?
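To give a flavour of how such fairness measures are made precise, here is a minimal, illustrative sketch (not taken from the talk; the data and group labels are hypothetical) of two notions studied in the references below: demographic parity and the disparate-impact ratio, the quantity behind the "80% rule" examined by Feldman et al.

```python
def positive_rate(outcomes, groups, group):
    """Fraction of individuals in `group` receiving a positive decision."""
    members = [y for y, g in zip(outcomes, groups) if g == group]
    return sum(members) / len(members)

# Hypothetical binary decisions (1 = favourable) for two groups, A and B.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rate_a = positive_rate(outcomes, groups, "A")  # 3/5 = 0.6
rate_b = positive_rate(outcomes, groups, "B")  # 2/5 = 0.4

# Demographic parity asks the two rates to be (nearly) equal;
# the disparate-impact ratio compares them as a quotient.
parity_gap = abs(rate_a - rate_b)
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(round(parity_gap, 2))    # gap of 0.2 between the groups
print(round(impact_ratio, 2))  # ratio of about 0.67, below the 0.8 threshold
```

Already in this toy setting the two criteria behave differently: the gap is additive and the ratio multiplicative, so which groups count as unfairly treated depends on the definition chosen, which is precisely the kind of tension the results below make rigorous.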

I give a very brief overview of some recent results in the field, aimed at an audience assumed to be innocent of algorithmic thinking. The presentation includes a brief description of the place of the field of algorithms among other disciplines, and of the mindset of algorithmic or computational thinking. The talk includes pretty shapes that move about in order to communicate some intuition about the results, but is otherwise unapologetic about the fact that the arguments are ultimately formal and precise, which is important for addressing fairness in a transparent and accountable fashion.


Toon Calders, Sicco Verwer: Three naive Bayes approaches for discrimination-free classification. Data Min. Knowl. Discov. 21(2): 277-292 (2010). [PDF at author web page]

Alexandra Chouldechova: Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. [arXiv:1703.00056]

Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, Richard S. Zemel: Fairness through awareness. Innovations in Theoretical Computer Science 2012: 214-226. [arXiv:1104.3913]

Michael Feldman, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, Suresh Venkatasubramanian: Certifying and Removing Disparate Impact. Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, NSW, Australia, August 10-13, 2015. [arXiv:1412.3756]

Sorelle A. Friedler, Carlos Scheidegger, Suresh Venkatasubramanian: On the (im)possibility of fairness. [arXiv:1609.07236]

Úrsula Hébert-Johnson, Michael P. Kim, Omer Reingold, Guy N. Rothblum: Multicalibration: Calibration for the (Computationally-Identifiable) Masses. Int. Conf. Machine Learning 2018: 1944-1953. [Proceedings PDF]

Jon M. Kleinberg, Sendhil Mullainathan, Manish Raghavan: Inherent Trade-Offs in the Fair Determination of Risk Scores. Innovations in Theoretical Computer Science 2017: 43:1-43:23. [arXiv:1609.05807]

(The image at the top, the title slide of my presentation, shows a masterpiece of the early Renaissance, Fra Angelico’s The Last Judgement (ca. 1430), illustrating a binary classifier with perfect data access and unlimited computational power.)
