Hannah Arendt and Algorithmic Thoughtlessness

Presented at the London Conference on Critical Thought in June 2015

I want to warn of the possibility that algorithmic prediction will lead to the production of thoughtlessness, as characterised by Hannah Arendt.

This will come from key characteristics of the algorithmic prediction produced by data science, such as the nature of machine learning and the role of correlation as opposed to causation. An important feature for this paper is that applying machine learning algorithms to big data can produce results that are opaque and not reversible to human reason. Nevertheless, their predictions are being applied in ever-wider spheres of society, leading inexorably to a rise in preemptive actions.

The many-dimensional character of the 'fit' that machine learning makes between the present and the future, using categories that are not static or coded by humans, has the potential for forms of algorithmic discrimination or redlining that can escape regulation. I give various examples of predictive algorithms at work in society, from employment through social services to predictive policing, and link this to an emerging governmentality that I have described elsewhere as 'algorithmic states of exception' [1].

These changes have led to a rapid rise in discourse on the implications of predictive algorithms for ethics and accountability [2]. In this paper I consider in particular the concept of 'intent' that is central to most modern legal systems. Intent to do wrong is necessary for the commission of a crime, and where it is absent, for whatever reason, we feel that no crime has been committed. I draw on the work of Hannah Arendt, and in particular her response to witnessing the trial of Adolf Eichmann in Jerusalem in 1961 [3], to illuminate the impact of algorithms on intent.

Arendt's efforts to comprehend her encounter with Eichmann led to her formulation of 'thoughtlessness' to characterise the ability of functionaries in the bureaucratic machine to participate in a genocidal process. I am concerned with assemblages of algorithmic prediction operating in everyday life and not with a regime intent on mass murder. However, I suggest that thoughtlessness, which is not a simple lack of awareness, is also a useful way to assess the operation of algorithmic governance with respect to the people enrolled in its activities.

One effect of this is to remove accountability for the actions of these algorithmic systems. Drawing on analysis of Arendt's work [4], I argue that the ability to judge is a necessary condition of justice; that legal judgement is founded on the fact that the sentence pronounced is one the accused would pass upon herself if she were prepared to view the matter from the perspective of the community of which she is a member. As we are unable to understand the judgement of the algorithms, which are opaque to us, the potential for accountability is excised. I also draw on recent scholarship to suggest that, due to the nature of algorithmic categorisation, critique of this situation is itself a challenge [5]. Taken together, these echo Arendt's conclusion that what she had witnessed had "brought to light the ruin of our categories of thought and standards of judgement".

However, Arendt's thought also offers a way to clamber out of this predicament through the action of unlearning. Her encounter with Eichmann was a shock; she expected to encounter a monster and instead encountered thoughtlessness. Faced with this she felt the need to start again, to think differently. A recent book by Marie Luise Knott describes this as unlearning, "breaking open and salvaging a traditional figure of thought and concluding that it has quite new and different things to say to us today" [6].

We need to unlearn machine learning. A practical way to do this is through the application of participatory action research to the 'feature engineering' at the core of data science. I give analogous examples to support this approach and the overall claim that it is possible to radically transform the work that advanced computing does in the world.
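To make concrete what 'feature engineering' names, here is a minimal sketch (all field names and data are hypothetical, not drawn from any real system) of the step where humans decide which properties of a raw case record become the inputs to a predictive model. Each derived feature encodes a contestable judgement, and it is precisely these judgements that a participatory process would open to collective scrutiny.

```python
def engineer_features(record):
    """Turn a raw case record (a dict) into model input features.

    Every derived feature below rests on an assumption about what is a
    legitimate proxy for future behaviour; none of these assumptions is
    given by the data itself.
    """
    return {
        # assumption: frequency of past contact with a service
        # predicts future need or 'risk'
        "contact_rate": len(record["contacts"]) / max(record["years_known"], 1),
        # assumption: a high number of distinct addresses signals 'instability'
        "address_churn": len(set(record["addresses"])),
        # assumption: geographic area is an acceptable input at all --
        # a classic vector for redlining
        "postcode_prefix": record["postcode"].split(" ")[0],
    }

# Hypothetical record, for illustration only
raw = {
    "contacts": ["2013-04", "2014-11", "2015-01"],
    "years_known": 3,
    "addresses": ["SE14 6NW", "SE8 4AA", "SE14 6NW"],
    "postcode": "SE14 6NW",
}
features = engineer_features(raw)
```

The point of the sketch is that the choices made here, which records to count, which proxies to accept, are ordinarily invisible once the model is trained; participatory action research would relocate them into a shared, accountable process.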

[1] McQuillan, Daniel. 2015. "Algorithmic States of Exception." European Journal of Cultural Studies 18(4/5). ISSN 1367-5494.

[2] Algorithms and Accountability Conference, Information Law Institute, New York University School of Law, February 28th, 2015.

[3] Arendt, Hannah. 2006. Eichmann in Jerusalem: A Report on the Banality of Evil. New York: Penguin Classics.

[4] Menke, C. and Walker, N. 2014. "At the Brink of Law: Hannah Arendt's Revision of the Judgement on Eichmann." Social Research: An International Quarterly 81(3): 585-611. Johns Hopkins University Press.

[5] Rouvroy, Antoinette. 2012. "The End(s) of Critique: Data-Behaviourism versus Due-Process." In Privacy, Due Process and the Computational Turn, edited by Mireille Hildebrandt and Ekatarina De Vries. Routledge.

[6] Knott, Marie Luise. 2014. Unlearning with Hannah Arendt. New York: Other Press.
