Rethinking AI through the politics of 1968

This talk was given at the conference 'Rethinking the legacy of 1968: Left fields and the quest for common ground' held at The Centre for Cultural Studies Research, University of East London on September 22nd 2018 http://rethinking1968.today/

There's a definite resonance between the agitprop of '68 and social media. Participants in the UCU strike earlier this year, for example, experienced Twitter as a platform for both affective solidarity and practical self-organisation1. However, there is a different genealogy that speaks directly to our current condition: that of systems theory and cybernetics. What happens when the struggle in the streets takes place in the smart city of sensors and data? Perhaps the revolution will not be televised, but it will certainly be subject to algorithmic analysis. Let's not forget that 1968 also saw the release of '2001: A Space Odyssey' featuring the AI supercomputer HAL.

While opposition to the Vietnam war was a rallying point for the movements of '68, the war itself was also notable for the application of systems analysis by US Secretary of Defense Robert McNamara, who attempted to make it, in modern parlance, a data-driven war. The hamlet pacification programme alone produced 90,000 pages of data and reports a month2, and the body count metric was published in the daily newspapers. The milieu that helped breed our current algorithmic dilemmas was the contemporaneous swirl of systems theory and cybernetics, ideas about emergent behaviour and experiments with computational reasoning, and the intermingling of military funding with the hippy visions of the Whole Earth Catalogue.

The double helix of DARPA and Silicon Valley can be traced through the evolution of the web to the present day, where AI and machine learning are making inroads everywhere carrying their own narratives of revolutionary disruption; a Ho Chi Minh trail of predictive analytics. They are playing Go better than grandmasters and preparing to drive everyone's car, while the media panics about AI taking our jobs. But this AI is nothing like HAL; it's a form of pattern finding based on mathematical minimisation, like a complex version of fitting a straight line to a set of points. These algorithms find the optimal solution when the input data is both plentiful and messy. Algorithms like backpropagation3 can find patterns in data that were intractable to analytical description, such as recognising human faces seen at different angles, in shadows and with occlusions. The algorithms of AI crunch the correlations and the results often work uncannily well.
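By way of illustration, here is a minimal sketch in Python (using numpy, with invented data) of that line-fitting intuition: 'learning' amounts to repeatedly nudging two parameters downhill on an error term.

```python
# A minimal sketch of machine learning as minimisation, assuming numpy.
# The 'model' is just a straight line; learning means nudging its slope and
# intercept to reduce the average squared error.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 100)
y = 2.0 * x + 0.5 + rng.normal(0, 0.1, 100)   # messy but plentiful data

w, b = 0.0, 0.0                                # parameters to be learned
for _ in range(2000):
    error = (w * x + b) - y                    # how wrong the line currently is
    w -= 0.1 * (error * x).mean()              # gradient step on the slope
    b -= 0.1 * error.mean()                    # gradient step on the intercept

print(round(w, 2), round(b, 2))                # approaches 2.0 and 0.5
```

Backpropagation3 is what lets the same error-minimising step be applied to networks with millions of parameters rather than two.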

But it's still computers doing what computers have been good at since the days of vacuum tubes: performing mathematical calculations more quickly than us. Thanks to algorithms like neural networks this calculative power can learn to emulate us in ways we would never have guessed at. This learning can be applied to any context that can be boiled down to a set of numbers, such that the features of each example are reduced to a row of digits between zero and one and are labelled by a target outcome. The datasets end up looking pretty much the same whether it's cancer scans or Netflix viewing figures. There's nothing going on inside except maths; no self-awareness and no assimilation of embodied experience. These machines can develop their own unprogrammed behaviours but utterly lack an understanding of whether what they've learned makes sense. And yet, machine learning and AI are becoming the mechanisms of modern reasoning, bringing with them the kind of dualism that the philosophy of '68 was set against: a belief in a hidden layer of reality which is ontologically superior and expressed mathematically4.
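To make the point about the datasets concrete, here is a toy illustration (the numbers and feature names are invented) of how very different domains are reduced to the same object: a matrix of features scaled between zero and one, plus a column of target labels.

```python
import numpy as np

# hypothetical scan features, e.g. [density, lesion size, texture] -> malignant or not
scans = np.array([[0.12, 0.87, 0.44],
                  [0.93, 0.05, 0.61]])
scan_labels = np.array([1, 0])

# hypothetical viewing features, e.g. [hours watched, completion rate, recency] -> renewed or not
viewing = np.array([[0.70, 0.33, 0.91],
                    [0.08, 0.52, 0.14]])
viewing_labels = np.array([0, 1])

# to the learning algorithm both are just arrays of numbers with the same shape
assert scans.shape == viewing.shape and scan_labels.shape == viewing_labels.shape
```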

The delphic accuracy of AI comes with built-in opacity, because massively parallel calculations can't always be reversed into human reasoning, while at the same time it will happily regurgitate society's prejudices when trained on raw social data. It's also mathematically impossible to design an algorithm that is fair to all groups at the same time5. For example, if reoffending base rates vary by ethnicity, a recidivism algorithm like COMPAS will produce different false positive rates for different groups, and more black people will be unfairly refused bail6. The wider impact comes from the way the algorithms proliferate social categorisations such as 'troubled family' or 'student likely to underachieve', fractalising social binaries wherever they divide into 'is' and 'is not'. This isn't only a matter of data dividuals misrepresenting our authentic selves but of technologies of the self that, through repetition, produce subjects and act on them. And, as AI analysis starts to overcode MRI scans to force psychosocial symptoms back into the brain, we will even see algorithms play a part in the becoming of our bodies7.
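The impossibility result referenced above5 can be sketched in a few lines; the numbers below are illustrative rather than the actual COMPAS figures. If a risk score performs equally well for two groups (same precision, same miss rate) but the groups' base rates differ, their false positive rates necessarily differ too.

```python
# A back-of-the-envelope sketch of the fairness trade-off, with invented numbers.
def false_positive_rate(base_rate, ppv, fnr):
    # follows algebraically from the definitions of precision (PPV) and miss rate (FNR)
    return base_rate * (1 - fnr) * (1 - ppv) / ((1 - base_rate) * ppv)

ppv, fnr = 0.6, 0.35   # identical predictive performance for both groups

print(false_positive_rate(0.50, ppv, fnr))  # higher base rate group: ~0.43
print(false_positive_rate(0.30, ppv, fnr))  # lower base rate group:  ~0.19
```

With these toy figures the group with the higher base rate suffers more than twice the false positives, i.e. more people wrongly flagged as high risk, even though the score treats both groups 'equally' on its own terms.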

What we call AI, that is, machine learning acting in the world, is actually a political technology in the broadest sense. Yet under the cover of algorithmic claims to objectivity, neutrality and universality there's an infrastructural switch of allegiance to algorithmic governance. The dialectic that drives AI into the heart of the system is the contradiction of societies that are data rich but subject to austerity. One need only look at the recent announcements about a brave new NHS to see the fervour welcoming this salvation8. While the global financial crisis is manufactured, the restructuring is real; algorithms are being enrolled in the refiguring of work and social relations such that precarious employment depends on satisfying algorithmic demands9 and the public sphere exists inside a targeted attention economy.

Algorithms and machine learning are coming to act in the way pithily described by Pierre Bourdieu: as structured structures predisposed to function as structuring structures10, such that they become absorbed by us as habits, attitudes, and pre-reflexive behaviours. In fact, like global warming, AI has become a hyperobject11 so massive that its totality is not realised in any local manifestation, a higher dimensional entity that adheres to anything it touches, whatever the resistance, and which is perceived by us through its informational imprints. A key imprint of machine learning is its predictive power. Having learned both the gross and subtle elements of a pattern, it can be applied to new data to predict which outcome is most likely, whether that is a purchasing decision or a terrorist attack. This leads ineluctably to the logic of preemption in any social field where data exists, which is every social field, so algorithms are predicting which prisoners should be given parole and which parents are likely to abuse their children12,13.

We should bear in mind that the logic of these analytics is correlation. It's purely pattern matching, not the revelation of a causal mechanism, so enforcing the foreclosure of alternative futures becomes effect without cause. The computational boundaries that classify the input data map outwards as cybernetic exclusions, implementing continuous forms of what Agamben calls states of exception. The internal imperative of all machine learning, which is to optimise the fit of the generated function, is entrained within a process of social and economic optimisation, fusing marketing and military strategies through the unitary activity of targeting.

A society whose synapses have been replaced by neural networks will generally tend to a heightened version of the status quo. Machine learning by itself cannot learn a new system of social patterns, only pump up the existing ones as computationally eternal. Moreover, the weight of those amplified effects will fall on the most data-visible, i.e. the poor and marginalised. The net effect is, as the book title says, the automation of inequality14. But at the very moment when the tech has emerged to fully automate neoliberalism, the wider system has lost its best-of-all-possible-worlds authority, and racist authoritarianism metastasises across the veneer of democracy. Opaque algorithmic classifications already tend to evade due process, never mind when the levers of mass correlation are at the disposal of ideologies based on paranoid conspiracy theories. A common core to all forms of fascism is a rebirth of the nation from its present decadence, and a mobilisation to deal with those parts of the population seen as the contamination15. The automated identification of anomalies is exactly what machine learning is good at, at the same time as it promotes the kind of thoughtlessness that Arendt identified in Eichmann.

So much for the intensification of authoritarian tendencies by AI. What of resistance? Dissident Google staff forced the company to partly drop Project Maven16, which develops drone targeting, and Amazon workers are campaigning against the sale of facial recognition systems to the government. But these workers are the privileged guilds of modern tech; this isn't a return of working class power. In the UK and USA there's a general institutional push for ethical AI, in fact you can't move for initiatives aiming to add ethics to algorithms17, but I suspect this is mainly preemptive PR to head off people's growing unease about their coming AI overlords. All the initiatives that want to make AI ethical seem to think it's about adding something, i.e. ethics, instead of revealing the value-ladenness at every level of computation, right down to the mathematics.

Models of radical democratic practice offer a more political response through structures such as people's councils composed of those directly affected, mobilising what Donna Haraway calls situated knowledges through horizontalism and direct democracy18. While these are valid modes of resistance, there's also the '68 notion from groups like the Situationists that the Spectacle generates the potential for its own supersession19. I'd suggest that the self-subverting quality in AI is its latent surrealism. For example, experiments to figure out how image recognition actually works probed the contents of intermediary layers in the neural networks, and by recursively applying filters to these outputs produced hallucinatory images that are straight out of an acid trip, such as snail-dogs and trees made entirely of eyes20. When people deliberately feed AI the wrong kind of data it makes surreal classifications. It's a lot of fun, and can even make art that gets shown in galleries21, but, like the Situationist dérive through the Harz region of Germany while blindly following a map of London, it can also be a poetic disorientation that coaxes us out of our habitual categories.
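The original 'Inceptionism' experiments20 used Google's own networks and tooling, but the basic move can be sketched roughly; the following assumes PyTorch, a recent torchvision and a pretrained VGG16, with the layer choice and step sizes picked arbitrarily. An input image is nudged by gradient ascent to make one intermediate layer fire more strongly, and iterating this is what produces the hallucinatory textures.

```python
# A rough sketch of deep-dream-style activation maximisation (an assumed setup,
# not the original Inceptionism code).
import torch
import torchvision.models as models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
layer = model.features[20]                     # an arbitrary mid-level convolutional layer

captured = {}
layer.register_forward_hook(lambda mod, inp, out: captured.update(target=out))

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from noise (or a photo)
optimiser = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimiser.zero_grad()
    model(image)
    loss = -captured["target"].norm()          # maximise the layer's activations
    loss.backward()
    optimiser.step()
    with torch.no_grad():
        image.clamp_(0, 1)                     # keep pixel values in a displayable range
```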

While businesses and bureaucracies apply AI to the most serious contexts to make or save money or, through some miracle of machinic objectivity, solve society's toughest problems, its liberatory potential is actually ludic. It should be used playfully instead of abused as a form of prophecy. But playfully serious, like the tactics of the Situationists themselves, a disordering of the senses to reveal the possibilities hidden by the dead weight of commodification. Reactivating the demands of the social movements of '68 that work becomes play, the useful becomes the good, and life itself becomes art.

At this point in time, where our futures are being cut off by algorithmic preemption, we need to pursue a political philosophy that was embraced in '68: living the new society through authentic action in the here and now. A counterculture of AI must be based on immediacy. The struggle in the streets must go hand in hand with a détournement of machine learning; one that seeks authentic decentralisation not Uber-ised serfdom, and federated horizontalism not the invisible nudges of algorithmic governance. We want a fun yet anti-fascist AI, so we can say "beneath the backpropagation, the beach!"22.


  1. Kobie, Nicole. ‘#NoCapitulation: How One Hashtag Saved the UK University Strike’. Wired UK 18 Mar. 2018. https://www.wired.co.uk/article/no-capitulation-uk-university-pension-protest-twitter

  2. Thayer, Thomas C. A Systems Analysis View of the Vietnam War: 1965-1972. Volume 2. Forces and Manpower. 1975. www.dtic.mil. http://www.dtic.mil/docs/citations/ADA051609

  3. 3Blue1Brown. What Is Backpropagation Really Doing? | Deep Learning, Chapter 3. N.p. Film. https://www.youtube.com/watch?v=Ilg3gGewQ5U 

  4. McQuillan, Dan. ‘Data Science as Machinic Neoplatonism’. Philosophy & Technology (2017): 1–20. https://link.springer.com/article/10.1007/s13347-017-0273-3 

  5. Narayanan, Arvind. Tutorial: 21 Fairness Definitions and Their Politics. N.p. https://www.youtube.com/watch?v=jIXIuYdnyyk 

  6. Corbett-Davies, Sam et al. ‘A Computer Program Used for Bail and Sentencing Decisions Was Labeled Biased against Blacks. It’s Actually Not That Clear.’ Washington Post 17 Oct. 2016. https://www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas/

  7. Resnick, Brian. ‘Treating Depression Is Guesswork. Psychiatrists Are Beginning to Crack the Code.’ Vox. N.p., 4 Apr. 2017. https://www.vox.com/science-and-health/2017/4/4/15073652/precision-psychiatry-depression

  8. Department of Health and Social Care. ‘Matt Hancock: New Technology Is Key to Making NHS the World’s Best’. GOV.UK. N.p., 6 Sept. 2018. https://www.gov.uk/government/news/matt-hancock-new-technology-is-key-to-making-nhs-the-worlds-best

  9. O’Connor, Sarah. ‘When Your Boss Is an Algorithm’. Financial Times. N.p., 8 Sept. 2016. https://www.ft.com/content/88fdc58e-754f-11e6-b60a-de4532d5ea35

  10. Bourdieu, Pierre. The Logic of Practice. p53. Stanford University Press, 1990.  

  11. Morton, Timothy. Hyperobjects - Philosophy and Ecology after the End of the World. University Of Minnesota Press, 2013. https://www.upress.umn.edu/book-division/books/hyperobjects

  12. Keddell, Emily. ‘Predictive Risk Modelling: On Rights, Data and Politics.’ Re-Imagining Social Work in Aotearoa New Zealand 4 June 2015. http://www.reimaginingsocialwork.nz/2015/06/predictive-risk-modelling-on-rights-data-and-politics/

  13. McIntyre, Niamh, and David Pegg. ‘Councils Use 377,000 People’s Data in Efforts to Predict Child Abuse’. The Guardian 16 Sept. 2018. www.theguardian.com. https://www.theguardian.com/society/2018/sep/16/councils-use-377000-peoples-data-in-efforts-to-predict-child-abuse

  14. Eubanks, Virginia. ‘A Child Abuse Prediction Model Fails Poor Families’. Wired 15 Jan. 2018. https://www.wired.com/story/excerpt-from-automating-inequality/

  15. Griffin, Roger. ‘The Palingenetic Core of Fascist Ideology’. Library of Social Science. N.p., n.d. https://www.libraryofsocialscience.com/ideologies/resources/griffin-the-palingenetic-core/

  16. Shane, Scott, Cade Metz, and Daisuke Wakabayashi. ‘How a Pentagon Contract Became an Identity Crisis for Google’. The New York Times 30 May 2018. https://www.nytimes.com/2018/05/30/technology/google-project-maven-pentagon.html

  17. Department for Digital, Culture, Media & Sport. ‘Consultation on the Centre for Data Ethics and Innovation’. GOV.UK. N.p., 13 June 2018. https://www.gov.uk/government/consultations/consultation-on-the-centre-for-data-ethics-and-innovation

  18. McQuillan, Dan. ‘People’s Councils for Ethical Machine Learning’. Social Media + Society 4.2 (2018): 2056305118768303. SAGE Journals. https://doi.org/10.1177/2056305118768303 

  19. Plant, Sadie. The Most Radical Gesture: The Situationist International in a Postmodern Age. Routledge, 1992. 

  20. Mordvintsev, Alexander, Christopher Olah, and Mike Tyka. ‘Inceptionism: Going Deeper into Neural Networks’. Research Blog 17 June 2015. http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html

  21. Akten, Memo. ‘Learning to See’. Memo Akten. 2018. http://www.memo.tv/portfolio/learning-to-see/

  22. Marriott, Red. ‘Slogans of 68’. libcom.org. N.p., 30 Apr. 2008. http://libcom.org/history/slogans-68