The Manifesto on Algorithmic Humanitarianism was presented at the symposium on 'Reimagining Digital Humanitarianism', Goldsmiths, University of London, Feb 16th 2018.
intro
- humanitarian organisations will adopt ai because it seems able to answer questions at the heart of humanitarianism
- such as 'who should we save?' and 'how can we be effective at scale?'
- it resonates strongly with existing modes of humanitarian thinking and doing
- in particular the principles of neutrality and universality
- the way machine learning consumes big data and produces predictions
- suggests it can both grasp the enormity of the humanitarian challenge and provide a data-driven response
- but the nature of machine learning operations mean they will actually deepen some humanitarian problematics
- and introduce new ones of their own
- thinking about how to avoid this raises wider questions about emancipatory technics
- and what else needs to be in place to produce machine learning for the people
maths
- there is no intelligence in artificial intelligence
- nor does it really learn, even though its technical name is machine learning
- it is simply mathematical minimisation
- like at school, fitting a straight line to a set of points
- you pick the line that minimises the differences overall (see the sketch after this list)
- machine learning does the same for complex patterns
- it fits input features to known outcomes by minimising a cost function
- the fit is a model that can be applied to new data to predict the outcome
- the most influential class of machine learning algorithms is neural networks
- which is what startups call 'deep learning'
- they use backpropagation: an algorithm that minimises the cost by adjusting the weights in successive layers of neurons
- anything that can be reduced to numbers and tagged with an outcome can be used to train a model
- the equations don't know or care if the numbers represent amazon sales or earthquake victims
- this banality of machine learning is also its power
- it's a generalised numerical compression of questions that matter
- there is no comprehension within the computation
- the patterns are correlation not causation
- the only intelligence comes in the same sense as military intelligence; that is, targeting
- but models produced by machine learning can be hard to reverse into human reasoning
- why did it pick this person as a bad parole risk? what does that pattern of weights in the 3rd layer represent? we can't necessarily say.
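a minimal sketch of that 'minimisation' in code, for readers who want to see it concretely: fitting a straight line to invented points by gradient descent on a squared-error cost. this is python with numpy only, and the data and numbers are made up purely for illustration, not drawn from any humanitarian dataset.

```python
# a minimal sketch of "machine learning as minimisation": fit y = w*x + b
# to synthetic points by gradient descent on a squared-error cost.
# the data here is invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 3.0 * x + 1.0 + rng.normal(0, 1.0, 50)   # "known outcomes", with noise

w, b = 0.0, 0.0
lr = 0.01
for _ in range(2000):
    pred = w * x + b
    error = pred - y
    cost = np.mean(error ** 2)        # the cost function being minimised
    w -= lr * np.mean(2 * error * x)  # gradient step for the slope
    b -= lr * np.mean(2 * error)      # gradient step for the intercept

print(f"fitted line: y = {w:.2f}x + {b:.2f}, final cost = {cost:.3f}")
print("prediction for a new input, x = 7.5:", w * 7.5 + b)
```

nothing in the loop knows what x and y stand for; swap earthquake victims in for sales figures and the arithmetic is identical.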
reasoning
- machine learning doesn't just make decisions without giving reasons, it modifies our very idea of reason
- that is, it changes what is knowable and what is understood as real
- it operationalises the two-world metaphysics of neoplatonism
- that behind the world of the sensible is the world of the form or the idea.
- a belief in a hidden layer of reality which is ontologically superior,
- expressed mathematically and apprehended by going against direct experience.
- machine learning is not just a method but a machinic philosophy
- what might this mean for the future field of humanitarian ai?
- it makes machine learning prone to what miranda fricker calls epistemic injustice
- she meant the social prejudice that undermines a speaker's word
- but in this case it's the calculations of data science that can end up counting more than testimony
- the production of opaque predictions with calculative authority
- will deepen the self-referential nature of the humanitarian field
- while providing a gloss of grounded and testable interventions
- testing against unused data will produce hard numbers for accuracy and error (as in the sketch after this list)
- while making the reasoning behind them inaccessible to debate or questioning
- using neural networks will align with the output-driven focus of the logframe
- while deepening the disconnect between outputs and wider values
- hannah arendt said many years ago that cycles of social reproduction have the character of automatism.
- the general threat of ai, in humanitarianism and elsewhere, is not the substitution of humans by machines but the computational extension of existing social automatism
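a hedged sketch of the point about hard numbers versus inaccessible reasoning, assuming scikit-learn and purely synthetic data: accuracy on held-out data comes out as a single authoritative figure, while the model's 'reasoning' is only arrays of weights.

```python
# sketch: a small neural network on synthetic data. the features stand in
# for no real humanitarian dataset; everything here is invented.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 32, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# testing against unused data produces a hard number...
print("accuracy on held-out data:", accuracy_score(y_test, model.predict(X_test)))

# ...but the 'reasoning' is just matrices of weights; the 3rd layer answers
# no question about why any individual was classified as they were.
print("shape of 3rd-layer weights:", model.coefs_[2].shape)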
production
- of course the humanitarian field is not naive about the perils of datafication
- we all know machine learning could propagate discrimination because it learns from social data
- humanitarian institutions will be more careful than most to ensure all possible safeguards against biased training data
- but the deeper effect of machine learning is to produce new subjects and to act on them
- machine learning is performative, in the sense that reiterative statements produce the phenomena they regulate
- humanitarian ai will optimise the impact of limited resources applied to nearly limitless need
- by constructing populations that fit the needs of humanitarian organisations
- this is machine learning as biopower
- its predictive power will hold out the promise of saving lives
- producing a shift to preemption
- but this is effect without cause
- the foreclosure of futures on the basis of correlation rather than causation
- it constructs risk in the same way that twitter determines trending topics
- the result will be algorithmic states of exception
- according to agamben, the signature of a state of exception is ‘force-of’
- actions that have the force of law even when not of the law
- logistic regression and neural networks generate mathematical boundaries (a sketch follows this list)
- but cybernetic exclusions will have effective force by allocating and withholding resources
- a process that can't be humanised by having a humanitarian-in-the-loop
- because it is already a technics, a co-constituting of the human and the technical
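a sketch of how a mathematical boundary becomes an allocation: a logistic regression scores people and a threshold decides who receives a resource and who does not. the weights, features and threshold here are invented for illustration, not taken from any real system.

```python
# sketch: a decision boundary exercising 'force-of' by allocating and
# withholding. all weights and feature values are hypothetical.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# hypothetical learned weights over two proxy features, plus a bias
w = np.array([1.4, -0.8])
b = -0.2

# hypothetical feature rows for five people (proxies drawn from data exhaust)
people = np.array([
    [0.9, 0.1],
    [0.2, 0.8],
    [0.5, 0.5],
    [0.7, 0.9],
    [0.1, 0.2],
])

scores = sigmoid(people @ w + b)
allocate = scores >= 0.5   # the boundary: no reasons are given, only a cut

for i, (s, a) in enumerate(zip(scores, allocate)):
    print(f"person {i}: score={s:.2f} -> {'allocate' if a else 'withhold'}")
```

the boundary carries no explanation of itself, but it has effective force the moment it is wired to the distribution of resources.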
decolonial
- the capture, model and preempt cycle of machine learning will amplify the colonial aspects of humanitarianism
- unless we can develop a decolonial approach to its assertions of objectivity, neutrality and universality
- we can look to standpoint theory, a feminist and post-colonial approach to science
- which suggests that positions of social and political disadvantage can become sites of analytical advantage
- this is where our thinking about machine learning & ai should start from
- but i don't mean by soliciting feedback from humanitarian beneficiaries
- participation and feedback are already a form of socialising subjects
- and with algorithmic humanitarianism every client interaction will be subsumed into training data
- they used to say 'if the product is free, you are the product'
- but now, if the product is free, you are the training data
- training for humanitarian ai and for the wider cybernetic governance of resilient populations
- machine learning can break out of this spiral through situated knowledge
- as proposed by donna haraway as a counterweight to the scientific ‘view from nowhere’,
- a situated approach whose commitment to a particular context is not optional
- how does machine learning look from the standpoint of Haiti's post-earthquake rubble or from an IDP camp?
- no refugee in a freezing factory near the serbian border with croatia is going to be signing up for andrew ng's mooc on machine learning any time soon
- how can democratic technics be grounded in the humanitarian context?
people's councils
- it may seem obvious that if machine learning can optimise ocado deliveries then it can help with humanitarian aid
- but the politics of machine learning are processes operating at the level of the pre-social
- one way to counter this is through popular assemblies and people's councils
- bottom-up, confederated structures that implement direct democracy
- replacing the absence of a subject in the algorithms with face-to-face presence
- contesting the opacity of parallel computation with open argument
- and the environmentality of algorithms with direct action
- the role of people's councils is not debate for its own sake
- but the creation of alternative structures, in the spirit of gustav landauer's structural renewal
- an emancipatory technics is one that co-constitutes active agents and their infrastructures
- as Landauer said, people must 'grow into a framework, a sense of belonging, a body with countless organs and sections'
- as evidenced in calais, where people collectively organised warehouse space, van deliveries and cauldrons to cook for hundreds, while regularly tasting tear gas
- i suggest that solidarity is an ontological category prior to subject formation
- collective activity is the line of flight from a technological capture that extends market relations to our intentions
- it is a politics of becoming - a means without end to counter ai's effect without cause
close
- in conclusion
- as things stand, machine learning and so-called ai will not be any kind of salvation for humanitarianism
- but will deepen the neocolonial and neoliberal dynamics of humanitarian institutions
- but no apparatus is a closed teleological system; the impact of machine learning is contingent and can be changed
- it's not a question of people versus machines but of a humanitarian technics of mutual aid
- in my opinion this requires a rupture with current instantiations of machine learning
- a break with the established order of things of the kind that badiou refers to as an event
- the unpredictable point of excess that makes a new truth discernible
- and constitutes the subjects that can pursue that new truth procedure
- the prerequisites will be to have a standpoint, to be situated, and to be committed
- it will be as different to the operations of google as the balkan aid convoys of the 1990s were to the work of the icrc
- on the other hand, if an alternative technics is not mobilised,
- the next generation of humanitarian scandals will be driven by ai