dan mcquillan

lecturer in creative & social computing

Nov 07, 2015

Ghosts in the Algorithmic Resilience Machine

My speaker notes from the panel on 'Resilience and the Future of Democracy in the Smart City' at the 25th anniversary conference of the Centre for the Study of Democracy, University of Westminster, 7th Nov 2015.


I want to start by looking at what resilience and the smart city have in common. The idea of resilience comes from Holling's original 1973 paper on ecological systems. He was looking at the balance of predator and prey, and replaced the simple idea of dynamic equilibrium with abstract concepts drawn from systems theory & cybernetics. Complex systems have multiple equilibria, and movement between these is not a collapse of the system but rather an adaptive cycle. So the population of antelope dropping by 80% is not necessarily a catastrophe, but an adaptive shift. The system persists, although in a changed form.


What does this have to do with the smart city? In its current incarnation, the smart city appears as pervasive computation in the urban fabric, driven by the twin goals of efficiency and environmental sustainability. It posits continuous adaptation through a cycle of sensing-computation-actuation. Heterogeneous data streams from sensors are processed into a dashboard of metrics that triggers automated changes; so, for example, speed limits and traffic lights are manipulated to modify car emissions in near real-time. The new model of the smart city explicitly includes the participation of citizens as sensing nodes. Continuous adaptations are made to optimise flows with respect to the higher parameters of smoothness and greenness. The smart city is a multi-dimensional complex system constantly moving between temporary states of equilibrium. It is a manifestation of high-frequency resilience.
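(To make the loop concrete, here is a minimal sketch of the sensing-computation-actuation cycle in Python. Every name and threshold in it is hypothetical; real smart city deployments run on proprietary platforms, but the control loop has roughly this shape.)

```python
# A toy sensing-computation-actuation loop. All functions and thresholds
# are invented for illustration, not drawn from any real deployment.
import random

def read_emission_sensors():
    """Stand-in for a heterogeneous sensor stream (random CO2-like values)."""
    return [random.uniform(350, 500) for _ in range(10)]

def set_speed_limit(limit_kmh):
    """Stand-in for an actuation call to roadside infrastructure."""
    print(f"speed limit set to {limit_kmh} km/h")

for _ in range(3):  # three cycles for demonstration; the real loop never settles
    readings = read_emission_sensors()               # sensing
    mean_level = sum(readings) / len(readings)       # computation: a dashboard metric
    set_speed_limit(50 if mean_level > 430 else 70)  # actuation in near real-time
```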



But resilience means more than systems ecology. It has outgrown its origins to become a governing idea in a time of permanent crisis. As a form of governmentality it constitutes us as resilient populations and demands adaptation to emergencies of whatever kind, whether it's finance, environment or security. In practice, resilience is mainly engineered through the accelerated conversion of everything to Hayek's self-organising complexity of markets, with military intervention at the peripheries where this is resisted.


If resilience is the mode of crisis governance and the smart city is a form of high-frequency resilience, what does the smart city mean for democracy? To understand the implications for the future of democracy, I want to look at the emerging mode of production through which both wider resilience and specifically the smart city are being produced; that is, through the algorithmic production of preemption.


We're all becoming familiar with the idea that contemporary life generates streams of big data that are drawn through the analytic sieve of datamining and machine learning. Meaning is assigned through finding clusters, correlations and anomalies that can be used to make predictions. While its original commercial application was to predict the next set of supermarket purchases, the potential for prediction has become addictive for sectors whose main focus is risk. Algorithmic preemption now drives both high-frequency trading and drone strikes, and it has also spread to the more mundane areas of everyday life.
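(As a rough illustration of that analytic sieve, the sketch below, assuming scikit-learn and entirely synthetic data, finds clusters and flags anomalies without any causal model of what the data means.)

```python
# Clustering and anomaly detection as 'meaning-making': a minimal sketch.
# The data is synthetic; no real behavioural dataset is involved.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (100, 2)),     # one latent group
                  rng.normal(5, 1, (100, 2)),     # another latent group
                  rng.uniform(-10, 15, (5, 2))])  # a few outliers

km = KMeans(n_clusters=2, n_init=10).fit(data)                 # find clusters
anomalies = IsolationForest(random_state=0).fit_predict(data)  # -1 marks anomalies

# The model now 'predicts' a group for any new observation,
# with no account of why the groups exist.
print(km.predict([[4.8, 5.2]]), (anomalies == -1).sum(), "anomalies flagged")
```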


In the same way that airline websites use your online data profile to tweak the ticket prices that you see, algorithmic prediction leads to preemptive interventions in social processes. One example is human resources departments, where it's used to predict which employees will be the next to leave. Or in company health insurance, where staff wear Fitbits and pay insurance premiums based on predicted future health. In New Zealand, the government commissioned algorithms to predict which families are likely to abuse their children, based on data available in the first week after birth. And in some US states police stop and search is targeted by prediction software like PredPol.


This preemption forecloses possible futures in favour of the preferred outcome. The smart city will be a concentrated vessel for algorithmic preemption and, because of this, it will be a machine for disassembling due process.


This year in the UK there's been a big fuss about the 800th anniversary of the signing of the Magna Carta ('the Great Charter'). The principle of due process in law is expressed in Clause 39 of the Magna Carta: "No free man shall be seized or imprisoned, or stripped of his rights or possessions, or outlawed or exiled, or deprived of his standing in any way, nor will we proceed with force against him, or send others to do so, except by the lawful judgment of his equals or by the law of the land."


But so much of this is potentially shredded by the smart city; the constant contact with algorithmic systems that can influence the friction or direction of our experience opens the space for prejudicial and discriminatory actions that escape oversight.



The characteristics of algorithmic preemption that disassemble due process include the high frequency and often invisible nature of the resilience adaptations. But also, unlike science, algorithmic preemption makes no claim to causal explanation. It simply predicts through patterns, and the derivation of those patterns, through abstraction and parallel calculation at scale, is opaque to human reasoning. The preemptions of big data are therefore neither understandable as intent nor accountable to 'the judgement of peers'.


Algorithmic productive force avoids causality, evades accountability, and restricts agency to participation and adaptation. To be honest, things are not looking good...


But general computation doesn't predetermine the kinds of patterns that are produced. The network protocols are open, and the ability to take advantage of code is not limited to the powerful. The question is, if there are other possibilities, how can we envision them? If enthusiastic communities participating in bottom-up citizen sensing using accessible tech can be assimilated into the resilience of the smart city, as they can, where do we look for forms of social recomposition that combine community and computation for a real alternative?


I think this is where the ghost of Gustav Landauer arises to guide us. His most famous dictum was first published in “Schwache Staatsmänner, Schwächeres Volk!” in 1910: “The State is a condition, a certain relationship between human beings, a mode of behaviour; we destroy it by contracting other relationships, by behaving differently toward one another… We are the State and we shall continue to be the State until we have created the institutions that form a real community.” You can't smash the state as an external thing; it is this networked relational form.


But the smart city is also a networked relational form. The relations span people, devices and infrastructures, with patterns of relationships modulated by algorithms. Can we use algorithms to contract other forms of relationship? Here, another distinctive aspect of Landauer’s politics becomes applicable. He said that rather than toppling the state, you have to overcome capital by leaving the current order. This is precisely the possibility raised by some current experiments in political prototyping through technology.


The one I want to look at is the blockchain, which is the technology behind Bitcoin. Bitcoin itself dispenses with the need for a central bank through a distributed ledger of transactions. These transactions can be trusted because of an algorithmic mechanism called 'proof of work', which is basically incorruptible because it's implemented through a cryptographic hashing function. The underlying mechanism is distributed, trustable records that don't require a centralised authority.
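(For the technically curious, here is a toy proof-of-work in Python: search for a nonce whose hash of the block data meets a difficulty target. Bitcoin's actual scheme uses double SHA-256, a numeric target and vastly higher difficulty, but the principle, easy to verify and hard to forge, is the same.)

```python
# Toy proof of work: find a nonce so that sha256(data + nonce) starts with
# a run of zeros. The difficulty here is tiny compared to Bitcoin's.
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # anyone can verify this with a single hash
        nonce += 1

nonce = proof_of_work("Alice pays Bob 5")
print(nonce, hashlib.sha256(f"Alice pays Bob 5{nonce}".encode()).hexdigest())
```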


Many people are now looking at the role that distributed, trustable records could play beyond cryptocurrencies, through forms of so-called smart contracts. This is where the blockchain could become a protocol for parallel structures.


Smart contracts enable, for example, decentralized autonomous organizations (DAOs). A DAO involves people collaborating with each other via processes recorded incorruptibly on the blockchain. While a lot of the speculation around smart contracts is libertarian, I agree with David Bollier's assessment that they also hold out the prospect for commons-based systems. A smart contract would straight away deal with issues such as the free rider problem at the heart of the so-called tragedy of the commons. As the well-known hacker Jaromil, who works on a fork of bitcoin called Freecoin, says: "Bitcoin is not really about the loss of power of a few governments, but about the possibility for many more people to experiment with the building of new constituencies." It seems there could be prefigurative politics in these protocols.


One project implementing Freecoin is the Helsinki Urban Co-operative Farm. This is a community-supported agriculture project, where people collectively hire a grower but where participants can also volunteer to work in the fields. The agreement is that each member does at least 10 hours of work per year, and there are lots of other admin & logistical tasks that have to be done. The growing number and complexity of transactions is becoming an issue for the collective, and the plan is for Freecoin to be a decentralized & transparent way to track & reward contributions, maintaining self-governance and avoiding the need to create a centralised institution.
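(To suggest how this might work, here is a purely hypothetical sketch, not Freecoin's actual design, of a hash-chained ledger that tracks contributions against the 10-hour agreement without a central bookkeeper.)

```python
# Hypothetical contribution ledger for a co-operative farm. Each entry is
# chained to the previous one by its hash, so tampering with any past entry
# breaks every later hash. This is a sketch, not Freecoin's implementation.
import hashlib
import json
from collections import defaultdict

ledger = []

def record_contribution(member: str, hours: float, task: str):
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"member": member, "hours": hours, "task": task, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)

def members_below_quota(quota: float = 10.0):
    totals = defaultdict(float)
    for entry in ledger:
        totals[entry["member"]] += entry["hours"]
    return {m: h for m, h in totals.items() if h < quota}

record_contribution("aino", 6, "weeding")  # names are invented
record_contribution("aino", 5, "harvest logistics")
record_contribution("mikko", 3, "weeding")
print(members_below_quota())  # -> {'mikko': 3.0}
```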


Although this is only one small example of the application of the blockchain to common-pool resources, it is an eerie echo of Landauer, whose practical politics focused on communes for the collective production of food and other necessities. Overall, I'm suggesting that through technologies like the blockchain, Landauer's approach of leaving rather than confronting, reconstituting sets of relationships, and concentrating on common production, could be the Other of the Smart City.


Let me finish by returning to the topic of this panel: resilience and the future of democracy in the smart city. I think the current direction of travel, based on algorithmic preemption, is towards the post-democratic forms of neoliberal resilience. But it may be that the consequent creation of highly computational infrastructures is also an opening for decentralised autonomous organisation, enabling us to 'occupy' computation and implement a kind of exodus (in the spirit of Gustav Landauer) to more federal-communitarian forms supported by protocols of commonality.


Thank you

Jul 15, 2015

Data Science and Phrenology

[ABSTRACT FOR A PAPER IN PREPARATION: comments welcome...]

In this paper I look at data science through the historical lens of phrenology. I take data science seriously in its claim to be a science, and examine its parallels with the methodological and social trajectories of phrenology as a scientific discourse. My aim is not to dismiss data science as pseudo-science but to explore the interplay of empirical and social factors in both phrenology and data science, as ways of making meaning about the world. By staying close to the practical techniques at the same time as reading them within their historical contexts, I attempt some grounded speculations about the political choices facing data science & machine learning.

In contrast to the philosophy and anatomy of the early nineteenth century, phrenology offered a plausible account of the connection between the mind and the brain by asserting that 'the brain is the organ of the mind'. Phrenologists believed that the brain is made up of a number of separate organs, each related to a distinct mental function, and that the size of each organ is a measure of the power of its associated faculty. There were understood to be thirty-seven faculties, including Amativeness, Philoprogenitiveness, Veneration and Wit. The operations of phrenology were based on assessing the correlation between the topology of the skull and the underlying faculties, whose influence corresponded to size and therefore the specific shape of the head. It was used as a predictive empirical tool, for example to assist in the choice of a servant.

The data science that is emerging in the second decade of the twenty-first century offers a plausible connection between the flood of big data and models that can say something meaningful about the world. The most widely used methods in data science can be grouped under the broad label of machine learning. In machine learning, algorithmic clustering and correlation are used to find patterns in the data that are 'interesting' in that they are both novel and potentially useful [1]. This discovery of a functional fit to existing data, involving an arbitrary number of variables, enables the predictive work that data science is doing in the world. While data mining was originally used to predict patterns of supermarket purchases, the potential to pre-empt risk factors is leading to the wide application of data science across areas such as health, social policy and anti-terrorism.
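(To make the notion of a functional fit concrete, the following minimal sketch, assuming scikit-learn and synthetic data, shows a model rediscovering a hidden pattern across an arbitrary number of variables and predicting from it without offering any explanation.)

```python
# A 'functional fit' to existing data, then prediction without explanation.
# The data and the hidden rule are synthetic, chosen only for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.random((500, 20))          # 20 arbitrary variables per record
y = X[:, 3] + 0.5 * X[:, 7] > 0.9  # a hidden rule the model will rediscover

model = RandomForestClassifier(random_state=0).fit(X, y)
print(model.predict(rng.random((1, 20))))  # a prediction, but not a reason
```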

The newly developed technique of phrenology was most actively studied in Britain in the years 1810-1840. One of the factors that made it popular was the accessibility of the method to non-experts. For leading exponents such as George Combe it was a key principle that people were able to learn the methods and test them in practice: 'observe nature for yourselves, and prove by your own repeated observations the truth or falsehood of phrenology'. Some historians, such as Steven Shapin, have interpreted British phrenology as a social challenge to the elitist control of knowledge generation, with a corresponding commitment to broadening the base of participation [2]. Shapin saw this as evidence that social factors as well as intrinsic intellectual factors help explain the work done by early phrenology, which 'enabled the participation in scientific culture of previously excluded social groups'.

A stronghold of historical phrenology in Britain was Edinburgh, where it was strongly associated with a social reformist agenda. Phrenologists there believed that the assessment of character from the shape of the skull was not the final word but a starting point for self and social improvement, because environmental influences could be 'brought to bear to stir one faculty into greater activity or offset the undesirable hyper-development of another. Not just the size but the tone of the organ was responsible for the degree to which its possessor manifested that behaviour' [3]. Advocates of phrenology such as Mackenzie asserted that 'until mental philosophy improves, society will not improve' and many felt that their science should influence policies on broad social issues such as penal reform and the education of the working classes.

As it stands now, data science is a highly specialised activity restricted to a narrow group of participants. The fact that data science is seen as a strategic expertise, combined with the small number of trained practitioners, has led to the demand far outstripping the supply of data scientists and its identification by the Harvard Business Review as 'the sexiest job of the 21st Century'. Most data scientists outside of academia are employed either by large corporations and financial institutions or by entrepreneurial start-ups. In terms of its social and cultural positioning, data science as we know it is a hegemonic activity.

Using the predictions of data science to drive pre-emptive interventions is also seen as having a social role. However, the form of these social interventions is shaped by the actors who are in a position to deploy data science. The characterisation of data science as a tool of the powerful derives not only from the algorithmic determination of parole conditions or predictive policing, but from its embedding within a hegemonic world view. The forms of algorithmic regulation promoted by people like Tim O'Reilly have become algorithmic governance. Predictive filtering dovetails with the 'fast policy' of behavioural insight teams, as they craft policy changes to the choice architecture of everyday life.

In the 1840s phrenology ran into problems, with increasingly successful empirical challenges to its validity. In particular, critics questioned whether the external surface of the skull faithfully represented the shape of the brain underneath. If not, as came to be accepted, phrenology could no longer claim a correspondence between observations of the skull and the faculties of the individual. Supporters continued to defend phrenology on the basis of its utility rather than using measurement as a criterion: 'we have often said that Phrenology is either the most practically useful of sciences or it is not true'. But by the mid nineteenth century both specific objections and the general advance of the scientific method left phrenology discredited.

Unfortunately, phrenology underwent a revival in the late C19th and early C20th as part of a broad set of ideas known as scientific racism. This field of activity used scientific techniques such as craniometry (volumetric measurements of the skull) to support a belief in racial superiority; 'proposing anthropologic typologies supporting the classification of human populations into physically discrete human races, that might be asserted to be superior or inferior'. It was used in justifying racism and other narratives of racial difference in the service of European colonialism; for example, during the 1930s Belgian colonial authorities in Rwanda used phrenology to explain the so-called superiority of Tutsis over Hutus.

In 1950 the UNESCO statement on race formally denounced scientific racism, saying "For all practical social purposes 'race' is not so much a biological phenomenon as a social myth. The myth of 'race' has created an enormous amount of human and social damage." However, the concept of race has been re-mobilised inside genomics, one of the crucibles of data science. Rather than the Human Genome Project closing the door on the idea of race having a biological foundation, as many had hoped, some studies suggest that 'racial population difference became a key variable in studying the existence and meaning of difference and variation at the genetic level'.

The jury is still out on the long term validity of data science as an empirical method of understanding the world. Certainly there is a growing critique, largely based on privacy and ethics but also on the substitution of correlation for causation and the over-arching idea that metrics can be a proxy for meaning. I have written elsewhere about the potential already immanent in algorithmic governance to produce multiple states of exception [4]. However, my purpose here is a different one; to see the unfolding path of data science as propelled by both methodological and social factors and to use the completed trajectory of phrenology as a heuristic comparison.

Instead of being disheartened that, despite the bigness of data and the sophistication of machine learning algorithms, empirical activity is still imbricated with social values, we should recognise this as a continuing historical dynamic. This can be mobilised explicitly to offer a more hopeful future for data science and machine learning than one that derives only from financial or governmental hegemony. Like the phrenologists of nineteenth century Edinburgh, we can choose to see in the methodologies of machine learning the opportunity to increase participation and social fairness. This can be imagined, for example, through the application of participatory action research to the process of data science. As Mackenzie wrote about phrenology, "the most effectual method" (of error checking) was "to multiply, as far as possible, the number of those who can observe and judge". It is as yet a largely unexplored research question to ask how data science can be democratic, and how we can develop a machine learning for the people.

[1] Han, Jiawei, Micheline Kamber, and Jian Pei. Data Mining: Concepts and Techniques. Elsevier, 2011.

[2] Shapin, Steven. "Phrenological knowledge and the social structure of early nineteenth-century Edinburgh." Annals of Science 32.3 (1975): 219-243.

[3] Cantor, Geoffrey N. "The Edinburgh phrenology debate: 1803–1828." Annals of Science 32.3 (1975): 195-218.

[4] McQuillan, Dan. ‘Algorithmic States of Exception’. European Journal of Cultural Studies 18.4-5 (2015): 564–576. ecs.sagepub.com.

Jun 29, 2015

Hannah Arendt and Algorithmic Thoughtlessness

[ABSTRACT FOR A PAPER IN PREPARATION: comments welcome]

In this paper I warn of the possibility that algorithmic prediction will lead to the production of thoughtlessness, as characterised by Hannah Arendt.

I set out by describing key characteristics of the algorithmic prediction produced by data science such as the nature of machine learning and the role of correlation as opposed to causation. An important feature for this paper is that applying machine learning algorithms to big data can produce results that are opaque and not reversible to human reason. Nevertheless their predictions are being applied in ever-wider spheres of society leading inexorably to a rise in preemptive actions.

I suggest that the many-dimensional character of the 'fit' that machine learning makes between the present and the future, using categories that are not static or coded by humans, has the potential for forms of algorithmic discrimination or redlining that can escape regulation. I give various examples of predictive algorithms at work in society, from employment through social services to predictive policing, and link this to an emerging governmentality that I have described elsewhere as 'algorithmic states of exception' [1].

These changes have led to a rapid rise in discourse on the implications of predictive algorithms for ethics and accountability [2]. In this paper I consider in particular the concept of 'intent' that is central to most modern legal systems. Intent to do wrong is necessary for the commission of a crime and where this is absent, for whatever reason, we feel no crime has been committed. I draw on the work of Hannah Arendt and in particular her response to witnessing the trial of Adolf Eichmann in Jerusalem in 1961 [3] to illuminate the impact of algorithms on intent.

Arendt's efforts to comprehend her encounter with Eichmann led to her formulation of 'thoughtlessness' to characterise the ability of functionaries in the bureaucratic machine to participate in a genocidal process. I am concerned with assemblages of algorithmic prediction operating in everyday life and not with a regime intent on mass murder. However, I suggest that thoughtlessness, which is not a simple lack of awareness, is also a useful way to assess the operation of algorithmic governance with respect to the people enrolled in its activities.

I propose that one effect of this is to remove accountability for the actions of these algorithmic systems. Drawing on analysis of Arendt's work [4] I argue that the ability to judge is a necessary condition of justice; that legal judgement is founded on the fact that the sentence pronounced is one the accused would pass upon herself if she were prepared to view the matter from the perspective of the community of which she is a member. As we are unable to understand the judgement of the algorithms, which are opaque to us, the potential for accountability is excised. I also draw on recent scholarship to suggest that, due to the nature of algorithmic categorisation, critique of this situation is itself a challenge [5]. Taken together, these echo Arendt's conclusion that what she had witnessed had "brought to light the ruin of our categories of thought and standards of judgement".

However, Arendt's thought also offers a way to clamber out of this predicament through the action of unlearning. Her encounter with Eichmann was a shock; she expected to encounter a monster and instead encountered thoughtlessness. Faced with this she felt the need to start again, to think differently. A recent book by Marie Luise Knott describes this as unlearning, "breaking open and salvaging a traditional figure of thought and concluding that it has quite new and different things to say to us today" [6].

I conclude the paper by proposing that we need to unlearn machine learning. I suggest a practical way to do this through the application of participatory action research to the 'feature engineering' at the core of data science. I give analogous examples to support this approach and the overall claim that it is possible to radically transform the work that advanced computing does in the world.

[1] McQuillan, Daniel. 2015. Algorithmic States of Exception. European Journal of Cultural Studies, 18(4/5), ISSN 1367-5494

[2] Algorithms and Accountability Conference, Information Law Institute, New York University School of Law, February 28th, 2015.

[3] Arendt, Hannah. Eichmann in Jerusalem: A Report on the Banality of Evil. 1 edition. New York, N.Y: Penguin Classics, 2006.

[4] Menke, C. & Walker, N. (2014). At the Brink of Law: Hannah Arendt’s Revision of the Judgement on Eichmann. Social Research: An International Quarterly 81(3), 585-611. The Johns Hopkins University Press.

[5] Antoinette Rouvroy. "The end(s) of critique : data-behaviourism vs. due-process." in Privacy, Due Process and the Computational Turn. Ed. Mireille Hildebrandt, Ekatarina De Vries, Routledge, 2012.

[6] Knott, Marie Luise, 2014. Unlearning with Hannah Arendt, New York: Other Press.

Jun 23, 2015

Data Luddism

[ABSTRACT FOR A PAPER IN PREPARATION: comments welcome]

In this paper I propose Data Luddism as a radical response to the productive power of big data and predictive algorithms. My starting point is not the Romantic neo-Luddism of Kirkpatrick Sale but the historical Luddism of 1811-1816, and the Luddites' own rhetoric regarding their resistance to 'obnoxious machines' [1].

The Luddites' opposition to steam-powered machines of production was based on the new social relations of power they produced, which parallels the present emergence of data-powered algorithmic machines. Drawing on my previous work on Algorithmic States of Exception [2] I outline the operations of machine learning and datamining, and the way predictive knowledge is leading to the irruption of preemption across the social field, from employment to social services and policing. I assert that the consequent loss of agency and establishment of new powers unbalanced by effective rights can be fruitfully compared to the effect of new machinery on the nineteenth-century woollen and silk industries. Based on this I examine key aspects of Luddite resistance for their contemporary relevance.

I compare the adoption of a collective name ('General Ludd'), and the evolution of Luddism as it expanded from the customary communities of Nottinghamshire through metropolitan Manchester and the radicalised West Riding, to the trajectory of the contemporary hacktivist movement Anonymous. I highlight the political sophistication of the Luddites and the way machine breaking was situated in a cycle of negotiation, parliamentary petition and combination, and ask what this means for a contemporary resistance to data power that restricts itself to issues of privacy and ethics.

Most importantly, I assert that the Luddites had an alternative social vision of self-governance and community commons and that we, too, should posit a positive vision against the encroachment of algorithmic states of exception. However, I ask whether (in contrast to the Luddites) we can use the new machines to bring these different possibilities into being. The Luddites saw themselves as a self-governing socius, and I consider recent experiments in technology-enabled self-organisation such as 'liquid democracy' software.

Beyond this, I focus on the Luddites' call to 'put down all Machinery hurtful to Commonality' to ask if we can adapt the machines to support the commons. I examine recent proposals that the blockchain (the technology behind bitcoin) can enable distributed collaborative organizations and tackle traditional issues related to shared common-pool resources, such as the free rider problem [3]. I conclude that if we are serious about resisting the injustices that could come from data-driven algorithmic preemption we have a lot to learn from the historical Luddites, but also that we have the opportunity to 'hack' the machines in the service of a positive social vision.

[1] Binfield, K. ed., 2004. Writings of the Luddites, Baltimore: Johns Hopkins University Press.

[2] McQuillan, D., 2015. Algorithmic states of exception. European Journal of Cultural Studies, 18(4-5), pp.564–576. Available at: http://ecs.sagepub.com/content/18/4-5/564

[3] David Bollier, 2015. The Blockchain: A Promising New Infrastructure for Online Commons. Available at: http://www.bollier.org/blog/blockchain-promising-new-infrastructure-online-commons

Jan 26, 2015

Auditing Algorithms

[ABSTRACT FOR A WORKSHOP]

Algorithmic Misbehavior and Wider Proactive Engagement

We welcome the Algorithm Audits workshop as an important move towards conceptualising and investigating this emerging area. We also recognise the usefulness of auditing algorithms, as described in the workshop rationale.

However, our own research hopes to extend the scope of investigating algorithms in the world from two directions: the social effects themselves, and the specific algorithmic approaches behind them. In doing so, we also hope to shift the focus from reactive to proactive intervention.

1 In particular, the research problem we would like to tackle is the investigation of emerging algorithmic states of exception[1], where the social action of the algorithms has the force of law while escaping legal constraint. We believe that the topic of 'algorithmic misbehaviour' identified in the workshop proposal is a suitable frame for this research, because it can acknowledge both unintended consequences flowing from the opacity of algorithms and the unethical appropriation of algorithms by institutional actors.

2 We propose a range of interventions to explore this possibility. These include: identifying areas of algorithmic regulation where harmful effects are possible; using critical pedagogy with affected communities to generate data; extending the practices of software engineering to a wider set of stakeholders; and testing the findings through journalistic investigation.

3 Therefore we suggest three goals for discussion at the workshop:

i. How can we audit algorithms which act beyond online platforms?

ii. How can we investigate algorithmic misbehaviours? Here we propose journalistic techniques, both traditional journalism and the 'social forensics' of Eliot Higgins[2], and inverting investigation 'from the outside' by situating it within affected communities.

iii. How can we participate in software engineering in ways that open it up to wider discussions of impact and of 'doing no harm'?

Goal 3 (software engineering) recognises the limitations of audits, including software audits, as a) reactive, and b) unable to encompass all possible outcomes. We believe this connects to a wider debate around computation and ethics, where there are attempts to apply social values to software retroactively. This seems doomed to the same cycle of endless catch-ups as we find with legal regulation.

Our proposal is to widen the interpretative community in software engineering, bringing in social science, journalism, big data analytics and user communities at the start rather than afterwards. In software engineering, metrics based on a set of measures are often designed to provide an indication of the quality of some representation of the software. In an approach analogous to the emerging methodologies of citizen science, we suggest that wider communities can be engaged in the following software engineering steps:

i. Derive software measures and metrics that are appropriate for the representation of software that is being considered.

ii. Establish the objectives of measurement.

iii. Collect the data required to derive the formulated metrics.

iv. Analyse the metrics based on pre-established guidelines and past data.

v. Interpret the analytical results to gain insight into the quality of the software.

vi. Recommend modifications to the software or, if necessary, loop back to the beginning of the software development cycle.

(A minimal illustration of steps i and iii is sketched below.)

Overall, our research approach is one of triangulation through a multidisciplinary methodology and addressing problems through participation. We look forward to contributing to the workshop and to subsequent developments.
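As a concrete, deliberately simple illustration of steps i and iii, the sketch below derives two measures from a fragment of Python using only the standard library; the choice of measures is illustrative rather than a canonical metric suite.

```python
# Deriving simple measures from source code: function count and branch points.
# The sample code and the chosen measures are illustrative assumptions.
import ast

SOURCE = """
def triage(score):
    if score > 0.8:
        return "flag"
    elif score > 0.5:
        return "review"
    return "pass"
"""

tree = ast.parse(SOURCE)
branches = sum(isinstance(n, (ast.If, ast.For, ast.While)) for n in ast.walk(tree))
functions = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
print(f"functions: {len(functions)}, branch points: {branches}")
```

Publishing measures like these alongside the software gives non-engineers a concrete artefact to question: why these thresholds, whose data, which harms?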

[1] McQuillan, Daniel. 2015. Algorithmic States of Exception. European Journal of Cultural Studies, 18(4/5), ISSN 1367-5494 (Forthcoming)

[2] https://www.bellingcat.com/

Dr Dan McQuillan, Lecturer in Creative & Social Computing, Department of Computing, Goldsmiths, University of London d.mcquillan@gold.ac.uk
Dr Ida Pu, Lecturer in Computer Science, Department of Computing, Goldsmiths, University of London I.Pu@gold.ac.uk