[ABSTRACT FOR A WORKSHOP]
Algorithmic Misbehavior and Wider Proactive Engagement
We welcome the Algorithm Audits workshop as an important move towards conceptualising and investigating this emerging area. We also recognise the usefulness of auditing algorithms, as described in the workshop rationale.
However, our own research aims to extend the scope of investigating algorithms in the world in two directions: towards the social effects themselves, and towards the specific algorithmic approaches behind them. In doing so, we also hope to shift the focus from reactive to proactive intervention.
1. In particular, the research problem we would like to tackle is the investigation of emerging algorithmic states of exception (McQuillan 2015), where the social action of the algorithms has the force of law while escaping legal constraint. We believe that the topic of 'algorithmic misbehaviour' identified in the workshop proposal is a suitable frame for this research, because it can acknowledge both the unintended consequences flowing from the opacity of algorithms and the unethical appropriation of algorithms by institutional actors.
2. We propose a range of interventions to explore this possibility. These include:
- identifying areas of algorithmic regulation where harmful effects are possible
- using critical pedagogy with affected communities to generate data
- extending the practices of software engineering to a wider set of stakeholders
- testing the findings through journalistic investigation
3. Therefore we suggest three goals for discussion at the workshop:
i. how can we audit algorithms which act beyond online platforms?
ii. how can we investigate algorithmic misbehaviour:
- using journalistic techniques, both traditional journalism and the 'social forensics' of Eliot Higgins?
- by inverting investigation 'from the outside', situating it instead within affected communities?
iii. how can we participate in software engineering in ways that open it up to wider discussions of impact and of 'doing no harm'?

Goal 3 (software engineering) recognises the limitations of audits, including software audits, as a) reactive, and b) unable to encompass all possible outcomes. We believe this connects to a wider debate around computation and ethics, in which attempts are made to apply social values to software retroactively. This seems doomed to the same cycle of endless catch-up that we find with legal regulation.
Our proposal is to widen the interpretative community in software engineering, bringing in social science, journalism, big data analytics and user communities at the start rather than afterwards. In software engineering, metrics based on a set of measures are often designed to provide an indication of the quality of some representation of the software. In an approach analogous to the emerging methodologies of citizen science, we suggest that wider communities can be engaged in the following software engineering steps:
i. Derive software measures and metrics appropriate for the representation of software being considered.
ii. Establish the objectives of measurement.
iii. Collect the data required to derive the formulated metrics.
iv. Analyse the metrics against pre-established guidelines and past data.
v. Interpret the analytical results to gain insight into the quality of the software.
vi. Recommend modifications to the software or, if necessary, loop back to the beginning of the software development cycle.

Overall, our research approach is one of triangulation through a multidisciplinary methodology, addressing problems through participation. We look forward to contributing to the workshop and to subsequent developments.
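By way of illustration, the measurement steps above (collecting data, deriving metrics, analysing them against guidelines, and recommending modifications) can be sketched in a few lines of Python. This is a minimal, hypothetical example: the metric names, thresholds, and sample source are our own assumptions, not an established audit procedure.

```python
# Minimal sketch of a metrics loop: all names and thresholds are hypothetical.

def derive_metrics(source: str) -> dict:
    """Collect raw data from a source listing and derive simple metrics."""
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    loc = len(lines)  # non-blank lines of code
    comments = sum(1 for ln in lines if ln.startswith("#"))
    return {
        "loc": loc,
        "comment_density": comments / loc if loc else 0.0,
    }

def analyse(metrics: dict, guidelines: dict) -> list:
    """Compare metrics against pre-established guideline thresholds
    and return recommendations (possibly looping back to design)."""
    recommendations = []
    if metrics["comment_density"] < guidelines["min_comment_density"]:
        recommendations.append("add explanatory comments for wider stakeholders")
    if metrics["loc"] > guidelines["max_loc"]:
        recommendations.append("split the module; loop back to design")
    return recommendations

# Hypothetical sample: a tiny scoring routine under review.
sample = """\
# score applicants
def score(x):
    return 2 * x
"""
m = derive_metrics(sample)
print(m)
print(analyse(m, {"min_comment_density": 0.5, "max_loc": 100}))
```

In a widened interpretative community, the point would be that the guideline thresholds and the choice of metrics themselves are set with affected stakeholders, not only by the engineering team.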
References
McQuillan, Daniel. 2015. 'Algorithmic States of Exception'. European Journal of Cultural Studies, 18(4/5). ISSN 1367-5494 (forthcoming).
Bellingcat (Eliot Higgins): https://www.bellingcat.com/
Dr Dan McQuillan, Lecturer in Creative & Social Computing, Department of Computing, Goldsmiths, University of London firstname.lastname@example.org
Dr Ida Pu, Lecturer in Computer Science, Department of Computing, Goldsmiths, University of London I.Pu@gold.ac.uk