Big crisis data: generality-singularity tensions

Karolin Eva Kappler, Institute of Sociology, University of Hagen, Germany, karolin.kappler@fernuni-hagen.de

PUBLISHED ON: 15 May 2018 DOI: 10.14763/2018.2.789

Abstract

The current massive surge of digital data, measurements and new forms of (algorithmic) valuation affects emergency situations (both natural and human-made crises) and emergency management systems. With the introduction of ‘big crisis data’, the very concepts of emergency and crisis come to rely heavily on calculations of events and crowd behaviour, which constitute, control and shift the interplay between different actors. From a critical data perspective, this paper focuses on the entanglements of crisis digital data assemblages with human and institutional actions, stressing the risks and challenges of the underlying data practices of two key processes - what could be called valorisation and singularisation.
Citation & publishing information
Received: December 11, 2017 Reviewed: March 19, 2018 Published: May 15, 2018
Licence: Creative Commons Attribution 3.0 Germany
Funding: The empirical research was partly funded by the FP7 SECURITY 2013 – Grant Agreement N° 606853 – SUPER: Social sensors for security assessments and proactive emergencies.
Competing interests: The author has declared that no competing interests exist that have influenced the text.
Keywords: Big data, Big crisis data, Valuation studies, Singularity
Citation: Kappler, K. E. (2018). Big crisis data: generality-singularity tensions. Internet Policy Review, 7(2). https://doi.org/10.14763/2018.2.789

This paper is part of Networked publics, a special issue of Internet Policy Review guest-edited by William H. Dutton.

Introduction

“During the early 1990s, when the web and mobile phones were still in their infancy, it often took weeks to collect detailed information on disaster damage and needs following major disasters. Towards the end of the 2000’s, thanks to the rapid growth in smartphones, social media and the increasing availability of satellite imagery plus improvements in humanitarian information management systems, the time it took to collect crisis information was shortened. One could say we crossed the 72-hour time barrier on January 12, 2010 when a devastating earthquake struck Haiti. Five years later, the Nepal earthquake in April 2015 may have seen a number of formal responders crossing the 48-hour threshold.” (Meier, 2015b)

The development of day-to-day data practices in emergency1 management rests, on the one hand, on an ever-growing set of available ‘big data’ (see Uprichard, 2013; Ulbricht & von Grafenstein, 2016; De Filippi, 2014). On the other hand, it challenges traditional human-centred analysis in various ways (e.g., as summarised by Olteanu et al., 2015), as the collected data present an almost insurmountable obstacle for the processing power of humans. Machine-based computational approaches seem to be the only option for meeting this challenge, both regarding the volume and velocity of data generated during an emergency event and the immediate need to find valuable information, which retains little value in ex-post analyses.

In general terms, this in-time analysis of incoming Big Crisis Data (Castillo, 2016) has become consolidated and consists of such activities as predicting, detecting and monitoring crisis events and their evolution; predicting, monitoring and controlling crowd behaviour through sentiment or network analysis; and detecting rumours, classifying texts and analysing images to find specific information regarding sub-groups (e.g., people seeking help or shelter, people offering help, etc.) (see Castillo, 2016; Meier, 2015a). This data-based crisis management constitutes, controls and shifts the interplay between different actors – the population directly affected by the emergency, volunteers, networked individuals (mainly on social media platforms), civil protection agencies, but also developers of algorithms, researchers, big data experts and ‘the crowd’ – by means of mainly artificial intelligence (AI) driven emergency technologies. Further, as human lives are at stake, this technology-mediated arrangement of individuals and groups faces specific requirements regarding the reliability and precision of results, while at the same time it is confronted with concerns such as protecting the privacy of affected individuals in especially vulnerable positions.

Therefore, the paper adopts a data-sensitive and critical perspective.

Following reflections in critical disaster studies, Crawford and Finn (2015) point to the degree to which crises are bound in space and time. Crises also represent unique, singular events, which – at least in data-driven emergency management – have to be calculated. Hence, these practices can be interpreted as a new manifestation of “calculating the social“ (Vormbusch, 2012) and “controlling the future” (Vormbusch, 2009). Relevant aspects of the social are being redefined on the basis of calculative practices (Hopwood & Miller, 1994). Whereas critical accounting studies and parts of economic sociology are influenced by the notion that numbers should be studied as a dominant form of cognitive knowledge, this paper studies quantitative approaches to emergency management as an emerging practice within the transdisciplinary frame of ‘valuation studies’ (Lamont, 2012). From this perspective, data-driven emergency management systems presuppose the ongoing calculation and valuation of (transient) events.

The paper asks (1) what these calculative practices look like and how they come into practice, and (2) how they mediate between and affect the roles of the various stakeholders involved in a crisis. The aim is not only to shed light on the interplay between different actors, but also to understand how measurements and visualisations – based on big data analysis and algorithms – draw attention to (singular) events. The next section develops the theoretical approach of this study, anchored around the idea of valuating singularities. The paper then presents three contrasting platforms as examples – an open platform for disaster response, Facebook’s disaster response and Google Public Alerts – before concluding with some remarks on the role of calculated singularities in the context of big crisis data.

Theoretical framework: Valuating singularities

From a critical perspective on data, the present paper follows transdisciplinary ‘valuation studies’ (Lamont, 2012), as the focus lies on calculative practices and how they mediate between and affect the roles of the actors involved. Valuation studies examine the social practices by which actors attribute value to social events, individuals and/or groups. Most of these studies reveal how multiple registers of worth are implicitly or (in the case of algorithms) explicitly drawn together to establish notions of worth and to make things commensurable – and thereby comparable – hence how to “attribute a monetary value to intangible things“ (Fourcade, 2011, p. 1721) and how to find valuable insights. Valuation understood as a practice is thus performative (Krüger & Reinhart, 2017).

This permanent valuation and valorisation – as one of the key practices of current society – leads to what Reckwitz (2017) designates as a “society of valorisation”2. Through the process of valorisation, singular objects, individuals – but also events and groups – are recognised (and hence created) by means of their “self-complexity with inner density” (Reckwitz, 2017, p. 61). Practices of observation, valorisation, production, and appropriation (Reckwitz, 2017, p. 29) form part of “doing singularity” (Reckwitz, 2017). Reckwitz does not follow Kurzweil’s understanding of “technological singularity” (Kurzweil, 2005) in the context of artificial intelligence and transhumanism, but defines singularity as a performative social process, which follows the logic of the unique in contrast with the logic of the general (Reckwitz, 2017, p. 11). This singularisation is – according to Reckwitz – based on the interplay of three structural elements: the rise of cultural capitalism, the “postromantic revolution of authenticity” and the success of digital technologies (Reckwitz, 2017, p. 19; emphasis in the original version). Focusing on the last of these elements, “(t)he technological complex of computers, digitalism and the internet allows and forces a continuing fabrication of subjects, objects and collectives as unique” (Reckwitz, 2017, p. 227).

Observation, valorisation, creation and adoption thus form part of the practices of “doing singularity” and the so-called “work of singularisation” (Reckwitz, 2017, p. 68). By this, Reckwitz underlines the importance of the digital network which, on the one hand, does generality by standardising, classifying and typifying (Reckwitz, 2017, pp. 31-36) within “machine-machine-interaction” (pp. 232-233), while, on the other hand, it favours singularities by detecting, certifying and valuating them. In this sense, there is a permanent oscillation between generality and singularity in day-to-day (data) practices, but – according to Reckwitz’s hypothesis – the overall tendency of contemporary society evolves towards a “society of singularities” (Reckwitz, 2017).

From this point of view, data-driven emergency management systems require the permanent calculation and valuation of (transient and singular) events. By detecting and recognising events as relevant (or irrelevant) or classifying pertinent information, calculating devices are organising and – probably even more importantly – creating what is going on in the (crisis) world. By this, they render some things visible and others invisible, consequently generating and constructing events and actionable insights, and thereby contributing to the “emergency imaginary” (Calhoun, 2004). This social imaginary relies on the underlying social and cultural dynamics that shape both the production of emergencies and the production of responses. Calhoun emphasises, for example, the (current) managerial orientation in emergency relief, with emergencies considered disruptions of normal life; the naturalisation of emergencies, attributing them to natural (and not human-made) causes; or the understanding of emergencies as short-term and solvable problems. These elements influence both the perception of and action in emergencies, hence shifting the understanding and construction of the concept itself. The emergency imaginary “is not merely a description of the world, more or less accurate, but an abstraction that plays an active role in constituting reality itself” (Calhoun, 2004, p. 17).

Methodology

The paper is based on participant observation within a three-year, EC-funded project entitled SUPER - Social sensors for secUrity Assessments and Proactive EmeRgencies Management.3 This included participant observation of technical meetings, review sessions and weekly telephone conferences; qualitative interviews with involved partners and associated end-users; questionnaires with affected and involved publics (e.g., participating volunteers in validation pilots); a review of the current state of the art; a comparison of different best practices and case studies; and presentations and group discussions with emergency agencies, experts and stakeholders.4

The following examples arise from a qualitative content analysis of this research. They were selected as illustrative examples to show data practices, involving different actors and the generalisation of valuable insights by means of different assemblages:

1. Artificial Intelligence for Disaster Response (AIDR) - as an example for an open platform dependent on crowd-sourcing work;

2. Facebook disaster and crisis response - as an example for a platformised service, mainly based on metadata analysis;

3. Google public alerts - as an example of a user-centred platformised service.

To reduce complexity, the following analysis will be limited to some key features of the corresponding actor-networks: data providers, types of data collected, types of platform and basic functioning, target groups, and the development and implementation of taxonomies.

AIDR – Artificial Intelligence for Disaster Response platform

AIDR5 combines human and artificial intelligence with the aim of finding actionable insights for humanitarian organisations, such as the Red Cross, by “automatically identify(ing) needs and offers of help that were being posted on Twitter” (Meier, 2015a, p. 100). It was initially developed by a group of data scientists at the Qatar Computing Research Institute. It is implemented as a free software platform that can run as a web application or can be downloaded to create one’s own instance. Humanitarian organisations, individuals or other institutions can use the open-access platform for their needs and in their specific contexts.

The AIDR platform works as follows: in a first step, it crawls Twitter for relevant tweets (based on specific hashtags, geolocation, etc.). Hence, at the beginning of the process, two platforms (AIDR and Twitter) have to interact through the corresponding API, with AIDR directly depending on Twitter’s policies: a policy change, such as the limitation (or not) of tweets to 140 characters, consequently affects the whole AIDR platform. In a second step, “the crowd tags tweets and messages they find relevant and the AI engine learns to recognize the relevance patterns in real-time, allowing AIDR to automatically identify future tweets and messages” (Meier, 2015b). Through this process, AIDR enables different (human) actors and machines to work together and to “apply human intelligence to large-scale data at high speed” (Imran et al., 2014). For this, AIDR classifies the content of the messages and does not rely on metadata; metadata could additionally be used to geolocate the classified tweets on a corresponding map if further features were added to the existing platform. Moreover, the platform is flexible regarding the cultural context, can be used in any time frame considered necessary, and requires minimal informatics knowledge. Most of all, however, it requires a good and stable network of actors (publics), as the platform’s functioning is based on the interplay between the platform itself, tweets generated by users, a crowd of volunteers who annotate the tweets, an algorithm which learns from the training data set, and an organisation which uses the insights (see Imran et al., 2014).
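To make this human-in-the-loop learning step more concrete, the following is a minimal sketch of crowd-trained, real-time relevance classification, assuming scikit-learn as a stand-in for AIDR’s actual learning engine; the function names and sample tweets are hypothetical illustrations. An incremental (online) learner fits the time-critical setting described above, since each new batch of crowd labels updates the model without retraining from scratch.

```python
# A minimal sketch, assuming scikit-learn; not AIDR's actual implementation.
# Tweets tagged as relevant/irrelevant by the crowd arrive in small batches;
# an incremental classifier updates on each batch and scores new tweets.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
classifier = SGDClassifier(loss="log_loss")  # logistic loss -> probabilities

def update_from_crowd(tagged_batch):
    """tagged_batch: list of (tweet_text, label) pairs, label in {0, 1}."""
    texts, labels = zip(*tagged_batch)
    X = vectorizer.transform(texts)
    classifier.partial_fit(X, labels, classes=[0, 1])  # incremental update

def score_incoming(tweets):
    """Return the estimated probability that each new tweet is relevant."""
    X = vectorizer.transform(tweets)
    return classifier.predict_proba(X)[:, 1]

# Hypothetical usage: one crowd-labelled batch, then real-time scoring.
update_from_crowd([("need shelter in Kathmandu", 1), ("good morning all", 0)])
print(score_incoming(["urgent: family trapped, seeking help"]))
```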

This represents a complex interplay of different actors and data on a free platform, which offers an open API to a commercial platform and real-time machine-learning algorithms, powered by crowdsourcing, to automatically identify relevant tweets and text messages in an exploding “meadow of digital data” (Meier, 2015b). Apart from its dependence on Twitter, AIDR also relies on close human-machine interaction, which presents specific challenges, such as enabling time-critical crowd and volunteer mobilisation, retaining volunteers, or reducing the corresponding articulation work (Castillo, 2016, p. 117; Burns, 2015; Al-Ani & Stumpp, 2016). At the same time, these actors – networked through a specific AI-powered platform – constitute a complex system with its own accountability (see also Eggenschwiler, 2017). Thus, the corresponding valuating data practices are embedded in the already existing crisis practices of humanitarian aid organisations, enhancing the cultural and context-sensitive tagging and classification performed by humans.

Facebook – Disaster and crisis response

In cooperation with international and humanitarian organisations (as well as experts in digital humanitarianism), Facebook has recently developed a number of services to share actionable and real-time data during disasters with humanitarian aid organisations, addressing concerns about privacy preservation and the consistency of data sets with legal standards (Meier, 2017). The aim is to fill critical data gaps – which exist mainly in the first hours of a disaster – by means of updates every 15 minutes, providing access to data from some 1.86 billion online users (Meier, 2017).

Facebook’s evolving set of ‘services for disaster and crisis response’ includes:

1- ‘Safety Check’, which connects Facebook users with friends and family members during a disaster, inviting users who might be affected by a crisis to ‘check in safe’ by means of a click. After some initial criticism regarding the (manual and internal) activation mechanism of Safety Check, Facebook implemented a new procedure combining event detection – specifically, a certain number of people posting about a specific crisis – with an authorised signal – based on an alert from one of Facebook’s third-party sources (see the sketch after this list);6

2- ‘disaster maps’, or density and movement maps, which are available to humanitarian end-user agencies through dedicated APIs and visualisations, providing information about the location and specific movements of populations at risk. These maps, aggregated across time and space, are mainly based on the processing of metadata (for further detail, see Maas et al., 2017).
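To illustrate the two-signal activation mechanism described in the first item above, the following is a minimal sketch of the trigger logic – event detection combined with an authorised third-party alert. All names and thresholds are hypothetical illustrations, not Facebook’s actual implementation.

```python
# A minimal sketch of the two-signal Safety Check activation logic, under
# assumed (hypothetical) names and thresholds.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ThirdPartyAlert:
    source: str      # e.g., a trusted global disaster alerting feed
    event_type: str  # e.g., "earthquake"
    region: str

def should_activate_safety_check(posts_about_event: int,
                                 alert: Optional[ThirdPartyAlert],
                                 post_threshold: int = 10_000) -> bool:
    """Activate only when event detection (enough people posting about the
    crisis) coincides with an authorised signal from a third-party source."""
    detected = posts_about_event >= post_threshold  # crowd signal
    authorised = alert is not None                  # institutional signal
    return detected and authorised

# Hypothetical usage: 25,000 crisis posts plus a confirmed earthquake alert.
quake = ThirdPartyAlert("trusted-feed", "earthquake", "Region X")
print(should_activate_safety_check(25_000, quake))  # -> True
```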

Facebook’s disaster response services generate new data assemblages by “reassembling the social” (Striphas, 2015, p. 406), in this case centralised on a private, commercial platform, which monopolises, valuates and capitalises on the ‘lively data’ of people, networks and emergencies. By this, platforms – as key players in the power play of data practices – not only accumulate (meta-)data and generate instant and retrospective “knowledges” (Burns, 2015, p. 9), but also monopolise “network-making power” (Castells, 2009, pp. 42-47). That is, they make networks in their role as programmers and switchers, constituting and (re)programming sub-networks according to new goals.

Hence, new services for crisis management not only configure a different network with new (institutional) partners; the insights drawn from this network and its (meta-)data can also be (re-)connected to other sub-networks. There is thus a conceivable risk that different services on the same platform share knowledge about the needs of specific groups or users, in turn commercialising and capitalising on the vulnerability of an already vulnerable and affected population. By doing so, the detection of singular (crisis) events is automatically connected to the identification of singularly vulnerable groups and/or single users. In the context of ongoing platformisation (Gillespie, 2015), this corresponds to the commercialisation of the singularities of events, groups and victimisation, which raises major ethical implications and demands specific regulation to counteract secondary victimisation through algorithms, such as focussing on some affected groups while forgetting others.

Google Public Alerts

In contrast to the previous two examples, Google’s alerts – designed to respond to natural disasters, severe weather warnings and terror attacks – function as an “emergency broadcast service” (Interview I01).7 Google cooperates with government agencies, as its event detection relies entirely on official and governmental partner agencies providing the required – and hence automatically confirmed – information in a specifically structured format. In a second step, Google expands this information with information from Google News and its traffic app Waze. By augmenting the officially provided information with real-time situational and additional context information, Google offers push alerts for registered users and personalised maps with specific (local) information on shelter and the availability of other emergency relief. Google’s main challenge lies in “making its alerts more actionable” (Interview I01), struggling with end-users’ data (il)literacy, their lack of local knowledge, or specific problems linked to stress reactions in emergency situations. Consequently, Google aims to provide valuable personalised information and actionable insights for each specific user.
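The ‘specifically structured format’ in which partner agencies supply confirmed information is, in public warning systems such as Google’s, typically the Common Alerting Protocol (CAP), an OASIS XML standard. The following is a minimal sketch, using Python’s standard library, of how such a message can be parsed; the sample alert is invented for illustration, and real CAP messages carry further mandatory fields (sender, sent, status, etc.).

```python
# A sketch of ingesting a public alert in the Common Alerting Protocol (CAP).
# The sample message below is a hypothetical, abbreviated illustration.
import xml.etree.ElementTree as ET

CAP_NS = {"cap": "urn:oasis:names:tc:emergency:cap:1.2"}

sample = """<alert xmlns="urn:oasis:names:tc:emergency:cap:1.2">
  <identifier>example-2018-001</identifier>
  <info>
    <event>Severe Thunderstorm Warning</event>
    <urgency>Immediate</urgency>
    <severity>Severe</severity>
    <area><areaDesc>Example County</areaDesc></area>
  </info>
</alert>"""

root = ET.fromstring(sample)
info = root.find("cap:info", CAP_NS)
event = info.findtext("cap:event", namespaces=CAP_NS)
severity = info.findtext("cap:severity", namespaces=CAP_NS)
area = info.find("cap:area", CAP_NS).findtext("cap:areaDesc", namespaces=CAP_NS)
print(f"{event} ({severity}) for {area}")  # basis for a personalised push alert
```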

This example differs from the previous two platforms in that it defines Google’s own users as the target group of the calculated and valorised actionable insights. Continuing the reflection on the platformisation of big crisis data from the previous example, Finn et al. (2017) write accordingly “that the use of big data in crisis exemplifies the Janus-faced nature of surveillance, as crises are a key area in which the “care” elements of surveillance practices emerge, but where control elements of surveillance may also be apparent.” Consequently, Google’s aim to provide ‘actionable insights’ for its users through its alerts combines practices of care and surveillance, e.g., using users’ geolocation data to detect their location, to offer them the right information for that specific location, to follow their movements, etc. – by means of calculating and valuing their singularities. In order to mitigate some of the corresponding risks, the involvement of humanitarian organisations seems crucial (see Finn et al., 2017), as they have already established protocols for emergency-sensitive practices.

Conclusion

The examples of the interplay of different actors and platforms in big crisis data highlight three specific aspects relevant for further ethical reflection; Table 1 summarises the key features of the three cases:

Table 1: Key features of data practices and involved actors

| | AIDR | Facebook disaster and crisis response | Google public alerts |
| --- | --- | --- | --- |
| Data provider | Twitter users / other platform via API | FB users / own platform (all data, also retrospectively available) | Trusted government and news sources, plus aggregate information from Google News and the traffic app Waze |
| Type of data | Analysis of content data | Specific content data, but mainly analysis of metadata | Combination of content and metadata |
| Platform | Open platform | Closed commercial platform | Closed commercial platform |
| Target group | Humanitarian aid organisations | Humanitarian aid organisations | Google users |
| Development and implementation of taxonomies | Humanitarian aid organisations / crowd of digital humanitarian volunteers / algorithm | FB engineers and developers + experts in the humanitarian field | Google engineers and developers + experts in the humanitarian field |

First, the ‘flowing’ of data from one platform to another – as in the case of Twitter data flowing to the AIDR platform, or from ‘trusted government and news sources’ to Google – stresses the networked, hybrid and flexible character of data, which are permanently assembled and reassembled. Van Dijck (2014) describes this dynamic as the “gradual normalization of datafication as a new paradigm in science and society” (van Dijck, 2014; emphasis in the original version). In this sense, datafication is defined as the “transformation of social action into online quantified data, thus allowing for real-time tracking and predictive analysis” (Mayer-Schönberger & Cukier, 2013). These unfolding (meta-)data practices affect human beings, as their singularity is defined by means of their datafied assemblages, representing them as simple “dividuals” (Deleuze, 1992). But they also redefine policies and concrete help in emergency events by introducing new forms of valorisation and, consequently, new forms of biopolitical and technological control – in other words, “dataveillance” (Pasquinelli, 2014, p. 328). Future ethical guidelines8 should explicitly address the lively nature of big crisis data, considering not only the implications of (available) data, but also the biases introduced by missing data or by the reification of “social and power relations, worldviews and epistemologies” (Boersma & Fonio, 2018, p. 4). Further, the protection of especially vulnerable and victimised populations, in order to avoid possible (algorithmic) revictimisation, should be considered in these reflections.

Second, the platformisation of emergencies transfers their detection and description into the logic of data monopolisation, commercialisation or crowd-activation, translating “local knowledges into practices that transcend the local situation” (Imran et al., 2014, p. 20) and transferring them into attention-driven, accelerated time dynamics. This new logic affects the “emergency imaginary” (Calhoun, 2004), as it focuses on the acute catastrophe, neglecting long-term causes and consequences and “(reifying) a problematic short-term conception of disasters” (Crawford & Finn, 2015, p. 493). By doing so, big crisis data reinforces the focus on the hazard – that is, the agent of an emergency – instead of the disaster, the latter referring to the “social phenomena, characterized by a disruption of routine and of social structure, norms, and/or values” (Castillo, 2016, p. 14). Following this critical perspective, a much longer view is necessary to understand a disaster, and particularly the suffering during and after a disaster (Erikson, 1976). A specific challenge for future ethical guidelines could be how to combine both time frames in favour of possible synergies.

This leads to a third point: on the one hand, the (datafied) detection of crisis events, or of relevant information thereon, can be described as “doing singularity”, producing power shifts and hence “algorithmic states of exception” (McQuillan, 2015). On the other hand, the very search for and valorisation of singularities leads to a tendency to generalise beyond these events, groups or pieces of information, as big crisis data follows the short time frame of social media and its patterns of production and attention. This short-term perspective is likely to induce corresponding shifts in the “emergency imaginary” (Calhoun, 2004), requiring ever more new and singular data in order to detect, classify and describe singular events and actors.

Hence, during the 2017 hurricane season – with more than a dozen named storms, ten hurricanes and six major hurricanes – the interplay between singular and normalised events could be studied, as exceptionality became the new norm, sidelining less datafied and mediatised emergency situations. The singularity of events becomes the new (unintended) general pattern, which repeats the imaginary norm of exceptionality to the degree that (at least smaller and locally limited) emergencies are normalised.

This algorithmic undermining of states of exception, by intervening in the balance between singular and general patterns, raises ethical implications not only for upcoming crises and the actors involved, defined through their singularity, but in particular for the no-longer-singular crises and actors, which no longer receive the attention they might well need.

References

Al-Ani, A. & Stumpp, S. (2016). Rebalancing interests and power structures on crowdworking platforms. Internet Policy Review, 5(2). doi:10.14763/2016.2.415

Boersma, K. & Fonio, C. (Eds.). (2018). Big Data, Surveillance and Crisis Management. London & New York: Routledge.

Burns, R. (2015). Rethinking big data in digital humanitarianism: practices, epistemologies, and social relations. GeoJournal, 80(4), 477-490. doi:10.1007/s10708-014-9599-x

Calhoun, C. (2004). A world of emergencies: Fear, intervention and the limits of cosmopolitan order. Canadian Review of Sociology / Revue canadienne de sociologie, 41(4), 373-395. doi:10.1111/j.1755-618X.2004.tb00783.x

Castells, M. (2009). Communication Power. Oxford: Oxford University Press.

Castillo, C. (2016). Big Crisis Data. Social Media in Disasters and Time-Critical Situations. Cambridge: Cambridge University Press.

Crawford, K. & Finn, M. (2015). The limits of crisis data: analytical and ethical challenges of using social and mobile data to understand disasters. GeoJournal, 80(4), 491-502. doi:10.1007/s10708-014-9597-z

De Filippi, P. (2014). Big data, big responsibilities. Internet Policy Review, 3(1). doi:10.14763/2014.1.227

Deleuze, G. (1992). Postscript on the Societies of Control. October, 59, 3-7. Retrieved from http://www.jstor.org/stable/778828

Eggenschwiler, J. (2017). Accountability challenges confronting cyberspace governance. Internet Policy Review, 6(3). doi:10.14763/2017.3.712

Erikson, K.T. (1976). Everything in its path: Destruction of community in the Buffalo Creek Flood. New York: Simon & Schuster.

Fourcade, M. (2011). Cents and Sensibility: Economic Valuation and the Nature of “Nature”. American Journal of Sociology, 116(6), 1721-77. doi:10.1086/659640

Gillespie, T. (2015). Platforms Intervene. Social Media + Society, 1(1). doi:10.1177/2056305115580479

Hopwood, A. & Miller, P. (1994). Accounting as social and institutional practice. Cambridge: Cambridge University Press.

Imran, M., Castillo, C., Lucas, J., Meier, P., & Vieweg, S. (2014). AIDR: artificial intelligence for disaster response. In WWW ’14 Companion: Proceedings of the 23rd international conference on World Wide Web (pp. 159–162). Seoul: ACM Press. doi:10.1145/2567948.2577034

Krüger, A.K. & Reinhart, M. (2017). Theories of Valuation – Building Blocks for Conceptualizing Valuation Between Practice and Structure. Historical Social Research, 42(1), 263-285. doi:10.12759/hsr.42.2017.1.263-285

Kurzweil, R. (2005). The Singularity Is Near. New York: Viking.

Lamont, M. (2012). Toward a comparative Sociology of Valuation and Evaluation. Annual Review of Sociology, 38(1), 201-221. doi:10.1146/annurev-soc-070308-120022

Lewandowsky, S., Gignac, G.E. & Oberauer, K. (2013). The Role of Conspiracist Ideation and Worldviews in Predicting Rejection of Science. PLOS ONE, 10(8), e0134773. doi:10.1371/journal.pone.0134773

Maas, P., Nayak, C., Dow, A., Gros, A., Mason, W., Filiz, I. O., … Patel, D. (2017, June 7). Facebook Disaster Maps: Methodology. Retrieved from https://research.fb.com/facebook-disaster-maps-methodology

Mayer-Schönberger, V. & Cukier, K. (2013). Big Data. A Revolution That Will Transform How We Live, Work and Think. London: John Murray.

McQuillan, D. (2015). Algorithmic States of Exception. European Journal of Cultural Studies, 18(4-5). doi:10.1177/1367549415577389

Meier, P. (2015a). Digital Humanitarians. How BIG DATA Is Changing the Face of Humanitarian Response. Boca Raton, FL: CRC Press, Taylor & Francis Group.

Meier, P. (2015b, March 16). Artificial Intelligence Powered by Crowdsourcing: The Future of Big Data and Humanitarian Action. Retrieved March 16, 2018, from https://irevolutions.org/2015/03/16/humanitarian-artificial-intelligence/

Meier, P. (2017, June 7). The Future of Crisis Mapping is Finally Here. Retrieved March 16, 2018, from https://irevolutions.org/2017/06/07/crisis-mapping-future/

Olteanu, A., Vieweg, S., & Castillo, C. (2015). What to Expect When the Unexpected Happens: Social Media Communications Across Crises. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (pp. 994–1009). New York, NY: ACM. doi:10.1145/2675133.2675242

Pasquinelli, M. (2014). Der italienische Operaismo und die Informationsmaschine. In R. Reichert (Ed.), Big Data: Analysen zum digitalen Wandel von Wissen, Macht und Ökonomie (pp. 313-332). Bielefeld: Transcript Verlag.

Pasquinelli, M. (2015a). Italian Operaismo and the Information Machine. Theory, Culture & Society, 32(3), 49-68. doi:10.1177/0263276413514117

Pasquinelli, M. (2015b, January). Anomaly Detection: The Mathematization of the Abnormal in Metadata Society. Panel talk presented at transmediale 2015, Berlin. Retrieved from https://transmediale.de/content/presentation-by-matteo-pasquinelli-all-watched-over-by-algorithms

Pasquinelli, M. (2017). Arcana Mathematica Imperii: The Evolution of Western Computational Norms. In M. Hlavajova & S. Sheikh (Eds.), Former West: Art and the Contemporary After 1989. Cambridge, MA: The MIT Press.

Power, M. (2004). Counting, control and calculation: Reflections on measuring and measurement. Human Relations, 57(6), 765-783. doi:10.1177/0018726704044955

Reckwitz, A. (2017). Die Gesellschaft der Singularitäten. Berlin: Suhrkamp.

Striphas, T. (2015). Algorithmic culture. European Journal of Cultural Studies, 18(4-5), 395-412. doi:10.1177/1367549415577392

Ulbricht, L. & von Grafenstein, M. (2016). Big data: big power shifts?. Internet Policy Review, 5(1). doi:10.14763/2016.1.406

Uprichard, E. (2013, October 1). Focus: Big Data, Little Questions? Discover Society. Retrieved from https://discoversociety.org/2013/10/01/focus-big-data-little-questions/

van Dijck, J. (2014). Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology. Surveillance & Society, 12(2), 197-208. Retrieved from https://ojs.library.queensu.ca/index.php/surveillance-and-society/article/view/datafication

Vollmer, H. (2007). How to Do More with Numbers. Elementary Stakes, Framing, Keying, and the Three-Dimensional Character of Numerical Signs. Accounting, Organizations and Society, 32, 577-600. doi:10.1016/j.aos.2006.10.001

Vormbusch, U. (2009, February). Controlling the Future – Investing in People. Paper presented at the London School of Economics and Political Science, Department of Accounting, London. Retrieved from https://www.academia.edu/9774942/Controlling_the_Future_-_Investing_in_People

Vormbusch, U. (2012). Die Herrschaft der Zahlen. Zur Kalkulation des Sozialen in der kapitalistischen Moderne. Frankfurt & New York: Campus Verlag.

Footnotes

1. The terms emergency, crisis, disaster and catastrophe are used as synonyms in this paper, although slight differences exist between them. They generally include both naturally caused and human-made emergencies. The current paper centres on natural disasters, as the focus lies more on the policy implications of event detection and classification problems, and less on disinformation and rumour detection, the latter being more prominent during human-induced crises (Lewandowsky et al., 2013).

2. All terms and citations from Reckwitz (2017) are translated from German by the author. The English version of the book will be published by Polity Press.

3. The project ran from April 2014-March 2016. For more information please see: http://super-fp7.eu/

4. The analysis is based on the following material: 31 qualitative interviews with developers of algorithms and services, researchers, stakeholders and practitioners in emergencies; 48 research reports, 18 scientific publications, websites and blogs; five field diaries of group discussions and meetings; two questionnaires with affected and involved stakeholders (e.g., participating volunteers in validation pilots).

5. For further information please see: http://aidr.qcri.org/ The following analysis is based on interviews with and publications of the researchers involved in the development of the platform.

6. See https://www.facebook.com/about/crisisresponse/ The following analysis is based on document review and field diaries.

7. The analysis is based on interviews with researchers - mainly engineers, and developers at Google (on location in Zurich, Munich, California) working in the areas of Google Alerts, Google Cloud Platform, safety issues; as well as a document review and reports.

8. Due to the networked and platformised entanglements of actors, only international organisations can lead the initiative of the development of and compliance with these guidelines.
