The violence of the majority: Rethinking AI positionality in decision-making

Mennatullah Hendawy, Center for Advanced Internet Studies (CAIS), Bochum, Germany

PUBLISHED ON: 13 Jan 2025

Acknowledgements

I would like to express my gratitude to Dr. Zhiyu Lin for the insightful thoughts shared during our conversation at the Game User Interaction and Intelligence Lab, University of California, Santa Cruz, in September 2024. His ideas and perspectives served as the spark that inspired this article and significantly enriched my understanding of AI applications in decision-making.

The concept of consensus has long been celebrated as a cornerstone of democratic decision-making. However, when viewed through the lens of AI, consensus can reveal a darker side – what some call the "violence of the majority." AI systems, trained predominantly on majority perspectives, often marginalise the voices of underrepresented groups (Ricaurte, 2022; Shams et al., 2023). This exclusion is not necessarily the fault of engineers or designers but rather a structural issue rooted in insufficient or biased data. Systems built around the most common population distributions inherently fail the people who fall outside their training data, creating a vicious cycle in which the underrepresented become further excluded (Buolamwini & Gebru, 2018).

Addressing these inequities begins with a fundamental question: Who builds the indicators and metrics that AI uses to guide decisions? Current systems, reliant on datasets that reflect societal biases, struggle to adapt without data representative of marginalised communities (Bose, 2024; Noriega Campero, 2019). This inability to "actively explore" or acquire new information exacerbates AI's limitations, particularly in contexts of distributional shifts, where the input data diverges significantly from the system's training data (Cherepanova et al., 2023). These challenges are particularly acute when AI interacts with underrepresented groups, risking reinforcement of existing disparities under the guise of objectivity.

The issue lies not merely in data imbalances but in the structural design of AI systems. For example, AI-based agricultural tools deployed in the Global South often fail to account for local languages, customs, or gender dynamics. In India, predictive tools designed for crop yield optimisation have struggled to address the unique challenges faced by smallholder farmers, particularly women, who often lack access to the technologies assumed to be universal (Rupavatharam et al., 2023). Such cases show that the need for inclusive AI systems is not confined to Western contexts but is a global design problem.

In fact, many AI models are optimised for accuracy across the largest dataset segments, inherently sidelining minority experiences (Buolamwini & Gebru, 2018). This approach reflects a utilitarian logic that prioritises efficiency over equity, creating a self-reinforcing cycle of exclusion (Ravanera & Kaplan, 2021). Consider facial recognition algorithms, which consistently perform worse for people with darker skin tones – a glaring example of how datasets drawn predominantly from lighter-skinned populations lead to biased outcomes (Buolamwini & Gebru, 2018). These biases are not incidental; they stem from the metrics and indicators chosen to guide AI decisions, metrics often defined without the input of marginalised groups (Ferrer et al., 2021; Hagendorff, 2022).
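As a rough illustration of this dynamic (a toy sketch with synthetic data, not drawn from the sources cited above), the snippet below trains a single classifier on data pooled from a 90% majority group and a 10% minority group whose feature–label relationship differs. Overall accuracy looks strong while accuracy on the minority group collapses, which is exactly what aggregate metrics hide.

```python
# Toy sketch: overall accuracy can mask failure on an underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, flip):
    """One feature predicts the label, but the direction of the relationship
    differs between groups (a stand-in for a distributional gap)."""
    x = rng.normal(size=(n, 1))
    y = (flip * x[:, 0] > 0).astype(int)
    return x, y

x_maj, y_maj = make_group(9000, flip=1.0)   # 90% majority group
x_min, y_min = make_group(1000, flip=-1.0)  # 10% minority group
X, y = np.vstack([x_maj, x_min]), np.concatenate([y_maj, y_min])

model = LogisticRegression().fit(X, y)       # optimised on the pooled data

print("overall accuracy :", accuracy_score(y, model.predict(X)))          # ~0.90
print("majority accuracy:", accuracy_score(y_maj, model.predict(x_maj)))  # ~1.00
print("minority accuracy:", accuracy_score(y_min, model.predict(x_min)))  # ~0.00
```

Reporting performance disaggregated by group, as in the last two lines, is the minimal step needed to make such disparities visible at all.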

Moreover, AI’s limitations become even more pronounced in contexts of distributional shifts, where input data diverges from the training data. These shifts often occur in rapidly changing environments, such as public health crises, climate disasters, or sociopolitical upheavals (Rauba et al., 2024). In such cases, AI systems are prone to failure, as they cannot autonomously acquire or interpret new data reflective of the altered context. This inability to adapt is particularly detrimental for marginalised communities, whose experiences may deviate most significantly from the normative patterns encoded in AI models (Ofosu-Asare, 2024). For example, during the COVID-19 pandemic, predictive models for resource allocation often failed to account for the unique vulnerabilities of minority populations. To rectify this, AI systems must be designed not only to "learn" from historical data but to actively seek out and incorporate new information that captures diverse and shifting realities.
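One minimal safeguard of this kind, sketched below with synthetic numbers rather than any of the systems discussed here, is to compare incoming inputs against the training distribution and flag when the model is operating outside the conditions it was built for, so that new, context-specific data can be collected instead of trusting stale predictions.

```python
# Toy sketch: flag a distributional shift before trusting a model's output,
# using a two-sample Kolmogorov-Smirnov test on a single feature.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # pre-crisis data
live_feature = rng.normal(loc=1.5, scale=1.3, size=500)    # shifted context

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    # Inputs no longer resemble the training data: predictions should be
    # treated with caution and fresh, context-specific data gathered.
    print(f"Distribution shift detected (KS={stat:.2f}, p={p_value:.1e})")
else:
    print("No significant shift detected")
```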

So what? The need to rethink the positionality of AI

The root of the problem lies in the positionality of those building and deploying AI systems. Positionality refers to the social and cultural context that shapes an individual's identity, perspectives, and assumptions, and thereby how they engage with knowledge production, decision-making, and interactions with others. In academic research and critical theory, positionality calls for acknowledging how one's background, privileges, biases, and place within power dynamics – including factors such as race, gender, socioeconomic status, and cultural background – shape one's worldview. The concept is central to fields like anthropology, sociology, and AI ethics because it foregrounds reflexivity and the recognition of potential biases in knowledge creation and system design (O’Neill, 2024; Rose, 1997). In AI specifically, positionality is increasingly discussed as a factor influencing design, development, and deployment: the creators’ positionality can produce blind spots in system design, reinforcing existing biases and excluding marginalised voices (Aguilar et al., 2023).

Metrics and indicators, the lifeblood of AI decision-making, are rarely neutral. They are shaped by the perspectives, priorities, and blind spots of their creators. If the creators lack diverse representation, the resulting systems will reflect those gaps (Roberts, 2018). For instance, in predictive policing, data about past arrests often reflects systemic racism in law enforcement. When such data is fed into AI, the system reproduces the same discriminatory patterns under the guise of scientific objectivity (Rossbach, 2023).
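To make the feedback mechanism concrete, the hypothetical simulation below (illustrative numbers only, not based on any cited study) allocates patrols in proportion to recorded arrests. Even though both districts have identical underlying offence rates, the district that was over-policed historically keeps attracting more patrols, and therefore more recorded arrests, so the initial bias is perpetuated rather than corrected.

```python
# Toy sketch: arrest records shaped by past policing perpetuate themselves
# when they are used as the "risk score" for allocating future patrols.
import numpy as np

rng = np.random.default_rng(2)
true_offence_rate = np.array([0.05, 0.05])  # identical underlying behaviour
recorded_arrests = np.array([50.0, 100.0])  # district B over-policed historically

for year in range(5):
    patrol_share = recorded_arrests / recorded_arrests.sum()  # the "prediction"
    patrols = (1000 * patrol_share).astype(int)               # resources follow it
    # Arrests depend on where patrols are sent, not only on offences committed
    recorded_arrests += rng.binomial(patrols, true_offence_rate)
    print(f"year {year}: patrol share = {np.round(patrol_share, 2)}")
```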

Addressing these inequities requires a fundamental shift in who builds and validates the systems. This involves not just diversifying the teams that create AI but also questioning the values and assumptions embedded in the technology (Cachat-Rosset & Klarsfeld, 2023). Positionality must be recognised as a critical factor in AI ethics, one that demands the inclusion of underrepresented perspectives at every stage – from data collection to algorithmic design and deployment.

References

Aguilar, N., Landau, A., Mathiyazhagan, S., Auyeung, A., Dillard, S., & Patton, D. (2023). Applying reflexivity to artificial intelligence for researching marginalized communities and real-world problems. Proceedings of the 56th Hawaii International Conference on System Sciences. https://hdl.handle.net/10125/102719

Bose, M. (2024). Bias in AI: A societal threat: A look beyond the tech. In R. Pandey, N. Srivastava, R. Prasad, J. Prasad, & M. B. Garcia (Eds.), Advances in Computational Intelligence and Robotics (pp. 197–224). IGI Global. https://doi.org/10.4018/979-8-3693-4326-5.ch009

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Conference on Fairness, Accountability and Transparency, 77–91. http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf

Cachat-Rosset, G., & Klarsfeld, A. (2023). Diversity, equity, and inclusion in artificial intelligence: An evaluation of guidelines. Applied Artificial Intelligence, 37(1), 2176618. https://doi.org/10.1080/08839514.2023.2176618

Cherepanova, V., Reich, S., Dooley, S., Souri, H., Dickerson, J., Goldblum, M., & Goldstein, T. (2023). A deep dive into dataset imbalance and bias in face identification. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 229–247. https://doi.org/10.1145/3600211.3604691

Ferrer, X., Nuenen, T. V., Such, J. M., Cote, M., & Criado, N. (2021). Bias and discrimination in AI: A cross-disciplinary perspective. IEEE Technology and Society Magazine, 40(2), 72–80. https://doi.org/10.1109/MTS.2021.3056293

Hagendorff, T. (2022). Blind spots in AI ethics. AI and Ethics, 2(4), 851–867. https://doi.org/10.1007/s43681-021-00122-8

Noriega Campero, A. (2019). Human and artificial intelligence in decision systems for social development [Doctoral dissertation, Massachusetts Institute of Technology]. https://www.media.mit.edu/publications/human-and-artificial-intelligence-in-decision-systems-for-social-development/

Ofosu-Asare, Y. (2024). Cognitive imperialism in artificial intelligence: Counteracting bias with indigenous epistemologies. AI & SOCIETY. https://doi.org/10.1007/s00146-024-02065-0

O’Neill, D. (2024). Complicated shadows: A discussion of positionality within educational research. Oxford Review of Education, 1–16. https://doi.org/10.1080/03054985.2024.2351445

Rauba, P., Seedat, N., Kacprzyk, K., & van der Schaar, M. (2024). Self-healing machine learning: A framework for autonomous adaptation in real-world environments (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2411.00186

Ravanera, C., & Kaplan, S. (2021). An equity lens on artificial intelligence. https://www.gendereconomy.org/wp-content/uploads/2021/09/An-Equity-Lens-on-Artificial-Intelligence-Public-Version-English-1.pdf#page=4.06

Ricaurte, P. (2022). Ethics for the majority world: AI and the question of violence at scale. Media, Culture & Society, 44(4), 726–745. https://doi.org/10.1177/01634437221099612

Roberts, P. (2018). Lethal artificial intelligence and autonomy [Conference report]. Royal United Services Institute for Defence and Security Studies. https://static.rusi.org/20181214_conference_report_lethal_ai_and_autonomy_web.pdf

Rose, G. (1997). Situating knowledges: Positionality, reflexivities and other tactics. Progress in Human Geography, 21(3), 305–320. https://doi.org/10.1191/030913297673302122

Rossbach, N. (2023). Innocent until predicted guilty: How premature predictive policing can lead to a self-fulfilling prophecy of juvenile delinquency. Florida Law Review, 75, 167.

Rupavatharam, S., Patil, M., Gogumalla, P., & Jat, M. L. (2023). Digital technologies and applications for agricultural transformation. National Symposium on Digital Farming: The Future of Indian Agriculture, 20–29. https://www.agrophysics.in/pdf/Souvenir%20Bhopal%20(6).pdf#page=29

Shams, R. A., Zowghi, D., & Bano, M. (2023). AI and the quest for diversity and inclusion: A systematic literature review. AI and Ethics. https://doi.org/10.1007/s43681-023-00362-w