“But what is the alternative?!” - The impact of generative AI on academic knowledge production in times of science under pressure
Generative AI in research
Generative AI (genAI) is increasingly influencing academic knowledge production. While estimates of genAI’s current use in academic research vary (Desaire et al., 2024; Strzelecki, 2025), rapid technological advancements (e.g., OpenAI’s deep research) and growing adoption suggest it will significantly shape the field. Indeed, surveys among researchers (Al-Zahrani, 2024; Andersen et al., 2025) and commentaries by scholars (Huang & Tan, 2023) have voiced optimism about the role of genAI in academic research, for instance in increasing efficiency or improving the quality of research. Nevertheless, the use of genAI in research also raises a number of ethical challenges (Breuer, 2023; Selwyn, 2022). One key issue is transparency. Several high-impact publications have been flagged for including AI-generated content without disclosing its use, highlighting a lack of clear standards for acknowledging genAI in scholarly work (Strzelecki, 2025). Moreover, genAI systems perpetuate societal biases (Bender et al., 2021), provide inaccurate information, lack contextual understanding, and raise privacy concerns (Slattery et al., 2024; Zeng et al., 2024). With Big Tech companies politically aligning themselves with the far-right US government under Donald Trump, ethical questions regarding the use of genAI for (impartial) knowledge generation become even more pressing. As genAI tools become more embedded in research workflows, the academic community must urgently address these issues and develop robust guidelines for transparent and responsible AI use. It is therefore necessary to foreground a prospective vision: How could genAI influence academic work in the near future? What consequences can be drawn from future scenarios to bolster responsible use?
Scenarios on the future of academic knowledge production
To inform our perspective, we draw on insights from a workshop organized by the CAIS Research Innovation Hub in December 2024 as part of an internal series on future research methods. The workshop explored potential future paths and impacts of genAI in scientific research. Eight scholars from diverse cultural (e.g., Germany, Russia, Brazil) and disciplinary (e.g., computer science, sociology, psychology) backgrounds, and at varying career levels (doctoral, postdoctoral, professorial), participated. We applied Scenario-Based Sociotechnical Envisioning (SSE), a method developed to assess technology impacts with a focus on their socio-technical contexts (Kieslich et al., 2025a).
The scenarios elaborated as part of the workshop reveal key themes that we build on in the following sections. The scenarios describe potential developments resulting from an intensified use of genAI in academic research, thereby projecting current considerations into the near future. Drawing on these narratives, partly illustrated by original quotes from the written scenarios, we outline below what, in our view, deserves particular attention when assessing the role of genAI in academic research.
Negative AI impacts as a symptom of structural issues
In the scenarios, the use of genAI emerges primarily as a symptom of a larger structural issue: the state of the academic system. Academic scholars experience immense pressure due to obligations such as teaching, applying for funding, and writing research papers (not to mention maintaining a work-life balance), regardless of their career level or background. In these scenarios, genAI is described as a shortcut to help navigate the workload. But this comes at a cost, whether it is the production of inaccurate (and sometimes undetected) results or an atmosphere of mistrust between students and faculty. This points to the need to reduce the pressure on academic scholars. It may even imply a change in academic reward systems, such as greater availability of tenure-track positions at universities.
Efficiency at a cost: Synthetic data and the risk of misrepresentation
The economic incentives of time and cost savings suggest that the use of synthetic data will likely become a pressing issue, warranting robust ethical discussions within the research community. Some of the scenarios described rather basic genAI applications such as translation tasks, which were generally not associated with significant ethical concerns. Other scenarios, however, raised more complex moral questions. For instance, some described researchers creating artificial subjects (synthetic data) to simulate research findings, as one scenario exemplifies: “he made prompts describing different people from marginalized groups, which, according to him, makes his research more inclusive. Apparently it doesn’t matter to him or the reviewers that these people do not exist.” (Scenario 5). While empirical work on such simulations is already being conducted (Argyle et al., 2022; Park et al., 2023), the scenarios warn of potential negative consequences (e.g., “The data she ‘gathers’ this way, is not only not grounded in reality, the AI hallucinates and provides a far better picture of the post conflict rehabilitation of Tamil minorities than it actually exists on the ground.” (Scenario 3)). This issue is further exacerbated by the inherent biases in the data sets underlying genAI models, which reflect and amplify the systemic misrepresentation of certain perspectives (Bender et al., 2021; Hovy & Prabhumoye, 2021), compounding the ethical concerns around synthetic data. We therefore advocate for a deeper and more critical discussion of the ethics of synthetic data use.
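To make concrete the practice the scenario describes, the following minimal Python sketch illustrates how “synthetic respondents” are typically prompted from a large language model. The personas, survey question, and model name are hypothetical, and we assume the openai client library purely for illustration; this is a sketch of the pattern, not an endorsement or a recipe.

```python
# Illustrative sketch only: a hypothetical example of how "synthetic
# respondents" are prompted from an LLM. Personas, question, and model
# name are assumptions for illustration, not recommendations.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

personas = [
    "a 34-year-old nurse from a rural area",
    "a 21-year-old first-generation university student",
]
question = "How has remote work affected your sense of community?"

for persona in personas:
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": f"Answer the survey question as {persona}."},
            {"role": "user", "content": question},
        ],
    )
    # Each reply is generated text, not data from a real participant:
    # it inherits the model's training-data biases and can misrepresent
    # the group it claims to speak for.
    print(persona, "->", response.choices[0].message.content)
```

The triviality of such a pipeline underlines our point: the technical barrier to fabricating “inclusive” data is so low that ethical scrutiny cannot be delegated to the tools themselves.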
Researchers must be supported in trade-off decisions
GenAI forces researchers to make value-laden trade-offs. For instance, one scenario describes: “She was surprised to learn how many academic projects argued that AI is a great solution for many issues concerning migration. Yes, sure, in order to get funding you better have AI as a part of your proposal or topic, in any field. But it is amazing that researchers take genAI so uncritically and prioritise it over real, and not superficial, solutions, for social problems.” (Scenario 5). This projects a troubling trend: a system where reliance on generative AI becomes not just acceptable but expected (Andersen et al., 2025), disadvantaging researchers who question its use or refuse to rely on such tools.
Another scenario captures the struggle researchers face when weighing efficiency gains against the risks of faulty AI systems: “But what is the alternative? Work on a voluntary basis? Spend your own savings on research? Or go the easy way and put large parts of your research into the hands of an AI that is likely to hallucinate at some point?” (Scenario 4). This is an even more pressing issue given the recent push by US-based Big Tech companies to disband their ethics programmes. The introduction of so-called “free-speech” or “uncensoring” approaches will likely lead to an increased dissemination of fake news and a lowering of ethical standards. Without rigorous ethical safeguards, the integrity of genAI systems remains questionable, making it all the more critical for academia to approach these tools with caution.
We argue that researchers need support in navigating these trade-offs. One way to address this is for academic institutions to engage more intensively in self-regulation and (mandatory) ethics commitments (for an overview of existing guidelines, see e.g. Luo, 2024). While there is some acknowledgement of these questions in selected fields (e.g., computer science), we argue for an even broader discussion of these issues. We further strongly advocate for a robust human element in the research process; researchers, journals, and conferences must take a clear stance on the use of genAI within their respective domains. Transparent guidelines and ethical commitments are essential to safeguarding the integrity of academic research. Without such measures, we risk eroding not only trust in academic outputs but also the values that underpin the research community itself.
Concluding remarks
The scenarios show clearly: the use of genAI is not a solution, but a symptom of an overburdened and crisis-ridden academic system. The immense pressure on scientists might force them into a dangerous dependency on AI-generated content, with drastic consequences: flawed research, a loss of trust in academic results, and a lack of representation. Instead of relying on AI-supported efficiency as a way to escape intense work pressure, a fundamental reform of scientific culture is needed. Without clear ethical guidance and a rejection of short-term efficiency logic, genAI threatens to deform academic research. The choice is clear: either we strongly advocate for transparency, scientific integrity, and reforms of academic working conditions (e.g., more permanent contracts, less publication pressure), or we allow AI-driven automation to undermine the core of academic excellence.
Acknowledgements
We thank the participants of the workshop for their engaging discussions and input. In the workshop, we provided participants with a description of genAI, outlining its capabilities and limitations, and tasked them with writing a short scenario (approx. 300 words) about the potential impact of genAI on academic knowledge production five years from now. All materials and scenarios are available on the project's Open Science Framework (OSF) page: https://osf.io/8sdgh/. We followed the SSE guidebook (Kieslich et al., 2025b) to analyse the scenarios, using thematic analysis and excerpting to identify structures and themes (Glaser & Strauss, 2017).
The workshop was facilitated by the Center for Advanced Internet Studies (CAIS) in Bochum. Kimon Kieslich is funded by Underwriters Laboratories (UL) Research Institutes through the Center for Advancing Safety of Machine Intelligence (CASMI).
Both authors contributed equally to this op-ed.
References
Al-Zahrani, A. M. (2024). The impact of generative AI tools on researchers and research: Implications for academia in higher education. Innovations in Education and Teaching International, 61(5), 1029–1043. https://doi.org/10.1080/14703297.2023.2271445
Andersen, J. P., Degn, L., Fishberg, R., Graversen, E. K., Horbach, S. P. J. M., Schmidt, E. K., Schneider, J. W., & Sørensen, M. P. (2025). Generative Artificial Intelligence (GenAI) in the research process – A survey of researchers’ practices and perceptions. Technology in Society, 81, 102813. https://doi.org/10.1016/j.techsoc.2025.102813
Argyle, L. P., Busby, E. C., Fulda, N., Gubler, J., Rytting, C., & Wingate, D. (2022). Out of One, Many: Using Language Models to Simulate Human Samples. https://doi.org/10.48550/arXiv.2209.06899
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922
Breuer, J. (2023). Putting the AI into social science: How artificial intelligence tools are changing and challenging research in the social sciences. In A. Sudmann, A. Echterhölter, M. Ramsauer, F. Retkowski, J. Schröter, & A. Waibel (Eds.), Beyond Quantity (pp. 255–274). transcript Verlag. https://doi.org/10.1515/9783839467664-014
Desaire, H., Isom, M., & Hua, D. (2024). Almost Nobody Is Using ChatGPT to Write Academic Science Papers (Yet). Big Data and Cognitive Computing, 8(10), Article 10. https://doi.org/10.3390/bdcc8100133
Glaser, B., & Strauss, A. (2017). Discovery of grounded theory: Strategies for qualitative research. Routledge.
Hovy, D., & Prabhumoye, S. (2021). Five sources of bias in natural language processing. Language and Linguistics Compass, 15(8), e12432. https://doi.org/10.1111/lnc3.12432
Huang, J., & Tan, M. (2023). The role of ChatGPT in scientific communication: Writing better scientific review articles. American Journal of Cancer Research, 13(4), 1148–1154.
Kieslich, K., Helberger, N., & Diakopoulos, N. (2025a). Scenario-Based Sociotechnical Envisioning (SSE): An Approach to Enhance Systemic Risk Assessments. https://doi.org/10.31235/osf.io/ertsj_v1
Kieslich, K., Helberger, N., & Diakopoulos, N. (2025b). Scenario-based Sociotechnical Envisioning (SSE): The Guidebook. https://doi.org/10.31235/osf.io/j5ske_v1
Luo, J. (2024). A critical review of GenAI policies in higher education assessment: A call to reconsider the “originality” of students’ work. Assessment & Evaluation in Higher Education, 49(5), 651–664. https://doi.org/10.1080/02602938.2024.2309963
Park, J. S., O’Brien, J., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023). Generative Agents: Interactive Simulacra of Human Behavior. Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, 1–22. https://doi.org/10.1145/3586183.3606763
Selwyn, N. (2022). The future of AI and education: Some cautionary notes. European Journal of Education, 57(4), 620–631. https://doi.org/10.1111/ejed.12532
Slattery, P., Saeri, A. K., Grundy, E. A. C., Graham, J., Noetel, M., Uuk, R., Dao, J., Pour, S., Casper, S., & Thompson, N. (2024). The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks From Artificial Intelligence. https://doi.org/10.13140/RG.2.2.28850.00968
Strzelecki, A. (2025). ‘As of my last knowledge update’: How is content generated by ChatGPT infiltrating scientific papers published in premier journals? Learned Publishing, 38(1), e1650. https://doi.org/10.1002/leap.1650
Zeng, Y., Klyman, K., Zhou, A., Yang, Y., Pan, M., Jia, R., Song, D., Liang, P., & Li, B. (2024). AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2406.17864