Avoiding the kitchen sink: A guide to mixed methods approaches within digital rights governance

Gabrielle Lim, Political Science, University of Toronto, Canada
Noura Aljizawi, University of Toronto, Canada
Shaila Baran, University of Toronto, Canada
Nicola Lawford, MIT, United States of America

PUBLISHED ON: 18 Nov 2025 DOI: 10.14763/2025.4.2044

Abstract

Digital rights governance is a multidisciplinary field requiring interdisciplinary perspectives and methodological approaches. Yet, relatively little has been written about when to combine research methods and why. In this article, we present a scoping review of five recent volumes of Internet Policy Review (vols. 10–14), analysing 141 articles to assess the prevalence of interdisciplinary scholarship and its relationship to single-, multi-, and mixed methods research. We find that two-thirds of articles draw from multiple disciplines, and interdisciplinary work is more likely than single-discipline work to employ more than one method (62% vs. 36%). However, disciplinary combinations remain uneven, with legal and policy studies dominating and multi-quantitative methods underrepresented. These findings highlight both the promise and limitations of interdisciplinarity in digital rights governance. We argue that greater reflexivity is needed regarding which disciplines and methods are combined, why certain approaches prevail, and how mixed or multi-method designs can better support collaboration. When thoughtfully applied, multi- and mixed methods can deepen analyses and provide a more holistic understanding, but indiscriminately combining methods without clear justification—the so-called ‘kitchen sink’ approach—risks overcomplicating research agendas and diluting their insights. In doing so, this paper contributes to ongoing debates about the methodological and normative foundations of interdisciplinary internet policy research.

Citation & publishing information
Published: November 18, 2025
Licence: Creative Commons Attribution 3.0 Germany
Funding: The authors did not receive any funding for this research.
Competing interests: The authors have declared that no competing interests exist that have influenced the text.
Keywords: Mixed-methods, Multimethod research, Digital rights, Scoping review, Interdisciplinarity
Citation: Lim, G., Aljizawi, N., Baran, S., & Lawford, N. (2025). Avoiding the kitchen sink: A guide to mixed methods approaches within digital rights governance. Internet Policy Review, 14(4). https://doi.org/10.14763/2025.4.2044

This paper is part of The craft of interdisciplinary research and methods in public interest cybersecurity, privacy, and digital rights governance, a special issue of Internet Policy Review, guest-edited by Adam Molnar, Diarmaid Harkin, and Urs Hengartner.

Introduction

Digital rights governance is a broad and diverse field that intersects with a wide spectrum of individuals, communities, and interests. It is little wonder, then, that research and policymaking around digital rights necessitate an interdisciplinary approach. Similarly, the diversity and breadth of digital rights governance and its sociotechnical nature also mean that different methods may be needed, as different ways of knowing may lead to other findings. Yet this relationship — between disciplines and methods — remains relatively understudied, especially as it relates to contemporary scholarship on digital rights governance and internet policy research more broadly.

As such, this paper tries to answer three questions. First, how common is interdisciplinary scholarship in the study of digital rights governance and internet policy, and what is its relationship to mixed or multi-method research? Second, how can interdisciplinary work be further supported through mixed or multi-method research? And third, when is it appropriate to apply these methods, and what are the limitations or challenges in doing so?

As a first approximation, we conducted a scoping review of the most recent five volumes of Internet Policy Review (volumes 10 to 14), coding each research article (n = 141) by the fields or disciplines it draws from and the methods used.1 We chose to focus on this journal for several reasons: 1) it is explicit about being an interdisciplinary journal and in general does not publish pure single-discipline articles; 2) it solicits peer reviews for papers from at least two disciplines; and 3) it is an open access journal (Internet Policy Review About Page, 2025). Our final data set includes the article link, its volume and issue, whether it is interdisciplinary (Y/N), and its methodological approach (single qualitative, single quantitative, multi-qualitative, multi-quantitative, mixed methods).

Our findings indicate that the majority of articles are interdisciplinary, which is consistent with the nature of the topic and the journal’s focus: 65% of the articles in our data set drew from two or more disciplines/fields. Interestingly, interdisciplinary articles were more likely than single-discipline articles to use more than one method, at 62% vs 36% respectively. While we cannot ascertain the direction of causality, there does appear, at first glance, to be a relationship between interdisciplinary work and mixed/multi-method research designs. Overall, these numbers bode well for calls to engage in interdisciplinary research and methods.

Yet, a deeper look at which disciplines and methods are being “combined” also shows that the chosen methods and disciplines of researchers published in Internet Policy Review are not evenly distributed. Most of the interdisciplinary articles, for example, primarily drew from legal studies and policy studies, making legal analysis, policy analysis, and conceptual analysis the most common methods employed. When looking at mixed methods (i.e. employing both a quantitative and a qualitative method), there was little difference between single-discipline and interdisciplinary articles (16% vs 19% respectively). Quantitative, technical, or computational methods were also far less represented in general (6%), whether in single-discipline or interdisciplinary articles. This is unsurprising given that many of the authors come from a legal or policy background.

These findings raise questions about the normative and practical demands underlying calls for interdisciplinary research and methods. Which disciplines, and why? Who should be collaborating with whom? To clarify, this paper does not contribute to the historical or theoretical debate concerning epistemology and methodologies across various disciplines. Rather, our scoping review provides a broad overview of the methodologies employed in an interdisciplinary journal, discussing the trends, limitations, and gaps in the methodological and disciplinary approaches to contemporary scholarship on internet policy and digital rights governance (e.g. digital harms, privacy, and policy). We also concede that our analysis is limited, studying a single journal over just five years. However, we hope that this review will contribute to more reflexive analyses of interdisciplinary internet research.

The remainder of this paper will thus examine the types of mixed methodologies used (i.e., multi-qualitative, multi-quantitative, mixed methods), their relationship with interdisciplinary scholarship, the strengths and weaknesses of each approach, and potential ways to enhance collaboration. We focus on multi- and mixed methods because of their potential to support and enhance interdisciplinary work and because they comprise a majority of our data set. Lastly, we close with the constraints and challenges of conducting research that employs multiple methods and disciplines.

A note on definitions

Throughout this paper, we reference a variety of methodological approaches. Qualitative methods refer to research that seeks to explain, explore, inquire, interpret, or describe social phenomena using non-numerical data (Huffman, 2023, p. 5). It takes into account a “variety of empirical materials—case study, personal experience, introspection, life story, interview, artifacts, and cultural texts and productions, along with observational, historical, interactional, and visual texts” (Denzin et al., 2023, p. 95). Quantitative methods instead rely on measuring variables using numerical data to understand or describe phenomena, such as surveys, experiments, statistical analysis, or quantitative content analysis (Scharrer and Ramasubramanian, 2021, pp. 4–5). Mixed methods refers to research designs that employ both a quantitative and a qualitative approach, whereas multimethod refers to a combination of multiple qualitative or multiple quantitative methods (i.e. multi-qualitative or multi-quantitative) (Mik-Meyer, 2020). In contrast, a single method refers to a research design that employs just one method, be it qualitative or quantitative. Note that none of these approaches, whether qualitative, quantitative, or mixed/multimethod, implies any specific type of epistemology, such as positivism or interpretivism (Alharahsheh and Pius, 2020).

Technical analysis refers to the process of examining and evaluating the technical aspects of websites, applications, online systems, or digital platforms. It is not to be confused with financial technical analysis (which studies price charts). Instead, it focuses on how online systems (both hardware and software) are structured, how they function, and their vulnerabilities or strengths. Computational methods generally refer to algorithmic, mathematical, or data-driven techniques used to analyse or understand behaviour and content. These may include web scraping, topic modelling, or network analysis (Rath et al., 2019; Abrahams, 2026).

Lastly, interdisciplinary research refers to scholarship that draws from, contributes to, is in conversation with, or is grounded in two or more disciplines, fields of study, or bodies of knowledge (Frodeman, 2010). This is not a neat black-and-white classification, however, and one we grappled with when conducting our review of the articles in Internet Policy Review. Some disciplines or fields tend to be fairly interdisciplinary to begin with (e.g. science and technology studies), whereas some articles were primarily in conversation with just one discipline yet still cited other fields (e.g. Kollnig et al., 2021). In these cases, where is the dividing line between single-discipline and interdisciplinary work? For the purpose of our analysis, we deferred to the fields or disciplines the authors said they were grounding their research in, or analysed where the bulk of their citations were coming from. However, we acknowledge that the term “interdisciplinary” is not clear-cut.

Interdisciplinary research in practice: Methodology and positional reflexivity

Our research team consists of quantitative and qualitative scholars from various disciplines, including engineering, public policy, criminology, political science, and human rights. As female scholars from historically marginalised groups, our experiences influence how we navigate academic research and the publication process. While these experiences provide us with unique perspectives, we also recognise that they can inform how we view the contributions of scholars from dominant disciplines, discourses, and positions of relative privilege.

We are also all affiliated with Canadian and American universities, and we are fortunate to have access to the space and funding to collaborate effectively. Many of us work in labs or academic settings that encourage interdisciplinary research. For instance, two of us are affiliated with the Citizen Lab at the University of Toronto, which has a long-established history of interdisciplinary, mixed methods work. This background informs our beliefs about the viability, accessibility, and value of mixed methods and interdisciplinary work. However, we also acknowledge the challenges of executing this type of research in practice, which we address in the final section of this paper.

Lastly, we want to recognise that within academic circles, there is often an assumption about a scholar’s epistemology and ontology based on whether they identify as a qualitative or quantitative researcher. Qualitative scholars are sometimes stereotyped as being more subjective and interpretivist, while quantitative researchers may be viewed as objective and positivist (Johnson and Gray, 2010). These are often false assumptions, as methodology does not automatically equate to a specific epistemology. Many qualitative scholars seek generalisability and causality, while quantitative scholars are often quite critical of the classification, directionality, and interpretation of numerical data. Despite our own interdisciplinary and mixed methods backgrounds, however, we admit that we, too, can fall for these stereotypes, even as we strive to challenge them. This recognition highlights the potential bias in our preferences for interdisciplinary and mixed methods work, as well as our criticisms of single-discipline or single-method studies.

Combining methods - Approaches, advantages, and tradeoffs

The study of digital rights governance, cybersecurity, and internet policy has increasingly embraced interdisciplinary approaches, as this special issue highlights, ranging from the emergence of science and technology studies in the 1960s to contemporary computational social science (Cutcliffe, 1989; Edelmann et al., 2020). However, as our scoping review indicates, there are still opportunities to enhance collaboration in these fields.

Tables 1 to 4 below summarise the findings of our scoping review. As noted in our introduction, while most articles were interdisciplinary, significantly fewer employed truly mixed methods that combine both quantitative and qualitative approaches; the majority utilised multiple qualitative methods. Furthermore, when articles were interdisciplinary, they typically involved a combination of policy or legal analysis alongside other fields such as communication studies, political science, sociology, or science and technology studies (see Table 3 for most common disciplines and fields referenced).

Regarding methodology, the most frequently used methods were conceptual or thematic analyses, which offer novel typologies, frameworks, or theoretical concepts for further research, such as Juan Ortiz Freuler’s work on state strategies for internet control (2025), and legal analysis, such as Gstrein, Haleem, and Zwitter’s paper on the EU AI Act (2024). This was followed by policy analyses, interviews, and content analysis (see Table 4). Yet quantitative, computational, or technical analyses were less represented in the articles reviewed.

Table 1: Breakdown of interdisciplinary and single discipline articles by methodological approach
  Total Mixed methods Multi qual Multi quant Single qual Single quant
Interdisciplinary 65% (91) 19% (17) 40% (36) 3% (3) 36% (33) 2% (2)
Single discipline 35% (50) 16% (8) 16% (8) 4% (2) 61% (31) 2% (1)


Table 2: Breakdown of methodological approach used across all articles
Method Count Percentage
Single qual 64 45%
Single quant 3 2%
Multi qual 44 31%
Multi quant 5 4%
Mixed methods 25 18%


Table 3: Breakdown of disciplines the articles are drawing from
Discipline Count Percentage
Legal studies 53 37.59%
Policy studies 40 28.37%
Communication 40 28.37%
Political science 30 21.28%
Sociology 20 14.18%
Science and technology studies 12 8.51%
Information studies 11 7.80%
Computer science 10 7.09%
Economics 7 4.96%
Platform governance 5 3.55%

Table 4: Breakdown of most common methods used
Method Count Percentage
Conceptual or thematic analysis 42 29.79%
Legal analysis 42 29.79%
Policy analysis 26 18.44%
Interviews 21 14.89%
Content analysis 15 10.64%
Case studies 15 10.64%
Document analysis 10 7.09%
Surveys 9 6.38%
Discourse analysis 8 5.67%

Interestingly, studies that are interdisciplinary are more likely to utilise multiple or mixed methods. For example, Carah et al.’s study on hyper-targeted advertising employed a combination of conceptual and theoretical analysis, a case study, and computational methods for data collection and analysis, as well as citizen-science data donation methods (Carah et al., 2024). However, these interdisciplinary papers often include several authors, highlighting the challenges and resource demands of conducting such research. The following sections will explore the potential opportunities and challenges associated with multi- and mixed methods research within the context of interdisciplinary scholarship.

Multi-quantitative approaches

Quantitative research investigates the relationships between variables using numerical data and statistical measures. Biases are tested deductively; confounding variables are accounted for; generalisability and replicability are paramount (Creswell and Creswell, 2017, pp. 1-26). Combining multiple quantitative methods allows researchers to move from statistical reporting to multidimensional storytelling, including studies engaging in new forms of causal and intersectional analysis (Creswell and Creswell, 2017).

In our review, single quantitative studies were the least represented, making up 2% of articles across the last five volumes. These studies primarily employed survey methods and drew on one to two disciplines in their analysis (Rodríguez-Modroño et al., 2022; Eichhorn et al., 2022; Heidary et al., 2024). Multi-quantitative studies, meanwhile, made up 4%. These papers were multidisciplinary and employed computational and technical methods, such as market analysis, metadata analysis, and statistical analysis, to achieve their research goals (Seipp et al., 2024; Mehta and Erickson, 2022; Whittaker et al., 2021). These multi-method designs used the additional methods to further inform the research, drawing on disciplines such as business, legal studies, and economics.

In digital rights governance, multi-quantitative studies contextualise and augment findings within a rapidly changing digital landscape. For example, Kollnig et al. (2021) scanned nearly two million apps for the presence of digital trackers before and after the GDPR, and complemented this with a market analysis of the market shares of tracking companies and their parent companies, as well as the popularity of apps among developers and users. This combination of methods yielded insights into the role of trackers in the concentration of power in the information economy, along with the (ir)relevance of digital privacy legislation, that could not have been achieved using a single method.

The context and nuance provided by combining quantitative methods can also be used to critique prevailing narratives around digital technologies and social contexts, such as misinformation and COVID-19 (Cipers et al., 2023). In another example of multi-quantitative research, Abrahams and Aljizawi (2020) critically studied the presence of coordinated inauthentic ‘bot’ accounts and COVID-19-related misinformation on Middle East Twitter. The authors collected a data set of accounts tweeting about 149 pandemic-related search terms, and triangulated various metrics from the relevant literature to study their ‘authenticity’, including anomalous account birth, account suspension and deletion, and the edge-to-node ratio in retweet networks. In contrast to prevailing narratives of an Internet rife with disinformation, the authors found low rates of inauthenticity and little evidence of misinformation; even among ‘inauthentic’ networks, analysis of randomly sampled accounts suggested authentic campaigning, such as a group of high school students advocating for the cancellation of exams (Abrahams and Aljizawi, 2020). The authors also rightly draw attention to the limitations of quantitative research: while they report statistics on the number of accounts showing inauthentic behaviour, the ground-truth authenticity of accounts cannot be verified, and there is a risk that these statistics will be overvalued or interpreted as objective truth despite their basis in subjective metrics and definitions.
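To make one of these metrics concrete, the edge-to-node ratio is straightforward to compute once retweets are represented as a directed graph. The sketch below is purely illustrative and not the pipeline used in the study cited above; the account names and toy data are hypothetical, and only Python’s built-in types are used.

```python
# Illustrative sketch (hypothetical data, not the cited study's actual
# pipeline): computing the edge-to-node ratio of a retweet network, one
# of the 'authenticity' metrics described above.

def edge_to_node_ratio(retweets):
    """retweets: list of (retweeter, original_author) pairs."""
    edges = set(retweets)                               # unique directed retweet edges
    nodes = {user for edge in edges for user in edge}   # every account observed
    return len(edges) / len(nodes)

# Toy network: three accounts retweeting the same source, plus one
# cross-retweet. Denser, more repetitive amplification pushes the ratio
# higher, which can be consistent with coordinated behaviour.
sample = [
    ("user_a", "user_b"),
    ("user_c", "user_b"),
    ("user_d", "user_b"),
    ("user_a", "user_c"),
]
print(edge_to_node_ratio(sample))  # 4 edges / 4 nodes = 1.0
```

As the surrounding discussion stresses, a high ratio is only a signal rather than ground truth: its interpretation still rests on subjective definitions of coordination and authenticity.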

It should therefore be stressed that multi-quantitative studies are not value-neutral or as objective as they may appear. D’Ignazio and Klein’s Data feminism (2020) demonstrates that quantitative data can communicate knowledge that is far from objective, and that objectivity is often not even the goal (pp. 73-96). The authors give examples of data activism and data storytelling, such as Periscopic’s (2018) visualisation of US gun deaths as years of stolen life and Martinez and Kirchner’s (2021) creative visualisation humanising those affected by racial bias in mortgage eligibility algorithms. Similarly, in a study that employed content and metadata analysis of targeted advertising on Facebook during Brexit, Mehta and Erickson (2022) acknowledge that their reliance on Facebook’s Ad Library introduced limitations owing to how the data was aggregated, ultimately raising concerns about the transparency the Ad Library was supposed to achieve.

In summary, the lack of quantitative and multi-quantitative methods in our scoping review presents a rich and underexplored opportunity for the field. With the advent of OSINT, computational social science, and other big data methods, quantitative scholars could contribute a very different perspective to a field that tends towards qualitative research. Noticeable in our scoping review was the lack of economists and behavioural scientists, as well as very few occurrences of statistical analyses and survey experiments. However, care should be taken with how numerical findings are interpreted and with the ontology on which they are based. To mitigate potential biases or overgeneralisation, quantitative scholars could seek collaborations across disciplines or with qualitative scholars to achieve a mixed methods approach.

Multi-qualitative approaches

Multi-qualitative methods combine two or more qualitative methods, such as interviews, case studies, focus groups, digital ethnographies, content analysis, and open-ended questionnaires (Creswell, 2017). In our scoping review, 77% of journal articles published in Internet Policy Review employed single or multi-qualitative methods, including legal analysis, policy analysis, conceptual or thematic analysis, and content analysis (Colangelo and Khandelwal, 2025; Kira, 2025; Gray, 2021). This is not surprising, given that multi-qualitative approaches have a history of interdisciplinary success in bridging technological and social science perspectives. Convergent approaches are frequently used within the field of digital rights governance as researchers continue to examine individuals’ or groups’ intersectional and interpersonal relationships with technologies (Gibbs and Hall, 2021).

Pairing a policy or legal analysis with a qualitative method like interviews, ethnography, or focus groups was particularly effective, bringing methodological depth to studies’ findings and increasing the transparency, validity, credibility, or reliability of the research (Ahmed, 2024; Gibbs and Hall, 2021). For instance, Wang and Sandner (2019) conducted a digital ethnography and interviewed 25 adult women in rural southern China who shared aspects of their everyday lives on WeChat (pp. 324-339). By combining interviews with digital ethnography, the authors provided insight into the complex socio-political landscape and patriarchal elements of China and its cultural norms (Wang and Sandner, 2019, p. 334). Obtaining such rich analysis and thick description may not otherwise have been possible. In our scoping review, however, less than 5% of studies employed ethnography, even though digital ethnographies, which adapt traditional ethnographic methods to online communities and environments, offer a non-intrusive way to collect authentic responses and real-time data while increasing accessibility (Forberg and Schilt, 2023; Gibbs and Hall, 2021, p. 284).

That said, researchers may purposely select a single qualitative method for their study, such as interviews, to gain an in-depth understanding of their topic, enabling them to explore complex phenomena through narrative, attitudes, and beliefs rather than focusing on causal relationships (Price, Jhangiani, and Chiang, 2016). Furthermore, conducting interviews may reveal the ideologies embedded in participant stories and the broader culture that creates those narratives (Rodriguez, 2016). In our scoping review, interviews appeared in 14.89% of the coded articles from the past five volumes of Internet Policy Review (e.g., Perarnaud, 2021; Rudnik, 2024).

Other single qualitative studies in our scoping review instead employed conceptual or thematic analyses that focused on distinguishing terms and addressing conceptual or theoretical gaps in multidisciplinary fields, as opposed to using methods that measure, identify, or quantify the effect of a particular phenomenon (Benjamin, 2021; Zygmuntowski, Zoboli, and Nemitz, 2021). For example, Schneider (2022) draws on feminist insight and experiences specific to fostering governing spaces on or around digital networks. By applying a feminist lens, the author critiques the need for transparency and accountability among social media companies using several concepts, including the online economy, social media companies, platform-mediated work, and network infrastructure (pp. 12-13).

Yet, despite the comprehensive and descriptive data obtained from interviews, or the analysis that comes with creating new typologies, frameworks, or theoretical concepts, these single-method studies have their shortcomings. For example, unless future researchers build on or test the theories generated by these single-method studies, the studies may remain siloed or struggle to generalise their findings, if generalisation is a goal. Similarly, there may be issues in the applicability of their data given a limited scope, as well as challenges in measuring the effects or scale of a social problem using a single method (Polit and Beck, 2010). One way to mitigate these potential drawbacks is to collaborate across disciplines and methodologies, or at the very least, to put the work in conversation with scholarship that may come from different epistemological or ontological underpinnings. For example, in our scoping review, there was a noticeable lack of contributions from health policy experts and anthropologists, and a limited amount of mixed methods research (i.e. quantitative and qualitative methods combined). Scholars with expertise in conceptual analysis and thick description could collaborate with researchers from these other fields and methodological backgrounds to increase applicability and generalisability.

However, we also acknowledge that limitations persist in employing multi-qualitative methods, and scholars must make tradeoffs when designing their research. While interviews or focus groups with affected populations would complement any mixed or multi-method research programme, these methods can be difficult to execute in practice due to scheduling setbacks, limited anonymity when working with vulnerable populations, and cultural or language barriers that may lead to misunderstandings or misinterpretation (Vredeveldt, Given-Wilson, and Memon, 2023). Meanwhile, qualitative methods that rely on contextual relevance, such as content analysis, case studies, or digital ethnographies, may limit generalisability, as they often involve smaller, specific samples that do not represent broader populations and can limit the scope and applicability of findings (Lim, 2024). Lastly, researchers interested in studying larger samples may consider converging information from multiple sources and methods as part of triangulation or, if the study is appropriate, may use a mixed methods approach that addresses the limitations of qualitative research for more robust findings (Lim, 2024; Carter, 2014).

Mixed quant-qual approaches (i.e. mixed methods)

Mixed methods research combines quantitative and qualitative methodologies (Creswell and Creswell, 2017) to enhance the depth, validity, and applicability of a study’s findings. Historically, research was often divided along quantitative and qualitative lines, with some scholars arguing that these methods were ontologically and epistemologically incompatible due to their different philosophical foundations (Erzberger and Kelle, 2003; Greene et al., 1989). However, other scholars have explored philosophical bases for mixed methods, such as dialectical pragmatism (Johnson and Gray, 2010; Biesta, 2010).

Studies exploring digital rights governance, including surveillance, censorship, disinformation, and online harassment, require an interdisciplinary approach (Deibert, 2020; Suryotrisongko and Musashi, 2019). These issues do not reside solely in either technical or social domains; they are hybrid and intersectional in nature. Mixed methods not only facilitate cross-paradigm research but also play a critical role in bridging different disciplines, opening the opportunity for computer scientists, social scientists, anthropologists, political scientists, and legal scholars to collaborate in one study. This methodological flexibility enables interdisciplinary teams to investigate a complex issue and integrate evidence from different points of view, including technical, forensic, and social, into a more comprehensive study. For instance, attributing targeted spyware attacks against journalists or activists often requires combining forensic analysis (e.g. detecting spyware traces) with qualitative interviews that explore the targets’ work, the context of their activism, and the motivations behind and impacts of the attacks (Amnesty International and Forbidden Stories, 2021). Similarly, quantitative network analysis may detect coordinated disinformation campaigns, but qualitative discourse analysis can illuminate how these narratives are framed and how they resonate with different communities (Pasquetto, Lim, and Bradshaw, 2024).

We should stress, however, that mixed methods research, like other multimethod research, is not simply a blend of two techniques; rather, it is a strategic approach meant to leverage the strengths of both methodologies while mitigating their respective weaknesses. As mixed methods research evolved, Ivankova et al. (2006) emphasised the need for integration at all research phases, including design, data collection, analysis, and reporting. Building on this foundation, Creswell and Creswell (2017) introduced three key models of mixed methods research design: explanatory (quantitative-led), exploratory (qualitative-led), and convergent (simultaneous and integrative). Choosing among these models is not arbitrary; the choice should depend on a number of factors, including the research question, the skills of the research team, access to data, financial resources, and the timeframe of the project (Klassen et al., 2012).

For example, a mixed methods framework can be used to triangulate (i.e. cross-validate findings) and strengthen an analysis. As Johnson and Gray (2010) underline, multiple methods can elucidate a problem from different dimensions. While quantitative methods can uncover scope and patterns, qualitative methods offer in-depth analysis (Creswell and Clark, 2011). In digital rights governance research, triangulation is powerful, especially when moving research beyond technical analysis, for example towards attribution, accountability, and informing public policy (Aljizawi, 2025). This was evident in our scoping review, where 28% of mixed methods studies fell under the discipline of policy studies and 28% under law or legal studies. Triangulation can also go beyond validating data by uncovering new insights or conflicts across methods, which may require further investigation (Flick, 2004; Erzberger and Kelle, 2003). For instance, when qualitative data contradicts quantitative data, this does not imply that the study is somehow invalid or that one set of data is incorrect or misleading. Rather, this tension should be seen as a way to reveal a deeper complexity that should be further explored. In this way, mixed methods can become dialectical, allowing methodological conversation and improvement across paradigms.

Notably, mixed methods also enable researchers to expand the types and scope of research questions they can ask. In cybersecurity research, mixed methods help researchers not only to investigate ‘what’ happened and ‘how frequently’, but also to ask ‘why’, ‘how’, ‘to whom’, or ‘under which conditions’. For instance, a technical study of spyware targeting human rights defenders and journalists may uncover the infrastructure of the malicious software and identify the exploits used, but interviews with targets can uncover the motives and context of an attack, the targets’ vulnerabilities, and the attack’s impacts (Aljizawi et al., 2024).

For all the advantages and potential benefits outlined above, only 17.73% of articles were coded as using mixed methods, which suggests that the field of digital rights governance has embraced mixed methods research to some degree, but also that there are opportunities for future, innovative research. Due to the relatively small sample size (n = 25), we are unable to generalise about which methods or combinations are most prominent in mixed methods research, which is a limitation of this study. However, our data does reveal the extent to which scholars draw from different disciplines to enhance their analysis, which is the basis of our study and a practice we encourage. True to the spirit of mixed methods research, these studies often brought out the best of combining quantitative and qualitative methods by carefully contextualising their findings and triangulating across different empirical sources while capturing complex social, cultural, or technical phenomena. For instance, our review highlighted a study by Ferret (2025), which explored the socio-demographic factors influencing the involvement of moderators and the production of content moderation norms within the French Twitch community. The study examined mechanisms specific to gender, politicisation, social class, and social vulnerability that shape content moderation norms, analysing their variation across different Twitch channels. This was achieved through a combination of statistical analysis, virtual ethnographies, interviews, and questionnaires, allowing for a comprehensive discussion from multiple methodological perspectives (Ferret, 2025). Another example of a mixed methods study using discourse analysis from our review is Han (2022), which combined a critical discourse analysis of popular videos on Douyin with a content analysis of platform boards of directors to explore the ‘platform-as-daddy’ narrative in relation to platform dominance and patriarchy. While we coded this as a single-discipline study within communication studies, it effectively engages other disciplines, including political economy and gender studies.

In summary, a mixed methods approach offers a strategic and flexible toolkit for navigating the multi-layered nature of digital rights governance. When used with clear purpose, theoretical coherence, and strategic care, it enables researchers to deepen their analysis and capture the multiple facets of complex digital phenomena. However, the promise of mixed methods can only be fulfilled if its epistemological, practical, and ethical challenges are acknowledged and carefully managed, as we discuss below.

Constraints and challenges

Although multi- and mixed methods approaches offer numerous advantages for studying digital rights governance, they also pose methodological, epistemological, and logistical challenges that require careful navigation. While these challenges are not unique to mixed methods studies, they are amplified by the need to combine and integrate multiple epistemological paradigms, technical skills, and practical knowledge of human and societal dynamics. Mixed methods research therefore requires careful planning and reflexivity. Otherwise, it can become overextended, under-resourced, or internally inconsistent, undermining the very insights it seeks to generate.

Resource constraints

Designing a mixed methods study is time-consuming and requires diverse expertise and resources. Almost all of the mixed methods studies in our review, for example, were authored by two or more people, with one co-authored by nine. However, having an interdisciplinary team is not enough to conduct a successful mixed methods study. For example, changes to platform policies affecting researchers’ access to APIs limit their ability to work with large-scale datasets: Meta’s and X’s restrictions on API access have made it more difficult for researchers to obtain data to study censorship or run a social network analysis. Reduced funding also makes it more difficult to bring diverse scholars together.

Institutional limitations

Practical limitations include limited institutional knowledge of how to conduct mixed methods research. Buchanan and Ess (2009) surveyed 750 US Institutional Review Boards (IRBs, also known as Research Ethics Boards or REBs) about internet research protocols and revealed substantial issues not adequately addressed in the review of proposed research, including anonymity and confidentiality, security, storage, and research design. These findings point to a broader institutional challenge: both applicants and Research Ethics Boards struggle to apply methodologies appropriately to internet research (Buchanan and Ess, 2009; Warfield et al., 2019). For example, Warfield and colleagues (2019) interviewed participants who conducted internet research involving images of bodies and found that there is a “general lack of knowledge about digital research and the different methods” (Warfield et al., 2019, p. 2074). The interviewed researchers reported that REBs continue to grapple with understandings of privacy, of what is public and private, and of how these manifest in online contexts (Warfield et al., 2019, p. 2074). Moreover, only a few universities and other institutions have protocols, mentors, or designated centres that can address questions about digital research or scholarship (Warfield et al., 2019). These findings suggest a need for institutions to implement dedicated centres and routine workshops for researchers conducting multi-method research on digital rights governance, including legal training for early-career researchers, workshops on ethics in these ambiguous environments (such as ongoing consent and reflexivity), and digital security workshops for conducting research in hostile digital environments.

Leadership and management

Beyond selecting members with an appropriate skillset, strong team leadership is required to define the roles and responsibilities of each team member (Brown et al., 2023). Researchers must ensure that the research questions are in harmony with team members’ biographies, their past, current, and potential methodological resources, and the wider policy and practical context (Bergman, 2008). Tension and disagreement are to be expected in any research team, but it is critical to exercise strong leadership early on to improve team diversity and curate a research project that gives an accurate account of the phenomenon in question, especially in mixed methods projects. Effective collaboration is key: team members should have a basic understanding of and respect for each other’s expertise, and work in an environment of consensus around research goals, epistemologies, clarity, and support, something not easy to achieve without strong leadership and effective communication between team members (Slade et al., 2023).

Analysis and interpretation

The challenges of mixed methods research do not end at the stage of collecting diverse forms of data; they extend to integrating and interpreting those data meaningfully. Quantitative and qualitative data are often based on different hypotheses and collected for different goals, which can lead to contradictions that require careful analysis. Quantitative and qualitative scholars may also have very different epistemological underpinnings, which can create tension when interpreting the data. Tension between the results of the two methods or two epistemologies should not be considered a failure. Instead, it can provide a new perspective and offer insights into complex socio-technical phenomena such as disinformation and surveillance.

However, integrating the data of both methods or epistemologies into a comprehensive analysis requires interpretive skills and robust competence in both methods (Oranga, 2025). Researchers must decide how to prioritise evidence when findings conflict, and at which phase of the research integration should occur and why. The lack of established standards for integrating quantitative and qualitative data creates confusion about how findings should be presented or acted upon (Wagner et al., 2012). Furthermore, the teaching of qualitative research, and the discipline’s resulting self-conceptualisation, can mirror a craft apprenticeship, at times differing greatly from quantitative research depending on the institutional context (Breuer and Schreier, 2007). These analytical tensions underscore the importance of mixed methods reflexivity and the need to design studies with integration in mind from the outset, rather than treating it as a post-hoc synthesis.

Additionally, although mixed methods approaches can be very beneficial for digital rights governance studies, not all research questions require them; combining methods without a clear purpose can produce weak research findings. Strategic selection is therefore crucial: researchers should choose mixed methods only when they truly serve the study and enhance its objectives (Bergman, 2008, pp. 11–21).

Perverse incentives and disciplinary stigma

Trends and fads may also shape the research design process. Bryman (2008) refers to mixed methods research as becoming “fashionable” and seen as offering “the best of both worlds” (Bryman, 2008, p. 86). Such fads are especially relevant given the rise of artificial intelligence (AI) tools in research teams and the personal incentives underlying mixed methods research. One incentive for the ‘kitchen sink’ approach comes from the expectation to apply for grants with “innovative methods”. More specifically, funders may borrow corporate-world jargon and push the research design process towards a current research trend (e.g. web scraping, AI) in order to achieve a desired outcome, leading researchers to combine two or more methods indiscriminately. In other contexts, researchers’ attempts to triangulate their data for policy or legal implications can result in conflicting, inconsistent, or misleading findings specific to internet research (Saks, 2024).

Moreover, mixed methods researchers often face disciplinary biases grounded in differing assumptions, such as the stereotype that qualitative approaches are soft or subjective compared with the perceived scientific rigour of quantitative methods (Florczak, 2014). These assumptions and stigmas can influence funding, opportunities for publication, and even institutional support (Wolfenden, 2019). This is exacerbated by a “culture war” between the two research traditions and declining respect for qualitative methods under modern university management styles (Beuving and De Vries, 2020, pp. 42–66).

Conclusion

In this article, we argued that greater reflexivity is needed regarding which disciplines and methods are combined, why certain approaches prevail, and how mixed or multi-method designs can better support collaboration. Our scoping review of Internet Policy Review’s five most recent volumes found that two-thirds of articles draw from multiple disciplines, and that interdisciplinary work is more likely than single-discipline work to employ more than one method (62% vs. 38%). To conclude, we suggest that, when thoughtfully applied, multi- and mixed methods can deepen analyses and provide a more holistic understanding, while indiscriminately combining methods without clear justification, the so-called ‘kitchen sink’ approach, risks overcomplicating research agendas and diluting their insights. Such reflexivity not only strengthens collaboration across diverse fields but also increases the quality and transparency of research and broadens engagement in digital rights governance research (Jamieson, Govaart, and Pownall, 2023).

References

Abrahams, A. (n.d.). Social media exposed. No Starch Press. https://nostarch.com/social-media-exposed

Abrahams, A., & Aljizawi, N. (2020, June 5). Middle East Twitter bots and the covid-19 infodemic. Workshop Proceedings of the 14th International AAAI Conference on Web and Social Media. Workshop on Cyber Social Threats (CySoc 2020), US. https://doi.org/10.36190/2020.17

Ahmed, S. K. (2024). The pillars of trustworthiness in qualitative research. Journal of Medicine, Surgery, and Public Health, 2, 100051. https://doi.org/10.1016/j.glmedi.2024.100051

Alharahsheh, H. H., & Pius, A. (2020). A review of key paradigms: Positivism vs interpretivism. Global academic journal of humanities and social sciences, 2(3), 39–43.

Amnesty International. (n.d.). Massive data leak reveals Israeli NSO Group’s spyware used to target activists, journalists, and political leaders globally. Amnesty International.

Benjamin, G. (2021). What we do with data: A performative critique of data ‘collection’. Internet Policy Review, 10(4). https://doi.org/10.14763/2021.4.1588

Bergman, M. (2008a). The practice of a mixed methods research strategy: Personal, professional and project considerations. In Advances in mixed method research (pp. 53–65).


Bergman, M. M. (2008). The straw men of the qualitative-quantitative divide and their influence on mixed methods research. In Advances in mixed methods research (pp. 11–21).

Beuving, J., & De Vries, G. (2020). Teaching qualitative research in adverse times. Learning and Teaching, 13(1), 42–66. https://doi.org/10.3167/latiss.2020.130104

Biesta, G. (2010). Pragmatism and the philosophical foundations of mixed methods research. In A. Tashakkori & C. Teddlie (Eds), Pragmatism and the philosophical foundations of mixed methods research (pp. 95–118). SAGE Publications, Inc. https://doi.org/10.4135/9781506335193.n4

Breuer, F., & Schreier, M. (2007). Issues in learning about and teaching qualitative research methods and methodology in the social sciences. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 8(1). https://doi.org/10.17169/FQS-8.1.216

Brown, S.-A., Sparapani, R., Osinski, K., Zhang, J., Blessing, J., Cheng, F., Hamid, A., MohamadiPour, M. B., Lal, J. C., Kothari, A. N., Caraballo, P., Noseworthy, P., Johnson, R. H., Hansen, K., Sun, L. Y., Crotty, B., Cheng, Y. C., Echefu, G., Doshi, K., & Olson, J. (2023). Team principles for successful interdisciplinary research teams. American Heart Journal Plus: Cardiology Research and Practice, 32, 100306. https://doi.org/10.1016/j.ahjo.2023.100306

Bryman, A. (2008). Why do researchers integrate/combine/mesh/blend/mix/merge/fuse quantitative and qualitative research? In M. M. Bergman (Ed.), Advances in mixed methods research (pp. 86–100). SAGE Publications.

Buchanan, E. A., & Ess, C. M. (2009). Internet research ethics and the institutional review board: Current practices and issues. ACM SIGCAS Computers and Society, 39(3), 43–49. https://doi.org/10.1145/1713066.1713069

Carah, N., Hayden, L., Brown, M.-G., Angus, D., Brownbill, A., Hawker, K., Tan, X. Y., Dobson, A., & Robards, B. (2024). Observing “tuned” advertising on digital platforms. Internet Policy Review, 13(2). https://doi.org/10.14763/2024.2.1779

Carter, N., Bryant-Lukosius, D., DiCenso, A., Blythe, J., & Neville, A. J. (2014). The use of triangulation in qualitative research. Oncology Nursing Forum, 41(5), 545–547. https://doi.org/10.1188/14.ONF.545-547

Cipers, S., Meyer, T., & Lefevere, J. (2023). Government responses to online disinformation unpacked. Internet Policy Review, 12(4). https://doi.org/10.14763/2023.4.1736

Colangelo, G., & Khandelwal, P. (2025). The many shades of open banking: A comparative analysis of rationales and models. Internet Policy Review, 14(1). https://doi.org/10.14763/2025.1.1821

Creswell, J. W. (1999). Mixed-method research: Introduction and application. In Handbook of educational policy (pp. 455–472). Academic Press.

Creswell, J. W., & Creswell, J. D. (2017). A framework for design. In Research design: Qualitative, quantitative, and mixed methods approaches (pp. 3–26). SAGE Publications.

Creswell, J. W., & Plano Clark, V. L. (2011). Designing and conducting mixed methods research (2nd edn). SAGE Publications.

Cutcliffe, S. H. (1989). Science, technology, and society studies as an interdisciplinary academic field. Technology in Society, 11(4), 419–425. https://doi.org/10.1016/0160-791X(89)90027-4

Dawadi, S., Shrestha, S., & Giri, R. A. (2021). Mixed-methods research: A discussion on its types, challenges, and criticisms. Journal of Practical Studies in Education, 2(2), 25–36. https://doi.org/10.46809/jpse.v2i2.20

Deibert, R. J. (2020). Reset: Reclaiming the internet for civil society. House of Anansi.

D’Ignazio, C., & Klein, L. (2020). 3. On rational, scientific, objective viewpoints from mythical, imaginary, impossible standpoints. Data Feminism. https://data-feminism.mitpress.mit.edu/pub/5evfe9yd/release/5

Edelmann, A., Wolff, T., Montagne, D., & Bail, C. A. (2020). Computational social science and sociology. Annual Review of Sociology, 46(1), 61–81. https://doi.org/10.1146/annurev-soc-121919-054621

Eichhorn, T., Hoffmann, C., & Heger, K. (2022). “Doing gender” by sharing: Examining the gender gap in the European sharing economy. Internet Policy Review, 11(1). https://doi.org/10.14763/2022.1.1627

Erzberger, C., & Kelle, U. (2003). Making inferences in mixed methods: The rules of integration. In A. Tashakkori & C. Teddlie (Eds), Handbook of mixed methods in social & behavioral research (pp. 457–488). SAGE Publications.

Ferret, N. (2025). The realm of digital content regulation as a social space: Sociogenesis of moderation norms and policies on Twitch platform. Internet Policy Review. https://doi.org/10.14763/2025.1.2004

Flick, U. (2004). Triangulation in qualitative research. In U. Flick, E. Kardorff, & I. Steinke (Eds), A companion to qualitative research (pp. 178–183). SAGE Publications.

Florczak, K. L. (2014). Purists need not apply: The case for pragmatism in mixed methods research. Nursing Science Quarterly, 27(4), 278–282. https://doi.org/10.1177/0894318414546419

Forberg, P., & Schilt, K. (2023). What is ethnographic about digital ethnography? A sociological perspective. Frontiers in Sociology, 8, 1156776. https://doi.org/10.3389/fsoc.2023.1156776

Freuler, J. O. (2025). Infrastructural power: State strategies for internet control. Alexander von Humboldt Institute for Internet and Society gGmbH.

Frodeman, R. (2010). The Oxford Handbook of interdisciplinarity. Oxford University Press.

Gibbs, N., & Hall, A. (2021). Digital ethnography in cybercrime research: Some notes from the virtual field. In A. Lavorgna & T. J. Holt (Eds), Researching cybercrimes (pp. 283–299). Springer International Publishing. https://doi.org/10.1007/978-3-030-74837-1_14

Greene, J. C., Caracelli, V. J., & Graham, W. F. (1989). Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 11(3), 255–274. https://doi.org/10.3102/01623737011003255

Gstrein, O. J., Haleem, N., & Zwitter, A. (2024). General-purpose AI regulation and the European Union AI Act. Internet Policy Review, 13(3). https://doi.org/10.14763/2024.3.1790

Hanson-DeFusco, J. (2023). What data counts in policymaking and programming evaluation—Relevant data sources for triangulation according to main epistemologies and philosophies within social science. Evaluation and Program Planning, 97, 102238. https://doi.org/10.1016/j.evalprogplan.2023.102238

Heidary, K., Van Der Rest, J.-P., & Custers, B. (2024). Discrimination grounds and personalised pricing: Consumer perceptions of fairness, norm alignment, legality, and trust in markets. Internet Policy Review, 13(4). https://doi.org/10.14763/2024.4.1809

Huffman, T. (2023). Qualitative inquiry for social justice (1st edn). Routledge. https://doi.org/10.4324/9781003107552

Internet Policy Review. (2025). About. https://policyreview.info/about

Jacob, J., Peters, M., & Yang, T. A. (2020). Interdisciplinary cybersecurity: Rethinking the approach and the process. In K.-K. R. Choo, T. H. Morris, & G. L. Peterson (Eds), National Cyber Summit (NCS) research track (Vol. 1055, pp. 61–74). Springer International Publishing. https://doi.org/10.1007/978-3-030-31239-8_6

Jamieson, M. K., Govaart, G. H., & Pownall, M. (2023). Reflexivity in quantitative research: A rationale and beginner’s guide. Social and Personality Psychology Compass, 17(4), e12735. https://doi.org/10.1111/spc3.12735

Johnson, B., & Gray, R. (2010). A history of philosophical and theoretical issues for mixed methods research. In A. Tashakkori & C. Teddlie (Eds), The SAGE handbook of mixed methods in social and behavioral research (pp. 69–94). SAGE Publications, Inc. https://doi.org/10.4135/9781506335193.n3

Jones, M. O. (2021). State-aligned misogynistic disinformation on Arabic Twitter: The attempted silencing of an Al Jazeera journalist. Open Information Science, 5(1), 278–297. https://doi.org/10.1515/opis-2020-0126

Kira, B. (2025). Regulatory intermediaries in content moderation. Internet Policy Review, 14(1). https://doi.org/10.14763/2025.1.1824

Klassen, A. C., Creswell, J., Plano Clark, V. L., Smith, K. C., & Meissner, H. I. (2012). Best practices in mixed methods for quality of life research. Quality of Life Research, 21(3), 377–380. https://doi.org/10.1007/s11136-012-0122-x

Kollnig, K., Binns, R., Van Kleek, M., Lyngs, U., Zhao, J., Tinsman, C., & Shadbolt, N. (2021). Before and after GDPR: Tracking in mobile apps. Internet Policy Review, 10(4). https://doi.org/10.14763/2021.4.1611

Lim, W. M. (2025). What is qualitative research? An overview and guidelines. Australasian Marketing Journal, 33(2), 199–229. https://doi.org/10.1177/14413582241264619

Martinez, E., & Kirchner, L. (2021). The secret bias hidden in mortgage approval algorithms. The Markup. https://themarkup.org/denied/2021/08/25/the-secret-bias-hidden-in-mortgage-approval-algorithms

Mehta, S., & Erickson, K. (2022). Can online political targeting be rendered transparent? Prospects for campaign oversight using the Facebook Ad Library. Internet Policy Review, 11(1). https://doi.org/10.14763/2022.1.1648

Mik-Meyer, N. (2020). Multimethod qualitative research. In D. Silverman (Ed.), Qualitative research (pp. 357–374). https://www.mik-meyer.com/wp-content/uploads/2025/02/Multimethod_qualitative_research.pdf

Nguyen, D. (2023). How news media frame data risks in their coverage of big data and AI. Internet Policy Review, 12(2). https://doi.org/10.14763/2023.2.1708

Oranga, J. (n.d.). Mixed methods research: Application, advantages and challenges. Journal of Accounting Research, Utility Finance and Development, 3(4).

Pasquetto, I. V., Lim, G., & Bradshaw, S. (2024). Misinformed about misinformation: On the polarizing discourse on misinformation and its consequences for the field. Harvard Kennedy School Misinformation Review, 5(5). https://doi.org/10.37016/mr-2020-159

Perarnaud, C. (2021). A step back to look ahead: Mapping coalitions on data flows and platform regulation in the Council of the EU (2016-2019). Internet Policy Review, 10(2). https://doi.org/10.14763/2021.2.1566

Periscopic. (2018). US gun killings in 2018. Gun Killings in 2018. Periscopic. https://guns.periscopic.com/?year=2018

Polit, D. F., & Beck, C. T. (2010). Generalization in quantitative and qualitative research: Myths and strategies. International Journal of Nursing Studies, 47(11), 1451–1458. https://doi.org/10.1016/j.ijnurstu.2010.06.004

Price, P. C., Jhangiani, R. S., & Chiang, I.-C. A. (2016). Single-subject research. In Research methods in Psychology. http://solr.bccampus.ca:8001/bcc/file/b58ffd04-ca71-4365-95e1-916f2105bd55/1/OTB027-03-Research-Methods-in-Psychology-2nd-Canadian-Edition.pdf

Radsch, C. C., & Khamis, S. (2013). In their own voice: Technologically mediated empowerment and transformation among young Arab women. Feminist Media Studies, 13(5), 881–890. https://doi.org/10.1080/14680777.2013.838378

Rath, M., Pati, B., & Pattanayak, B. K. (2019). An overview on social networking: Design, issues, emerging trends, and security. In Dey, N., Borah, S., Babo, R., & Ashour, A. S. (Eds), Social network analytics (pp. 21–47). Elsevier. https://doi.org/10.1016/B978-0-12-815458-8.00002-5

Rodriguez, M. C. G. (2016). “The stories we tell each other”: Using technology for resistance and resilience through online narrative communities. In Emotions, technology, and health (pp. 125–147). Academic Press.

Rodríguez-Modroño, P., Pesole, A., & López-Igual, P. (2022). Assessing gender inequality in digital labour platforms in Europe. Internet Policy Review, 11(1). https://doi.org/10.14763/2022.1.1622

Rudnik, A. (2024). Machinery of dissent: Exploring the techno-social practices of modern protests. Internet Policy Review, 13(4). https://doi.org/10.14763/2024.4.1816

Scharrer, E., & Ramasubramanian, S. (2021). Quantitative research methods in communication: The power of numbers for social justice (1st edn). Routledge. https://doi.org/10.4324/9781003091653

Schneider, N. (2022). Governable spaces: A feminist agenda for platform policy. Internet Policy Review, 11(1). https://doi.org/10.14763/2022.1.1628

Seipp, T. J., Helberger, N., De Vreese, C., & Ausloos, J. (2024). Between the cracks: Blind spots in regulating media concentration and platform dependence in the EU. Internet Policy Review, 13(4). https://doi.org/10.14763/2024.4.1813

Slade, E., Kern, P. A., Kegebein, R. L., Liu, C., Thompson, J. C., Kelly, T. H., King, V. L., DiPaola, R. S., & Surratt, H. L. (2023). Collaborative team dynamics and scholarly outcomes of multidisciplinary research teams: A mixed-methods approach. Journal of Clinical and Translational Science, 7(1), e59. https://doi.org/10.1017/cts.2023.9

Sofaer, S. (1999). Qualitative methods: What are they and why use them? Health Services Research, 34(5 Pt 2), 1101–1101.

Suryotrisongko, H., & Musashi, Y. (2019). Review of cybersecurity research topics, taxonomy and challenges: Interdisciplinary perspective. 2019 IEEE 12th Conference on Service-Oriented Computing and Applications (SOCA), 162–167.

The House of Commons Subcommittee on International Human Rights of the Standing Committee on Foreign Affairs and International Development. (2025). Digital transnational repression: Tactics, impacts, and recommendations to combat it. https://citizenlab.ca/wp-content/uploads/2025/01/Aljizawi-WrittenTestimony-DigitalTransnationalRepression-SDIR-2025-1.pdf

Vredeveldt, A., Given-Wilson, Z., & Memon, A. (2023). Culture, trauma, and memory in investigative interviews. Psychology, Crime & Law, 1–21. https://doi.org/10.1080/1068316X.2023.2209262

Wagner, K. D., Davidson, P. J., Pollini, R. A., Strathdee, S. A., Washburn, R., & Palinkas, L. A. (2012). Reconciling incongruous qualitative and quantitative findings in mixed methods research: Exemplars from research with drug using populations. International Journal of Drug Policy, 23(1), 54–61. https://doi.org/10.1016/j.drugpo.2011.05.009

Wang, Y., & Sandner, J. (2019). Like a “frog in a well”? An ethnographic study of Chinese rural women’s social media practices through the WeChat platform. Chinese Journal of Communication, 12(3), 324–339. https://doi.org/10.1080/17544750.2019.1583677

Warfield, K., Hoholuk, J., Vincent, B., & Camargo, A. D. (2019). Pics, dicks, tits, and tats: Negotiating ethics working with images of bodies in social media research. New Media & Society, 21(9), 2068–2086.

Whittaker, J., Looney, S., Reed, A., & Votta, F. (2021). Recommender systems and the amplification of extremist content. Internet Policy Review, 10(2). https://doi.org/10.14763/2021.2.1565

Wolfenden, H., Sercombe, H., & Tucker, P. (2019). Making practice publishable: What practice academics need to do to get their work published, and what that tells us about the theory-practice gap. Social Epistemology, 33(6), 555–573. https://doi.org/10.1080/02691728.2019.1675098

Zygmuntowski, J. J., Zoboli, L., & Nemitz, P. F. (2021). Embedding European values in data governance: A case for public data commons. Internet Policy Review, 10(3). https://doi.org/10.14763/2021.3.1572

Footnotes

1. Each article was coded by two of the authors. Where there was a dispute, a third author reviewed the article and all three deliberated until consensus was reached. To identify the methods employed, we used what each author described as their methodology in the article; if the author(s) were not explicit about their methodology, we inferred it by reading the article. To identify the disciplines or fields, we used whatever the authors said they were drawing from or grounding their research in; if the authors were not explicit, we looked at their reference list and the authors’ affiliations.