Social media and mental harms under the Digital Services Act
Abstract
Numerous empirical studies indicate that social media use is correlated with, and may sometimes cause, mental harms like addiction, anxiety and depression, or the lowering of cognitive abilities. In 2023, the European Parliament called on the European Commission to introduce new rules to combat these problems. However, it might take years before such new laws are adopted and become applicable. In this article, we demonstrate how a law already in effect – the Digital Services Act – offers the Commission the tools necessary to combat certain mental harms stemming from social media’s design and functioning within the ad-based business model. We show that the risk assessment and mitigation obligations addressed to the providers of Very Large Online Platforms cover three “mental goods:” the mental well-being of individuals, mental health (as a component of public health), and the fundamental right to mental integrity. This article offers an elaboration and theorisation of these concepts to enable more effective application of the DSA’s requirements, both by providers engaging in risk assessment and by the Commission serving as the enforcer.
1. Introduction
For several years now, scholars and policymakers have been sounding the alarm about the potential links between social media use and various types of mental harms (Bhargava & Velasquez, 2021; Hunt et al., 2018; Surgeon General, 2023). Commentators invoke them under the labels of addiction (Bernstein, 2023; Rosenquist et al., 2022), attention depletion (Newman, 2020; Wu, 2019), or mental health issues (Haidt, 2024), often as umbrella terms for wider problems they intuit. Indeed, empirical data suggests that some people are addicted to social media or spend more time on them than they wish (ThinkNow, 2019; Vogels et al., 2022), that social media use is correlated with a lowering of various cognitive capacities (Chiossi et al., 2023; Sharifian & Zahodne, 2020), and that it is linked to a higher chance of suffering from mental disorders like depression or anxiety (Keles et al., 2020). Some studies have also provided evidence supporting the claim of a causal link between stopping social media use and improved psychological well-being (Hunt et al., 2018; Lambert et al., 2022). In 2023, the European Parliament called on the European Commission to introduce new rules for combating social media addiction and related mental health problems (EP, 2023).
This article, resulting from a collaboration between a lawyer and a psychologist, argues that the recently applicable Digital Services Act (the “DSA”) already offers the European Commission tools to combat many mental harms associated with social media use. We demonstrate how the risk assessment and mitigation obligations (DSA, art. 34-35) addressed to the providers of Very Large Online Platforms (VLOPs) cover mental harms. We do so by pointing out the three “mental goods” protected by art. 34, namely, the mental well-being of individuals, public mental health, and the fundamental right to mental integrity (guaranteed by the Charter). We elaborate on the possible meanings of these concepts, drawing on both legal and empirical literature. Our aim is to assist the VLOPs’ providers and researchers in risk assessment, as well as the independent auditors and the enforcer (the European Commission) in oversight and enforcement. This theorisation and operationalisation of the three mental goods protected by the DSA constitutes the primary contribution of this article.
We further argue that to use the tools the DSA makes available effectively, one first needs to understand exactly how and why social media, as they are today, threaten these mental goods. Our claim is that the risk of various mental harms is inherent in the ad-driven business model that most social media providers rely on (the so-called “attention economy”). It is in the direct interest of social media providers that users spend as much time as possible using their products, develop habits, and generally find themselves in mental states that make them more receptive to advertising (like experiencing specific emotions). In this sense, the interests of users and providers diverge due to the specific business model adopted. Mental harms associated with social media use might stem from two primary sources: the behaviour of other users (sharing drastic content, engaging in cyberbullying, spreading misinformation, etc.) and the behaviour of social media providers (aiming to increase engagement and ad efficacy). In this paper, we focus solely on the latter.
The argument proceeds as follows. First, we scrutinise the incentives inherent in social media providers’ business model – namely, to have users spend as much time in-app as possible – and the various functionalities deployed to achieve this goal. Second, building upon the empirical literature in psychology and psychiatry, we outline a general picture of the mental harms of social media use, organised around these incentives. Third, we introduce the risk management obligations that the DSA imposes on certain social media providers, focusing on the obligations to mitigate risks of mental harms. Fourth, we analyse the possible meanings of the three mental goods protected by the DSA: (a) mental well-being of individuals, (b) public health (including public mental health) and (c) mental integrity (as a fundamental right). Fifth, we further operationalise these mental goods by discussing how the art. 35 requirements for risk mitigation should be understood in their context. Sixth, we discuss the limitations of the DSA as a tool for combating mental harms associated with social media use.
2. Providers’ incentives in the attention economy and social media’s design
To understand how and why social media platforms engage their users and where the systemic risks of mental harm come from, it is necessary to analyse the business model on which their providers rely. Arguably, risks of certain mental harms are not inherent to the product of social media as such but to their providers’ business model (as one of us has argued extensively; see Pałka, 2021, 2025).
The conventional wisdom among many legal scholars holds that users “pay” for social media access with personal data concerning them (De Franceschi, 2022; Hacker, 2019; Metzger, 2017; DCD art. 3.1). This view strikes us as incomplete. Personal data still needs to be monetised, and the most common way is through targeted advertising (Alphabet, 2024; Meta, 2024; Pinterest, 2024). In this business model, the revenue of social media providers depends, in general, on (a) the price they can charge for ad placement and (b) the number of ads they can display. The former links to the effectiveness of ad-delivery systems, which grows with more data amassed (though note the critical assessment in Hwang, 2020). The latter links to the amount of time people spend using the service. Consequently, users can be said to “pay” for social media access by using it (which, admittedly, is counterintuitive; see Rosenquist et al., 2022) and encountering ads along the way (which, in turn, is quite intuitive and, in itself, not new).
The attention economy is the concept many scholars and commentators have used to describe the socioeconomic reality in which such business models prevail (Newman, 2020; Trzaskowski, 2022; Williams, 2018; Wu, 2016, 2019). Arguably, the attention economy – i.e., the widespread presence of media providers relying on a business model of offering cheap or free products to consumers in exchange for featuring ads, thereby "reselling their attention" to advertisers – predates the rise of the internet and social media. Tim Wu traces its origins back to the 19th century and very cheap newspapers full of ads, and then demonstrates how each subsequent media type, including radio and television, partly relied on the attention economy (2016). Yet, the widespread usage of social media can be deemed to differ from the previous incarnations both quantitatively and qualitatively. Regarding the former, not only do individuals use social media for several hours a day (Statista, 2025) – this was, arguably, true of television as well – but they also open these apps throughout the day, including right after waking up (Tamašiūnas, 2024), when at work (Yu & Cao, 2018), or when commuting (Tommasi et al., 2023). Regarding the latter, as we discuss below, not only do social media providers have a whole range of tools encouraging continued use, but they can also deploy them in a highly personalised manner, relying on vast amounts of data concerning both the targeted user and the population at large (Mik, 2016; Pałka, 2023; Yeung, 2016).
Consequently, in the words of Jack Balkin, the providers of social media have “perverse” incentives “to surveil, addict and manipulate their end users” (Balkin, 2018, p. 1). It is in the providers’ direct interest for users to spend as much time in-app as possible so that corporations can collect more data and display more ads (Bhargava & Velasquez, 2021; Pałka, 2021, 2025). Moreover, they have the technical and design abilities to act upon these incentives. Currently, providers do so by deploying various mechanisms to maximise user engagement and time-on-device (Langvardt, 2019; Ronkainen, 2023). Some design features, like the infinite scroll, exist to keep users from leaving the app (Tortorici, 2020). Others, like well-timed notifications, operate to have users return as often as possible (Pielot et al., 2014).
Generally, the providers’ goal is for users to develop a habit of constantly checking their apps (Ronkainen, 2023), and for this purpose, they employ numerous techniques and mechanisms (Bhargava & Velasquez, 2021; Montag et al., 2021). For example, “pull-to-refresh” features, notification systems and even the intentional temporary suspension of the loading screen (Morgans, 2017) serve to deliver so-called variable reinforcements, i.e., rewards distributed on an unpredictable schedule (Bhargava & Velasquez, 2021). Such a schedule motivates app use more effectively than a regular schedule of reinforcements (Ferster & Skinner, 1957; Wu, 2016), thus mimicking slot machines (Griffiths, 2018; Williams, 2018). Moreover, an interface based on endless scrolling presents content in a way that minimises the natural stopping cues users could otherwise treat as an opportunity to stop using social media (Williams, 2018). The extensive functionality for giving and receiving social reinforcements (e.g., likes, comments) is another example of fostering “addiction” by exploiting the natural human need for social approbation and validation (Sherman et al., 2016). Some of these design choices have been discussed in the rich literature on so-called dark patterns (Esposito & Ferreira, 2024; Luguri & Strahilevitz, 2021).
Finally, and somewhat surprisingly, social media providers also have incentives to curate content in a way that optimises for toxicity; empirical research shows that “a lower exposure to toxic content on social media decreases engagement and post clicks” (Beknazar-Yuzbashev et al., 2024, p. 678). Indeed, there exist categories of speech, usually referred to as “lawful but awful” (Keller, 2022), that might be seen as “toxic” or “harmful,” which, however, the law does not consider illegal, and which the platforms do not necessarily want to ban, for freedom of expression reasons. In such situations, social media providers face a choice: whether to allow such content but “downrank” or “demote” it (making fewer users, if any, see it on their newsfeeds), or whether to amplify it and make more users see it. From a purely economic point of view, counterintuitively, the providers have an incentive to do the latter.
When providers act upon these incentives, the result is what many users might perceive as harms. Let us now examine the empirical evidence for the various types of mental harm stemming from providers’ choices in these areas.
3. Social media design and mental harms
The first and most straightforward potential adverse consequence of social media design is that users might spend more time in-app than they would like to. What constitutes the providers’ goal might be considered a harm by the users. For example, 36% of American adolescents believe they spend too much time on social media (Vogels et al., 2022). This problem of excessive social media use, sometimes occurring in much more extreme versions, has been discussed by law and policy scholars under the label of “addiction” (Bernstein, 2023; Esposito & Ferreira, 2024; Rosenquist et al., 2022; Zakon, 2020). Obviously, there is a fuzzy line between using social media (much) more than one wants to and being “addicted” to it. Even though social media addiction has not yet been officially recognised as a mental disorder (Shannon et al., 2022), scholars have proposed criteria for such a potential qualification (symptoms of addiction): salience, mood modification, withdrawal symptoms, tolerance, conflict and relapse (Andreassen et al., 2012; Griffiths, 2005). This approach was based on the model of addiction within the biopsychosocial framework (Griffiths, 2005) and draws on long-standing considerations regarding established behavioural addictions, such as gambling, and their diagnosis. Due to the ongoing debate surrounding internet gaming disorder, which has resulted in the inclusion of additional criteria (deception, loss of interest in previously enjoyed activities, continuing an activity despite problems) (Petry et al., 2014), some researchers also take these additional criteria into account in the context of social media addiction (Van Den Eijnden et al., 2016). Crucially, however, one does not have to meet these clinical criteria to experience a problem. In research on adolescents, scholars estimate that roughly 7% (across 29 countries) engage in problematic social media use as defined by the clinical criteria (Domoff et al., 2022).
At the same time, 57% of British girls and 37% of boys agree with the statement, “I think I’m addicted to social media” (Devlin, 2024). It is also worth mentioning that excessive social media use carries an opportunity cost (Buchanan, 1991); time spent scrolling one’s phone cannot be spent on other things like sleep, play (positively correlated with mental well-being; Haidt, 2024), or anything else (Turel & Serenko, 2012).
Second, excessive use of social media – i.e., the state that providers want to achieve – is positively correlated with mental disorders other than addiction and some of their symptoms, e.g., heightened symptoms of anxiety (Schou Andreassen et al., 2016; Vannucci et al., 2017; Woods & Scott, 2016), depression (Keles et al., 2020; Shensa et al., 2017), eating disorders (McLean et al., 2015; Santarossa & Woodruff, 2017), body image concerns (Fardouly et al., 2018), or even self-harm (Memon et al., 2018). There are many mechanisms through which the overuse of social media can cause these mental issues, e.g., by reducing the quality of relationships with close ones (e.g., due to phubbing; Vanden Abeele et al., 2019), intensifying contact with specific, potentially harmful content (e.g., idealised beauty standards), or limiting other developmental activities, such as education or hobbies. Again, not everyone experiencing depressive mood or occasional anxiety will qualify as suffering from mental disorders like recurrent depressive disorder (World Health Organization, 2022a; 6A71) or generalised anxiety disorder (World Health Organization, 2022a; 6B00). Nevertheless, from the user’s perspective, the experience of harm might often occur long before a psychiatric diagnosis is warranted, and symptoms experienced by users on a massive scale can be a problem from a public health perspective. Importantly, it is not necessarily the case that excessive use causes mental health problems; possibly, users who suffer from depression or anxiety deploy social media as a coping tool (or the influence can run both ways, in a vicious cycle; Brewer, 2021; Wolfers & Utz, 2022).
That said, the first studies documenting the causal effect of social media usage on mental health – where the experimental group whose use of social media was decreased improved on mental health metrics vis-à-vis the control group – have been published (Hunt et al., 2018; Lambert et al., 2022). Moreover, Twenge and Haidt, having analysed all (in their view) the possible causes of the ongoing mental health crisis in adolescents, concluded that social media use is the best explanation (Haidt, 2024; Twenge, 2023).
Third, social media users might experience negative emotions and unwanted mental states due to the functioning of algorithmic systems. As noted above, empirical research demonstrates that decreasing toxic content decreases user engagement (Beknazar-Yuzbashev et al., 2024). Further, in an infamous study published in 2014, researchers cooperating with Facebook showed that, using the platform’s recommender systems, they could change users’ emotions from positive to negative merely by tweaking what content to display (Kramer et al., 2014). There is a possible economic reason for doing so; experiencing sadness or anxiety makes one more willing to pay a higher price for an advertised product and increases willingness to consume certain types of goods (Garg & Lerner, 2013; Kemp et al., 2013). Sometimes, it is in the providers’ interest that users experience negative emotions. Sometimes, it is only a consequence of using engagement-maximising algorithms. Research by Amnesty International in collaboration with the Algorithmic Transparency Institute (ATI) found that TikTok users who showed a one-time interest in sad content were subsequently exposed to large amounts of sad content, particularly on the ‘For You’ page. At the same time, young people interested in mental health were relatively quickly steered by the algorithm towards potentially harmful content (e.g., content romanticising depressive thoughts or self-harm) (Amnesty International, 2023).
Fourth, some studies suggest that various online services, including social media platforms, can impair one’s cognitive abilities (Chiossi et al., 2023; Firth et al., 2019; Sharifian & Zahodne, 2020). Some scholars argue that social media are an important part of the modern online world responsible for reducing attention spans (Mark, 2023). Research shows that the mere presence of a smartphone on one’s desk adversely affects working memory capacity and functional fluid intelligence, even if the user manages to resist the temptation to look at it (Ward et al., 2017). Continuous use of social media has been shown to be linked with lower attention capacity and a lower ability to solve numerical tasks (Hadar et al., 2017). Studies also suggest that social media use might adversely affect short-term memory in older adults (Sharifian & Zahodne, 2020), while TikTok, due to the specific nature of the content displayed, may reduce prospective memory abilities (while other social media do not) (Chiossi et al., 2023). It should be noted that the results of studies on the relationship between social media use and cognitive decline should be interpreted with caution, as not all studies confirm harmful effects (Lara & Bokoch, 2021). Furthermore, establishing the causality of this relationship is difficult, especially since many studies are correlational in nature. At the same time, given the highly persuasive nature of social media platforms and applications, such risks should be treated as real.
All of these harms are problematic from both individual and collective perspectives. For individual users, experiencing negative emotions, mental disorders, cognitive impairments and excessive social media use might lower quality of life, psychological well-being and productivity. From the collective perspective, these problems not only directly translate into lower aggregate economic output and higher healthcare costs but can also be observed in less tangible areas. For example, scholars have coined the term “phubbing” to describe the practice of using one’s smartphone during social interactions, e.g., when a parent is scrolling Instagram while (not) looking after a child, or when one partner keeps checking their phone during a meal with the other (Vanden Abeele et al., 2019). Consequently, such harms easily externalise and affect larger social systems (even indirectly), somewhat analogously to passive smoking.
Given all this evidence, as indicated in the introduction, calls for regulation have been made (EP, 2023). Even if European policymakers decide to pass new laws on these matters, it will be years before they become applicable. Yet, some tools are already available. In the remainder of this paper, we analyse how one such tool – the DSA – can be employed to combat the mental harms discussed above.
4. Systemic risks of mental harm within the DSA’s logic
The DSA is a complex and horizontal regulation; scholars have already offered excellent overviews of its logic and contents (Broughton Micova & Calef, 2023; Broughton Micova, Schnurr, Calef, & Enstone, 2024; Busch & Mak, 2021; Cauffman & Goanta, 2021; Gregorio & Dunn, 2022; Husovec, 2024; Söderlund et al., 2024; Turillazzi et al., 2023; van Hoboken et al., 2023). In this piece, we focus solely on those elements of the DSA which, in our view, might be directly helpful in combating mental harms stemming from social media’s business model. These are the risk assessment (art. 34) and mitigation (art. 35) obligations imposed on the providers of Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), i.e., services with more than 45 million average monthly active users in the Union (art. 33.1). Currently (in 2025), 23 VLOPs operate in the EU, including prominent social media: Facebook, Instagram, LinkedIn, Pinterest, Snapchat, TikTok, X (formerly Twitter) and YouTube (European Commission, 2025).
Providers of VLOPs must conduct risk assessments (art. 34) and implement mitigation measures for the risks identified (art. 35). The DSA requires that they identify and analyse systemic risks stemming from the design, functioning, or use made of the service (art. 34.2). It further identifies four high-level categories of risk: (1) dissemination of illegal content; (2) negative effects on fundamental rights; (3) negative effects on civic discourse, elections, or public security; (4) risks concerning gender-based violence, public health, protection of minors and serious negative consequences to physical and mental well-being of individuals (art. 34.1.).
From the list of risks that should be assessed and mitigated under the DSA, one can deduce a long list of “goods” that the DSA protects. Within, three “mental goods” can be identified:
- Mental well-being of individuals (explicitly mentioned in art. 34.1.4);
- Public mental health (as an element of public health, art. 34.1.4);
- Fundamental right to mental integrity (not mentioned explicitly, but included among the fundamental rights (art. 34.1.2), as the Charter guarantees it in art. 3).
We discuss the meaning of each of these “mental goods,” as well as the various ways in which risks to them can materialise, in the following sections. The important takeaway, for now, is that systemic risks to each of these goods must be identified and mitigated by the VLOPs’ providers. Regarding risk identification, the regulation underscores that providers should account for the influence of the following factors on systemic risks: recommender systems and algorithmic systems, content moderation, enforcement of terms of service, and ad-delivery systems and data practices (art. 34.2). If a VLOP’s provider identifies a systemic risk, they should “put in place reasonable, proportionate and effective mitigation measures” (art. 35.1), tailored to the specific risk identified. The DSA lists possible examples of such mitigation measures, including changes to the design or interface (art. 35.1.a), to the algorithmic (art. 35.1.d) and ad-delivery systems (art. 35.1.e), and raising awareness about the risks among users (art. 35.1.i).
Within this logic, the primary decision-makers when it comes to identifying the risks and choosing the most appropriate mitigation measures are the VLOPs’ providers themselves. However, and very importantly, the European Commission (EC) oversees the compliance of the VLOPs’ providers with these requirements (art. 65). It has the power to request information from the providers (art. 67), conduct inspections (art. 69), order interim measures and accept commitments (art. 70-71) and, ultimately, fine the providers up to 6% of their total worldwide annual turnover in the preceding financial year (art. 74). In essence, the DSA tasks the corporations with diligently identifying the risks and effectively mitigating them, but the European Commission can check whether they have done so properly. This means that the EC receives the yearly reports and assesses whether all relevant risks have been identified and whether the proposed mitigation measures are sufficient. As of the time of writing, it has opened several investigations, including against social media providers, namely two against TikTok (European Commission, 2024a, 2024b) and two against Meta (European Commission, 2024c, 2024d).
As the list of goods protected by the DSA is long, one can easily imagine some of them receiving less attention than others. Hence, in the remainder of the paper, we take a close look at the three mental goods protected by the DSA, discuss the potential sources of systemic risks to them, and offer preliminary ideas regarding these risks’ mitigation. We hope that this exercise will be useful for the VLOPs’ providers themselves, the auditors, as well as the European Commission engaging in oversight and enforcement.
5. Mental well-being
The mental well-being of persons is the first “mental good” protected by the DSA. The regulation’s art. 34.1.d obliges VLOPs’ providers to identify and mitigate risks of “any actual or foreseeable negative effects in relation to gender-based violence, the protection of public health and minors and serious negative consequences to the person’s physical and mental well-being.” In this section, we discuss how the notion of well-being could and should be understood, what its relation to public health is, and which of the mental harms discussed in Section 3 could be perceived as (materialisations of) risks to mental well-being.
It is worth pointing out that the concept of mental well-being is closely related to the concept of mental health, which will be discussed in the following section as a component of public health. This close relation is well illustrated by the definitions of both concepts used by the World Health Organisation (WHO). The WHO defines mental health by referring to the concept of well-being: “a state of mental well-being that enables people to cope with the stresses of life, realize their abilities, learn well and work well and contribute to their community.” (World Health Organization, 2022b). At the same time, as part of its definition of well-being, the WHO compares it to health: “Well-being is a positive state experienced by individuals and societies. Similar to health, it is a resource for daily life and is determined by social, economic and environmental conditions.” (World Health Organization, 2021). Although these concepts are highly congruent, they are also partially distinct (Keyes, 2002; McAneney et al., 2015). Some studies show how the presence of a mental disorder (more often understood as a component of mental health) does not necessarily equal the absence of well-being (more often understood as a subjective feeling of happiness and satisfaction with life) and, conversely, the absence of a mental illness does not necessarily translate into experiencing well-being (Weich et al., 2011).
In psychological discourse, well-being is most often framed in two ways: subjective well-being and psychological well-being. The concept of subjective well-being emphasises the hedonic understanding of happiness; well-being is the subjective feeling of satisfaction and enjoyment of life (Chen et al., 2013). The notion of psychological well-being describes well-being understood in eudaimonic terms, that is, “the striving for perfection that represents the realisation of one's true potential” (Ryff, 1995, p. 100). The paradigm of psychological well-being has resulted in theoretical models proposing specific components that determine well-being. Comparing the two approaches, it can be said that models of psychological well-being aspire to determine the objectified components of well-being and a good life, while research on subjective well-being focuses on identifying the factors that cause people to declare a subjective sense of happiness. From the perspective of possible paths of DSA application and enforcement, the concept of psychological well-being seems more useful due to its objectified and more structured approach. At the same time, in our view, the adoption of this concept need not exclude important factors of subjective well-being. The threats to psychological well-being arising from social media use, discussed below, are valid for both perspectives, as the perspectives partly overlap in what they consider to constitute well-being (Ryan & Deci, 2001). For example, the psychological well-being model discussed below includes a component of close relationships with others, and research in the subjective well-being paradigm shows that satisfying family relationships correlate positively with reported happiness (Diener et al., 2018).
A model of psychological well-being commonly accepted by psychologists and operationalisable for the purposes of DSA enforcement was proposed by Ryff (1989) and further validated by numerous studies (Henn et al., 2016; Ryff & Keyes, 1995; van Dierendonck et al., 2008). This model comprises the following components: (1) self-acceptance (positive attitude toward the self); (2) positive relations with others (having satisfying and warm relationships with people); (3) autonomy (being self-determining, independent and able to resist social pressure to think and behave in certain ways); (4) environmental mastery (having a sense of competence and mastery in managing the environment); (5) purpose in life (having goals in life and a sense of directedness); (6) personal growth (having a feeling of development and a sense of realising one’s potential). Based on Ryff's model, at least several of the components of well-being she mentions may be negatively affected to some degree by social media use, even if one ignores the obviously well-being-degrading experiences of mental disorders (described in the next section in the context of public mental health). For example, regarding self-acceptance, the threat may lie in social media's promotion of unrealistic beauty standards, through, among others, algorithms promoting beauty content and beauty filters (Fardouly et al., 2018; Rowland, 2022). According to Meta’s internal research, a third of British and American teen girls declare that using Instagram worsens their satisfaction with their bodies (Wells et al., 2021). Further, the aforementioned phenomenon of phubbing illustrates well that social media, in addition to enabling social interactions, also have the potential to deteriorate intimate relationships in offline reality (Vanden Abeele et al., 2019; Wang et al., 2017), undermining a substantial component of well-being: close and warm relationships with others.
Addictive design may threaten the sense of autonomy and agency: it makes many users feel that they are using social media more than they would like to, hindering, to some degree, their control over their own behaviour.
Of all the legal goods discussed, the concept of well-being emphasises to the greatest extent that users should be protected not only from mental harm, such as officially recognised mental disorders, but also from the loss of a satisfying quality of life or a sense of fulfilment. At the same time, the DSA uses this notion in the phrase “serious negative consequences for physical and mental well-being of individuals,” which suggests that protection should not address all harms to well-being caused by the use of social media, but only those significantly disrupting it. For example, one could argue that merely making some users experience more negative emotions, as was done in the study by Kramer and colleagues (2014), even if it constitutes a negative consequence for mental well-being, should not yet be seen as a serious one (as people becoming sad is a widespread and natural occurrence in daily life). However, we argue that many mental problems that the literature associates with social media use, developing over long periods of time, can be considered as seriously threatening the well-being of individuals. Therefore, their emergence should be the subject of VLOPs’ risk assessment and the implementation of mitigation measures. Put differently, even though the DSA’s text is insufficient to delineate in abstracto the line between serious and non-serious harms to mental well-being, the VLOPs’ providers should be the ones required to argue that a harm is not serious enough to be mitigated, as problems that in isolation might seem unserious can, over time, constitute risk factors for much more serious ones. In this sense, one must analyse the impacts on mental well-being and public mental health jointly, and if the impact on the former can lead to a negative impact on the latter, it should, in our view, be considered serious.
6. Mental health (as public health)
The second mental good protected by the DSA is mental health as a component of public health. Art. 34.1.d obliges the providers of VLOPs to identify and mitigate the risk of “any actual or foreseeable negative effects in relation to (…) the protection of public health.” What is public health? Many definitions have been proposed (see the scoping review in Azari & Borisch, 2023); this variety seems to result from the fact that public health is both (i) the science of protecting the health of populations (and the social effort based on that science) and (ii) one of the societal goals (the state of healthiness in which the public wants to find itself). Childress and colleagues proposed the following characterisation:
Public health is primarily concerned with the health of the entire population, rather than the health of individuals. Its features include an emphasis on the promotion of health and the prevention of disease and disability (…) (Childress et al., 2002).
In this sense, public health is a collective and paternalistic good – in contrast to the concept of mental well-being, which, whether we use the paradigm of psychological or subjective well-being, is individualistic in nature. Public health is collective because its goal is the overall health of the population rather than of any specific person. It is paternalistic because objective ways of measuring one’s health exist (even if based partly on individuals’ subjective assessment). That mental health makes up an important element of public health is, at this point, beyond controversy (Herman & Jané-Llopis, 2005; Prince et al., 2007; Wahlbeck, 2015), even if some national constitutional courts still hesitate to assert that the right to health unequivocally includes mental health (Bublitz, 2020b, p. 388). Wahlbeck writes:
Public mental health deals with mental health promotion, prevention of mental disorders and suicide, reducing mental health inequalities and governance and organization of mental health service provision (Wahlbeck, 2015, p. 36).
Crucially for this paper’s argument, the use of VLOPs might in itself (regardless of the problems with the content presented on social media, for example, cyberbullying or health-related disinformation campaigns) present a risk to public health, more specifically mental health, on the population level (Haidt, 2024; Twenge & Hamilton, 2022; Udupa et al., 2023). As indicated in section 3 above, there exists robust evidence of a correlation between social media use and negative mental health metrics (Keles et al., 2020; Vannucci et al., 2017; Woods & Scott, 2016) and a growing body of empirical evidence for the causal claim (Haidt, 2024; Hunt et al., 2018; Lambert et al., 2022; Twenge, 2023). That the phenomena discussed in the literature are a problem from a public health perspective is understandable given the range of influence that VLOPs have – all the risks posed by using social media have an impact on a mass scale, thus constituting environmental risk factors. At the same time, some of the negative phenomena indicated in the studies, even without the argument of “massive scale,” have a high potential to affect the entire social system. For example, even though advanced stages of mental disorders such as depression or eating disorders do not affect the general user population, they still pose a serious challenge to public health and public policies, contributing, for example, to increased mortality from suicide (Holma et al., 2010) or significant health deterioration (as in the case of anorexia nervosa; Arcelus et al., 2011). Another example of a threat to public mental health is the previously mentioned phenomenon of phubbing – ignoring close people as a result of social media use.
The identification of this phenomenon in the scientific literature (Vanden Abeele et al., 2019), together with the evidence of its negative consequences for close relationships (Wang et al., 2017), shows that social media, when overused by an individual, has the potential to negatively affect the user’s entire family and social system, regardless of whether their close family members share the tendency to use social media in such a way.
Consequently, under the DSA art. 34.1., the risk that needs to be managed is the possibility that the design, functioning, or use of a VLOP creates an environmental risk factor in the pathogenesis of mental health problems. This means that the aforementioned mental harms connected with social media use – as harms that have the potential to occur on a large scale and affect the entire social system – should become the subject of risk assessment and the implementation of mitigation measures.
7. The fundamental right to mental integrity
The final mental good protected by the DSA is the fundamental right to mental integrity. Although not invoked explicitly, it is protected within the category of “any actual or foreseeable negative effects for the exercise of fundamental rights” (art. 34.1.b) as it is listed in the Charter of Fundamental Rights of the European Union (“the Charter”). The Charter’s art. 3 reads:
1. Everyone has the right to respect for his or her physical and mental integrity.
2. In the fields of medicine and biology, the following must be respected in particular:
(a) the free and informed consent of the person concerned, according to the procedures laid down by law; (…).
Unlike the two mental goods discussed above, mental integrity is not a term of art used in empirical disciplines like psychology or psychiatry. It is a normative concept currently being developed by lawyers and philosophers (Douglas & Forsberg, 2021; Istace, 2023). Its precise legal meaning is not yet fully established, as the courts have seldom invoked it directly (Bublitz, 2020a; Istace, 2023). However, normative scholars have contributed to the development of the concept. It is often discussed in connection with neurotechnologies potentially enabling direct interference with the functioning of one’s brain or mind (Bublitz & Merkel, 2014; Ienca & Andorno, 2017; Lavazza & Giorgi, 2023). Bublitz and Merkel, distinguishing between violating one’s bodily and mental integrity, provide examples of the latter. These include, among others, secretly adding an otherwise harmless substance eliciting hunger to a welcome drink or covertly spiking employees’ beverages with a chemical that increases their cognitive capacities (Bublitz & Merkel, 2014, pp. 58–59).
Informed consent plays a central role in the concept of mental integrity. What Bublitz and Merkel’s examples have in common is not the presence of any material harm – increased work performance might even be a good thing – but interference with the functioning of one’s mind without the person’s comprehension and permission. The importance of consent is underscored across the scholarship (Douglas & Forsberg, 2021; Lavazza & Giorgi, 2023) and highlighted by art. 3.2.a of the Charter. In this vein, Lavazza formulated the following definition:
Mental Integrity is the individual’s mastery of his mental states and his brain data so that, without his consent, no one can read, spread, or alter such states and data in order to condition the individual in any way (Lavazza, 2018, p. 4).
In this sense, mental integrity is a liberal and individualistic concept, more procedural than substantive. It delegates the decision on what is permissible to the individual (within the boundaries set by the law). An act or a practice interfering with one’s mental state would not be judged by its consequences (positive or negative) but on formal (procedural) grounds. The test question would be: has the person whose mental state is being altered given free and informed consent to such an alteration? Consequently, under the DSA art. 34.1.b., the risk that needs to be managed is the possibility that the design, functioning, or use of a VLOP leads to an alteration of a user’s mental state without his or her consent.
An example of such interference can be found in a 2014 study published by researchers cooperating with Facebook, hinted at above, in section 3. Kramer and colleagues, with the assistance of the platform’s provider, conducted an experiment on almost seven hundred thousand people, testing whether their emotional states can be modified from positive to negative (and vice versa) through exposure to specific content on their newsfeeds. The answer was affirmative. In the words of the study’s authors:
We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient) and in the complete absence of nonverbal cues (Kramer et al., 2014, p. 8788).
Unsurprisingly, the experiment led to a public scandal; the company apologised but claimed that everything was legal, as such use of users’ data was compliant with its terms of service (Meyer, 2014). Scholars offered various accounts of why this was ethically unacceptable (boyd, 2016; Selinger & Hartzog, 2016). As of today, the mere possibility of such an experiment taking place without users’ consent could violate art. 34.1.b. DSA.
The one significant complication stems from the question: which alterations of mental states by the VLOPs’ providers require the users’ consent? It seems that alteration of mental states is an unavoidable consequence of participating in social life. Scholars discussing mental integrity in the context of neuroimplants have proposed a distinction between technological and verbal means of alteration (Douglas & Forsberg, 2021, p. 186). Bublitz made a similar argument, distinguishing between direct and indirect alterations, the latter stemming from sensory inputs and considered lawful (Bublitz, 2020b). Put simply, changing a person’s mental state through a brain implant requires consent to be lawful, while doing so by telling them something does not.
If this were indeed the test, it would be difficult to imagine VLOPs’ providers – at least for now – violating one’s right to mental integrity. However, bearing in mind the results of the Kramer et al. (2014) study, the risk of platforms abusing their ability to influence users with appropriately targeted algorithms must be considered. Although content recommendation systems currently seem to work in a manner predictable to most users, one cannot exclude a scenario in which they are modified to further maximise the platforms’ interests and earning potential while interfering with the mood or attitude of users in ways beyond what they typically expect. Again, the texts of the DSA and the Charter are insufficient to authoritatively state where the line between violation of the fundamental right to mental integrity and acceptable consentless modification of mental states lies. However, we firmly believe that this is a risk that the DSA requires the VLOP providers to manage. Researchers working on social media threats generally do not have access to platform data that would enable them to assess mental integrity interference as understood above. Service providers, however, do, and so will the European Commission, equipped with investigative powers (arts. 67–69 DSA).
8. From theory to practice: What providers and enforcers should check and do
In the sections above, we have discussed the various mental harms that users of social media might suffer, the sources of these risks, as well as the possible meanings of the three mental goods protected by the DSA. In this final section, we operationalise these theoretical and conceptual considerations in the form of concrete steps that the VLOPs’ providers, the auditors, and the European Commission might take.
First, and most fundamentally, we argue that the VLOPs’ providers should acknowledge that their ad-driven business model (i.e., their participation in the so-called “attention economy”) poses inherent systemic risks to the three mental goods protected by the DSA: the mental well-being of individuals, public mental health, and the fundamental right to mental integrity. The existing scientific literature in psychology, psychiatry, and public health might not be sufficient to unequivocally prove that ad-funded social media platforms cause all the mental harms discussed in the previous sections, but, in our view, it is more than sufficient to acknowledge that such a risk is plausible and must be assessed by the VLOPs’ providers. The failure to acknowledge and discuss these risks, or confusing them with risks occurring merely on the level of other users’ behaviour, should be seen as a violation of art. 34 and 35 DSA.
Second, we argue that the risks to various mental goods, stemming from different design choices, should be discussed separately, as they originate from distinct sources and may require distinct mitigation measures. For example, the risk assessments should address questions like: can the user interface (like infinite scroll) or the functioning of algorithmic systems (including the timing of notifications or the choice of content to display) lower users’ self-acceptance (e.g., by facilitating constant comparisons), worsen relations with others (e.g., due to phubbing), or decrease the sense of autonomy (e.g., due to a “feeling of addiction”)? These are the risks to the mental well-being of individuals. Regarding public mental health, one should inquire whether the same sources of risk (user interface, functioning of algorithmic systems, etc.) can lead to occurrences constituting possible environmental risk factors in the pathogenesis of mental disorders like recurrent depressive disorder, generalised anxiety disorder, body dysmorphia, or eating disorders. These are the risks to public health. Finally, VLOPs’ providers should inquire whether the same sources lead to alterations of mental states – like new cravings (e.g., to check the app constantly), changes in emotions (e.g., excitement, sadness, or anxiety), cognitive impairments (like proneness to distraction or a lower ability to focus), or exposure to risk factors in the pathogenesis of mental disorders – to which the users cannot be presumed to have given free and informed, even if implicit, consent. Conversely, if the providers believe that such modifications do not require informed consent, they should explicitly state that in their risk assessments. These would be the risks to the fundamental right to mental integrity. The failure to engage with the literature indicating the possibility of such harms occurring should, in our view, be seen as a violation of art. 34 and 35 DSA.
Finally, if the answers to the questions from the former paragraph are, in any aspect, affirmative, the VLOP providers should demonstrate what risk mitigation measures they have taken. These will differ depending on the mental good and the source of the risk, and should be both tailored to the specific risk and accompanied by an explanation of why the providers believe they are “reasonable, proportionate and effective.” The measures aimed at giving users more control (increasing autonomy) could include tools for managing time spent within the service (like an always-visible timer indicating the number of minutes spent that day, or an option to create time restrictions) or tools enabling the users to easily choose what kind of content to see at a given moment (“now show me funny stuff, work-related stuff, friends’ life updates, news, etc.”). The measures aimed at raising awareness could include more explicit contract terms, like “we do not charge you money, but do use all the following tools to have you spend as much time looking at ads as possible,” or warnings whenever one opens the app, e.g., “some studies indicate that using our product might lead to mental health problems” (as the former U.S. Surgeon General suggested; Murthy, 2024). These are just examples; what is important is that the VLOPs’ providers demonstrate exactly which measures are employed to mitigate which risks.
9. The DSA’s limitations vis-à-vis mental harms
Of course, the DSA is not a silver bullet, and it will not, on its own, suffice to solve all the possible mental harm problems in the attention economy. First, the art. 34-35 requirements for risk assessment and mitigation apply only to Very Large Online Platforms and Search Engines. Possibly, once regulatory know-how is amassed after several years of the DSA’s application, specific requirements for smaller platforms might need to be considered.
Second, like every regulation grounded in the logic of risk mitigation, the DSA has all the limitations inherent in this regulatory technique. As Margot Kaminski (2023) points out, risk regulation neither addresses the broader societal question of whether a specific technology or business model should be allowed in the first place, nor is it well-suited for making whole the individuals who have already suffered damages. The former requires much deeper political decisions, while the latter might call for a rethinking of tort law rules, which have traditionally been very cautious in awarding damages for purely psychological harms (Pałka, 2024).
Third, the DSA does not create private causes of action, relying instead on public administrative enforcement. Such an enforcement structure might work well under certain conditions, but, as seen in the examples of consumer law (Micklitz & Saumier, 2018) or data protection law (Gentile & Lynskey, 2022), it faces its own limitations. Without a private litigation counterpart, ideally through some sort of collective action mechanism, the enforcement bodies (in this case, the European Commission) face a gargantuan task, requiring significant personnel resources. This task may become even more challenging given the broader geopolitical context.
Nevertheless, we believe that a wise approach to the DSA’s enforcement might be a solid first step in mitigating the risks of mental harms resulting from social media use. Unlike regulations that may be introduced in the future, this law is already in effect. We believe the time is right to utilise it.
10. Conclusion
As we have argued in the paper, there exists a robust scientific literature to support the claim that social media use poses a risk of several mental harms, including overuse (and sometimes addiction), symptoms of other mental disorders (including anxiety, depression, eating disorders), impairments to cognitive functions, and generally negative mental states. These risks might warrant passing additional regulations, as requested by the European Parliament. However, as new regulations may take years to come into effect, our goal in this article was to demonstrate how an existing law – the Digital Services Act – can be used to combat mental harms, or at least mitigate the risks of them materialising.
We have outlined how several types of mental harms result not from the inherent features of social media as such, nor solely from the conduct of other users, but rather from the design and algorithmic practices deployed by the VLOP providers to maximise profit. We have further discussed the possible meanings of the three mental goods protected by the DSA – mental well-being, public health and the fundamental right to mental integrity – and how they could be operationalised for the purposes of DSA compliance, oversight and enforcement. Our aim was to provide theoretical assistance to VLOPs’ providers engaging in systemic risk assessment and mitigation, as well as auditors, researchers, and enforcers, including the European Commission.
References
Legal acts:
The Charter: Charter of Fundamental Rights of the European Union [2010] OJ C83/389.
DSA: Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act) [2022] OJ L 277/1.
DCD: Directive (EU) 2019/770 of the European Parliament and of the Council of 20 May 2019 on certain aspects concerning contracts for the supply of digital content and digital services [2019] OJ L136/1.
Literature:
Alphabet. (2024). Form 10-Q (No. QUARTERLY REPORT PURSUANT TO SECTION 13 OR 15(d) OF THE SECURITIES EXCHANGE ACT OF 1934. For the quarterly period ended June 30, 2024). https://www.sec.gov/Archives/edgar/data/1652044/000165204424000053/goog-20240331.htm
Amnesty International. (2023). Driven into darkness: How TikTok’s ‘For You’ feed encourages self-harm and suicidal ideation. https://www.amnesty.org/en/documents/POL40/7350/2023/en/
Andreassen, C. S., Billieux, J., Griffiths, M. D., Kuss, D. J., Demetrovics, Z., Mazzoni, E., & Pallesen, S. (2016). The relationship between addictive use of social media and video games and symptoms of psychiatric disorders: A large-scale cross-sectional study. Psychology of Addictive Behaviors, 30(2), 252–262. https://doi.org/10.1037/adb0000160
Andreassen, C. S., Torsheim, T., Brunborg, G. S., & Pallesen, S. (2012). Development of a Facebook Addiction Scale. Psychological Reports, 110(2), 501–517. https://doi.org/10.2466/02.09.18.PR0.110.2.501-517
Arcelus, J., Mitchell, A. J., Wales, J., & Nielsen, S. (2011). Mortality rates in patients with anorexia nervosa and other eating disorders: A meta-analysis of 36 studies. Archives of General Psychiatry, 68(7), 724. https://doi.org/10.1001/archgenpsychiatry.2011.74
Azari, R., & Borisch, B. (2023). What is public health? A scoping review. Archives of Public Health, 81(1), 86. https://doi.org/10.1186/s13690-023-01091-6
Balkin, J. M. (2018). Fixing social media’s grand bargain (A Hoover Institution Essay No. 1814). Hoover Institution. https://www.hoover.org/research/fixing-social-medias-grand-bargain
Beknazar-Yuzbashev, G., Jiménez-Durán, R., & Stalinski, M. (2024). A model of harmful yet engaging content on social media. AEA Papers and Proceedings, 114, 678–683. https://doi.org/10.1257/pandp.20241004
Bernstein, G. (2023). Unwired: Gaining control over addictive technologies (1st edn). Cambridge University Press. https://doi.org/10.1017/9781009257954
Bhargava, V. R., & Velasquez, M. (2021). Ethics of the attention economy: The problem of social media addiction. Business Ethics Quarterly, 31(3), 321–359. https://doi.org/10.1017/beq.2020.32
Boyd, D. (2016). Untangling research and practice: What Facebook’s “emotional contagion” study teaches us. Research Ethics, 12(1), 4–13. https://doi.org/10.1177/1747016115583379
Brewer, J. (2021). Unwinding anxiety: New science shows how to break the cycles of worry and fear to heal your mind. Avery.
Broughton Micova, S., & Calef, A. (2023). Elements for effective systemic risk assessment under the DSA [CERRE report]. Centre on Regulation in Europe (CERRE). https://cerre.eu/wp-content/uploads/2023/07/CERRE-DSA-Systemic-Risk-Report.pdf
Broughton Micova, S., Schnurr, D., Calef, A., & Enstone, B. (2024). Cross-cutting issues for DSA systemic risk management: An agenda for cooperation [CERRE report]. Centre on Regulation in Europe (CERRE). https://cerre.eu/publications/cross-cutting-issues-for-dsa-systemic-risk-management-an-agenda-for-cooperation/
Bublitz, J. C. (2020). Why means matter legally relevant differences between direct and indirect interventions into other minds. In N. A. Vincent, T. Nadelhoffer, & A. McCay (Eds), Neurointerventions and the Law (1st edn, pp. 49–88). Oxford University Press. https://doi.org/10.1093/oso/9780190651145.003.0003
Bublitz, J. C., & Merkel, R. (2014). Crimes against minds: On mental manipulations, harms and a human right to mental self-determination. Criminal Law and Philosophy, 8(1), 51–77. https://doi.org/10.1007/s11572-012-9172-y
Bublitz, J.-C. (2020). The nascent right to psychological integrity and mental self-determination. In A. Von Arnauld, K. Von Der Decken, & M. Susi (Eds), The Cambridge handbook of new human rights (1st edn, pp. 387–403). Cambridge University Press. https://doi.org/10.1017/9781108676106.031
Buchanan, J. M. (1991). Opportunity cost. In J. Eatwell, M. Milgate, & P. Newman (Eds), The World of Economics (pp. 520–525). Palgrave Macmillan UK. https://doi.org/10.1007/978-1-349-21315-3_69
Busch, C., & Mak, V. (2021). Putting the Digital Services Act in context. Journal of European Consumer and Market Law, 10(3), 109–114.
Cauffman, C., & Goanta, C. (2021). A new order: The Digital Services Act and consumer protection. European Journal of Risk Regulation, 12(4), 758–774. https://doi.org/10.1017/err.2021.8
Chen, F. F., Jing, Y., Hayes, A., & Lee, J. M. (2013). Two concepts or two approaches? A bifactor analysis of psychological and subjective well-being. Journal of Happiness Studies, 14(3), 1033–1068. https://doi.org/10.1007/s10902-012-9367-x
Childress, J. F., Faden, R. R., Gaare, R. D., Gostin, L. O., Kahn, J., Bonnie, R. J., Kass, N. E., Mastroianni, A. C., Moreno, J. D., & Nieburg, P. (2002). Public health ethics: Mapping the terrain. Journal of Law, Medicine & Ethics, 30(2), 170–178. https://doi.org/10.1111/j.1748-720X.2002.tb00384.x
Chiossi, F., Haliburton, L., Ou, C., Butz, A. M., & Schmidt, A. (2023). Short-form videos degrade our capacity to retain intentions: Effect of context switching on prospective memory. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–15. https://doi.org/10.1145/3544548.3580778
De Franceschi, A. (2022). Personal data as counter-performance. In R. Senigaglia, C. Irti, & A. Bernes (Eds), Privacy and data protection in software services (pp. 59–71). Springer Singapore. https://doi.org/10.1007/978-981-16-3049-1_6
De Gregorio, G., & Dunn, P. (2022). The European risk-based approaches: Connecting constitutional dots in the digital age. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4071437
Devlin, H. (2024, January 2). Revealed: Almost half of British teens feel addicted to social media, study says. The Guardian. https://www.theguardian.com/lifeandstyle/2024/jan/02/social-media-addiction-teenagers-study-phones
Diener, E., Oishi, S., & Tay, L. (2018). Advances in subjective well-being research. Nature Human Behaviour, 2(4), 253–260. https://doi.org/10.1038/s41562-018-0307-6
Domoff, S. E., Borgen, A. L., Rye, B., Barajas, G. R., & Avery, K. (2022). Problematic digital media use and addiction. In J. Nesi, E. H. Telzer, & M. J. Prinstein (Eds), Handbook of Adolescent Digital Media Use and Mental Health (1st edn, pp. 300–316). Cambridge University Press. https://doi.org/10.1017/9781108976237.016
Douglas, T., & Forsberg, L. (2021). Three rationales for a legal right to mental integrity. In S. Ligthart, D. Van Toor, T. Kooijmans, T. Douglas, & G. Meynen (Eds), Neurolaw (pp. 179–201). Springer International Publishing. https://doi.org/10.1007/978-3-030-69277-3_8
Esposito, F., & Maciel Cathoud Ferreira, T. (2024). Addictive design as an unfair commercial practice: The case of hyper-engaging dark patterns. European Journal of Risk Regulation, 15(4), 999–1016. https://doi.org/10.1017/err.2024.8
European Commission. (2024a). Commission opens formal proceedings against Facebook and Instagram under the Digital Services Act. https://ec.europa.eu/commission/presscorner/detail/en/ip_24_2373
European Commission. (2024b). Commission opens formal proceedings against Meta under the Digital Services Act related to the protection of minors on Facebook and Instagram. https://ec.europa.eu/commission/presscorner/detail/en/ip_24_2664
European Commission. (2024c). Commission opens formal proceedings against TikTok under the Digital Services Act. https://ec.europa.eu/commission/presscorner/detail/en/ip_24_926
European Commission. (2024d). Commission opens proceedings against TikTok under the DSA regarding the launch of TikTok Lite in France and Spain, and communicates its intention to suspend the reward programme in the EU. https://ec.europa.eu/commission/presscorner/detail/en/ip_24_2227
European Commission. (2024e). Supervision of the designated very large online platforms and search engines under DSA. https://digital-strategy.ec.europa.eu/en/policies/list-designated-vlops-and-vloses
European Parliament. (2023). European Parliament resolution of 12 December 2023 on addictive design of online services and consumer protection in the EU single market. https://www.europarl.europa.eu/doceo/document/TA-9-2023-0459_EN.html
Fardouly, J., Willburger, B. K., & Vartanian, L. R. (2018). Instagram use and young women’s body image concerns and self-objectification: Testing mediational pathways. New Media & Society, 20(4), 1380–1395. https://doi.org/10.1177/1461444817694499
Ferster, C. B., & Skinner, B. F. (1957). Schedules of reinforcement. Appleton-Century-Crofts. https://doi.org/10.1037/10627-000
Firth, J., Torous, J., Stubbs, B., Firth, J. A., Steiner, G. Z., Smith, L., Alvarez‐Jimenez, M., Gleeson, J., Vancampfort, D., Armitage, C. J., & Sarris, J. (2019). The “online brain”: How the Internet may be changing our cognition. World Psychiatry, 18(2), 119–129. https://doi.org/10.1002/wps.20617
Froese, A. D., Carpenter, C. N., Inman, D. A., Schooley, J. R., Barnes, R. B., Brecht, P. W., & Chacon, J. D. (2012). Effects of classroom cell phone use on expected and actual learning. College Student Journal, 46(2), 323–332.
Garg, N., & Lerner, J. S. (2013). Sadness and consumption. Journal of Consumer Psychology, 23(1), 106–113. https://doi.org/10.1016/j.jcps.2012.05.009
Gentile, G., & Lynskey, O. (2022). Deficient by design? The transnational enforcement of the GDPR. International and Comparative Law Quarterly, 71(4), 799–830. https://doi.org/10.1017/S0020589322000355
Griffiths, M. (2005). A ‘components’ model of addiction within a biopsychosocial framework. Journal of Substance Use, 10(4), 191–197. https://doi.org/10.1080/14659890500114359
Griffiths, M. D. (2018). Adolescent social networking: How do social media operators facilitate habitual use? Education and Health, 36(3), 66–69.
Hacker, P. (2019). Regulating the economic impact of data as counter-performance: From the illegality doctrine to the unfair contract terms directive. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3391772
Hadar, A., Hadas, I., Lazarovits, A., Alyagon, U., Eliraz, D., & Zangen, A. (2017). Answering the missed call: Initial exploration of cognitive and electrophysiological changes associated with smartphone use and abuse. PLOS ONE, 12(7), e0180094. https://doi.org/10.1371/journal.pone.0180094
Haidt, J. (2024). The anxious generation: How the great rewiring of childhood is causing an epidemic of mental illness. Penguin Press.
He, Q., Turel, O., & Bechara, A. (2017). Brain anatomy alterations associated with Social Networking Site (SNS) addiction. Scientific Reports, 7(1), 45064. https://doi.org/10.1038/srep45064
Henn, C. M., Hill, C., & Jorgensen, L. I. (2016). An investigation into the factor structure of the Ryff scales of psychological well-being. SA Journal of Industrial Psychology, 42(1), 12 pages. https://doi.org/10.4102/sajip.v42i1.1275
Herrman, H., & Jané-Llopis, E. (2005). Mental health promotion in public health. Promotion & Education, 12(2_suppl), 42–47. https://doi.org/10.1177/10253823050120020107
Holma, K. M., Melartin, T. K., Haukka, J., Holma, I. A. K., Sokero, T. P., & Isometsä, E. T. (2010). Incidence and predictors of suicide attempts in DSM–IV Major Depressive Disorder: A five-year prospective study. American Journal of Psychiatry, 167(7), 801–808. https://doi.org/10.1176/appi.ajp.2010.09050627
Hunt, M. G., Marx, R., Lipson, C., & Young, J. (2018). No more FOMO: Limiting social media decreases loneliness and depression. Journal of Social and Clinical Psychology, 37(10), 751–768. https://doi.org/10.1521/jscp.2018.37.10.751
Husovec, M. (2024). Principles of the Digital Services Act. Oxford University Press.
Hwang, T. (2020). Subprime attention crisis: Advertising and the time bomb at the heart of the internet. FSG Originals.
Ienca, M., & Andorno, R. (2017). Towards new human rights in the age of neuroscience and neurotechnology. Life Sciences, Society and Policy, 13(1), 5. https://doi.org/10.1186/s40504-017-0050-1
Istace, T. (2023). Protecting the mental realm: What does human rights law bring to the table? Netherlands Quarterly of Human Rights, 41(4), 214–234. https://doi.org/10.1177/09240519231211823
Kaminski, M. E. (2023). Regulating the risks of AI. Boston University Law Review, 103, 1347–1411.
Keles, B., McCrae, N., & Grealish, A. (2020). A systematic review: The influence of social media on depression, anxiety and psychological distress in adolescents. International Journal of Adolescence and Youth, 25(1), 79–93. https://doi.org/10.1080/02673843.2019.1590851
Kemp, E., Chapa, S., & Kopp, S. W. (2013). Regulating emotions in advertising: Examining the effects of sadness and anxiety on hedonic product advertisements. Journal of Current Issues & Research in Advertising, 34(1), 135–150. https://doi.org/10.1080/10641734.2013.754719
Keyes, C. L. M. (2002). The mental health continuum: From languishing to flourishing in life. Journal of Health and Social Behavior, 43(2), 207–222. https://doi.org/10.2307/3090197
Koessmeier, C., & Büttner, O. B. (2021). Why are we distracted by social media? Distraction situations and strategies, reasons for distraction, and individual differences. Frontiers in Psychology, 12, 711416. https://doi.org/10.3389/fpsyg.2021.711416
Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788–8790. https://doi.org/10.1073/pnas.1320040111
Lambert, J., Barnstable, G., Minter, E., Cooper, J., & McEwan, D. (2022). Taking a one-week break from social media improves well-being, depression, and anxiety: A randomized controlled trial. Cyberpsychology, Behavior, and Social Networking, 25(5), 287–293. https://doi.org/10.1089/cyber.2021.0324
Langvardt, K. (2019). Regulating habit-forming technology. Fordham Law Review, 88(1), 129–185.
Lara, R. S., & Bokoch, R. (2021). Cognitive functioning and social media: Has technology changed us? Acta Psychologica, 221, 103429. https://doi.org/10.1016/j.actpsy.2021.103429
Lavazza, A. (2018). Freedom of thought and mental integrity: The moral requirements for any neural prosthesis. Frontiers in Neuroscience, 12, 82. https://doi.org/10.3389/fnins.2018.00082
Lavazza, A., & Giorgi, R. (2023). Philosophical foundation of the right to mental integrity in the age of neurotechnologies. Neuroethics, 16(1), 10. https://doi.org/10.1007/s12152-023-09517-2
Luguri, J., & Strahilevitz, L. J. (2021). Shining a light on dark patterns. Journal of Legal Analysis, 13(1), 43–109. https://doi.org/10.1093/jla/laaa006
Mark, G. (2023). Attention span: A groundbreaking way to restore balance, happiness and productivity. Hanover Square Press.
McAneney, H., Tully, M. A., Hunter, R. F., Kouvonen, A., Veal, P., Stevenson, M., & Kee, F. (2015). Individual factors and perceived community characteristics in relation to mental health and mental well-being. BMC Public Health, 15(1), 1237. https://doi.org/10.1186/s12889-015-2590-8
McLean, S. A., Paxton, S. J., Wertheim, E. H., & Masters, J. (2015). Selfies and social media: Relationships between self-image editing and photo-investment and body dissatisfaction and dietary restraint. Journal of Eating Disorders, 3(S1), O21. https://doi.org/10.1186/2050-2974-3-S1-O21
Memon, A., Sharma, S., Mohite, S., & Jain, S. (2018). The role of online social networking on deliberate self-harm and suicidality in adolescents: A systematized review of literature. Indian Journal of Psychiatry, 60(4), 384. https://doi.org/10.4103/psychiatry.IndianJPsychiatry_414_17
Meta. (2024). Form 10-Q; Quarterly report pursuant to section 13 or 15 (d) of the securities exchange act of 1934. For the quarterly period ended June 30, 2024. https://d18rn0p25nwr6d.cloudfront.net/CIK-0001326801/861663ba-3a4b-4e3a-bdba-5ad0b7e6b2e3.pdf
Metzger, A. (2017). Data as counter-performance: What rights and duties do parties have? Journal of Intellectual Property, Information Technology and Electronic Commerce Law, 8(1), 2–8.
Meyer, R. (2014, June 28). Everything we know about Facebook’s secret mood-manipulation experiment. The Atlantic. https://www.theatlantic.com/technology/archive/2014/06/everything-we-know-about-facebooks-secret-mood-manipulation-experiment/373648/
Micklitz, H.-W., & Saumier, G. (2018). Enforcement and effectiveness of consumer law. In H.-W. Micklitz & G. Saumier (Eds.), Enforcement and effectiveness of consumer law (Vol. 27, pp. 3–45). Springer International Publishing. https://doi.org/10.1007/978-3-319-78431-1_1
Mik, E. (2016). The erosion of autonomy in online consumer transactions. Law, Innovation and Technology, 8(1), 1–38. https://doi.org/10.1080/17579961.2016.1161893
Montag, C., Yang, H., & Elhai, J. D. (2021). On the psychology of TikTok use: A first glimpse from empirical findings. Frontiers in Public Health, 9, 641673. https://doi.org/10.3389/fpubh.2021.641673
Morgans, J. (2017, May 17). The secret ways social media is built for addiction. Vice. https://www.vice.com/en/article/the-secret-ways-social-media-is-built-for-addiction/
Murthy, V. H. (2024). Surgeon General: Why I’m calling for a warning label on social media platforms. The New York Times. https://www.nytimes.com/2024/06/17/opinion/social-media-health-warning.html
Newman, J. M. (2020). Antitrust in attention markets: Objections and responses. Santa Clara Law Review, 59(3), 743–769.
Pałka, P. (2021). The world of fifty (interoperable) Facebooks. Seton Hall Law Review, 51(4). https://scholarship.shu.edu/shlr/vol51/iss4/5
Pałka, P. (2023). Harmed while anonymous: Beyond the personal/non-personal distinction in data governance. Technology and Regulation, 2023, 22–34. https://doi.org/10.71265/pbkj0276
Pałka, P. (2024). AI, consumers and psychological harm. In L. A. DiMatteo, C. Poncibó, & G. Howells (Eds.), The Cambridge handbook of AI and consumer law (1st ed., pp. 163–174). Cambridge University Press. https://doi.org/10.1017/9781009483599.018
Pałka, P. (2025). Problems of technological management: On automated extraction of mental goods. In R. Brownsword & L. A. DiMatteo (Eds.), The Cambridge handbook of the governance of technology. Cambridge University Press.
Petry, N. M., Rehbein, F., Gentile, D. A., Lemmens, J. S., Rumpf, H., Mößle, T., Bischof, G., Tao, R., Fung, D. S. S., Borges, G., Auriacombe, M., González Ibáñez, A., Tam, P., & O’Brien, C. P. (2014). An international consensus for assessing internet gaming disorder using the new DSM-5 approach. Addiction, 109(9), 1399–1406. https://doi.org/10.1111/add.12457
Pielot, M., Church, K., & De Oliveira, R. (2014). An in-situ study of mobile phone notifications. Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices & Services, 233–242. https://doi.org/10.1145/2628363.2628364
Pinterest. (2024). Pinterest investor relations, 10-Q.
Prince, M., Patel, V., Saxena, S., Maj, M., Maselko, J., Phillips, M. R., & Rahman, A. (2007). No health without mental health. The Lancet, 370(9590), 859–877. https://doi.org/10.1016/S0140-6736(07)61238-0
Ronkainen, T. (2023). Principles of increasing user engagement and habit formation in social network platforms: An exploratory literature review [Master’s thesis, Aalto University]. Aaltodoc. https://aaltodoc.aalto.fi/handle/123456789/122599
Rosenquist, J. N., Morton, F. M. S., & Weinstein, S. N. (2022). Addictive technology and its implications for antitrust enforcement. North Carolina Law Review, 100, 431–484.
Rowland, M. (2022). Online visual self-presentation: Augmented reality face filters, selfie-editing behaviors, and body image disorder. Journal of Research in Gender Studies, 12(1), 99. https://doi.org/10.22381/JRGS12120227
Ryan, R. M., & Deci, E. L. (2001). On happiness and human potentials: A review of research on hedonic and eudaimonic well-being. Annual Review of Psychology, 52(1), 141–166. https://doi.org/10.1146/annurev.psych.52.1.141
Ryff, C. D. (1989). Happiness is everything, or is it? Explorations on the meaning of psychological well-being. Journal of Personality and Social Psychology, 57(6), 1069–1081. https://doi.org/10.1037/0022-3514.57.6.1069
Ryff, C. D. (1995). Psychological well-being in adult life. Current Directions in Psychological Science, 4(4), 99–104. https://doi.org/10.1111/1467-8721.ep10772395
Ryff, C. D., & Keyes, C. L. M. (1995). The structure of psychological well-being revisited. Journal of Personality and Social Psychology, 69(4), 719–727. https://doi.org/10.1037/0022-3514.69.4.719
Rykard, K. S. (2020). Digital distractions: Using action research to explore students’ behaviors, motivations, and perceptions of cyberslacking in a suburban high school [Doctoral dissertation, University of South Carolina]. https://www.proquest.com/docview/2428561002/abstract/16381533C23B4C04PQ/1
Santarossa, S., & Woodruff, S. J. (2017). #SocialMedia: Exploring the relationship of social networking sites on body image, self-esteem, and eating disorders. Social Media + Society, 3(2), 2056305117704407. https://doi.org/10.1177/2056305117704407
Selinger, E., & Hartzog, W. (2016). Facebook’s emotional contagion study and the ethical problem of co-opted identity in mediated environments where users lack control. Research Ethics, 12(1), 35–43. https://doi.org/10.1177/1747016115579531
Shannon, H., Bush, K., Villeneuve, P. J., Hellemans, K. G., & Guimond, S. (2022). Problematic social media use in adolescents and young adults: Systematic review and meta-analysis. JMIR Mental Health, 9(4), e33450. https://doi.org/10.2196/33450
Sharifian, N., & Zahodne, L. B. (2020). Social media bytes: Daily associations between social media use and everyday memory failures across the adult life span. The Journals of Gerontology: Series B, 75(3), 540–548. https://doi.org/10.1093/geronb/gbz005
Shensa, A., Escobar-Viera, C. G., Sidani, J. E., Bowman, N. D., Marshal, M. P., & Primack, B. A. (2017). Problematic social media use and depressive symptoms among US young adults: A nationally-representative study. Social Science & Medicine, 182, 150–157. https://doi.org/10.1016/j.socscimed.2017.03.061
Sherman, L. E., Payton, A. A., Hernandez, L. M., Greenfield, P. M., & Dapretto, M. (2016). The power of the like in adolescence: Effects of peer influence on neural and behavioral responses to social media. Psychological Science, 27(7), 1027–1035. https://doi.org/10.1177/0956797616645673
Söderlund, K., Engström, E., Haresamudram, K., Larsson, S., & Strimling, P. (2024). Regulating high-reach AI: On transparency directions in the Digital Services Act. Internet Policy Review, 13(1). https://doi.org/10.14763/2024.1.1746
Statista. (2025). Daily social media usage worldwide. https://www.statista.com/statistics/433871/daily-social-media-usage-worldwide
Surgeon General. (2023). Social media and youth mental health [The US Surgeon General’s Advisory]. https://www.hhs.gov/sites/default/files/sg-youth-mental-health-social-media-advisory.pdf
Tamašiūnas, L. (2024). Staying connected under the sheets: US gadget use in bed research. NordVPN. https://nordvpn.com/pl/blog/gadget-use-in-bed-research-us/
ThinkNow. (2019). Social media influencers and privacy concerns. ThinkNow.
Tommasi, F., Ceschi, A., Du Plooy, H., Michailidis, E., & Sartori, R. (2023). The influence of workday experience on smartphones uses in commuting from work to home. Transportation Research Part F: Traffic Psychology and Behaviour, 97, 268–277. https://doi.org/10.1016/j.trf.2023.07.016
Tortorici, D. (2020). Infinite scroll: Life under Instagram. The Guardian. https://www.theguardian.com/technology/2020/jan/31/infinite-scroll-life-under-instagram
Trzaskowski, J. (2022). Data-driven value extraction and human well-being under EU law. Electronic Markets, 32(2), 447–458. https://doi.org/10.1007/s12525-022-00528-0
Turel, O., & Serenko, A. (2012). The benefits and dangers of enjoyment with social networking websites. European Journal of Information Systems, 21(5), 512–528. https://doi.org/10.1057/ejis.2012.1
Turillazzi, A., Taddeo, M., Floridi, L., & Casolari, F. (2023). The digital services act: An analysis of its ethical, legal, and social implications. Law, Innovation and Technology, 15(1), 83–106. https://doi.org/10.1080/17579961.2023.2184136
Twenge, J. M. (2023). Yes, we do know social media isn’t safe for kids [Substack newsletter]. After Babel. https://jonathanhaidt.substack.com/p/social-media-not-safe-kids
Twenge, J. M., & Hamilton, J. L. (2022). Linear correlation is insufficient as the sole measure of associations: The case of technology use and mental health. Acta Psychologica, 229, 103696. https://doi.org/10.1016/j.actpsy.2022.103696
Udupa, N. S., Twenge, J. M., McAllister, C., & Joiner, T. E. (2023). Increases in poor mental health, mental distress, and depression symptoms among US adults, 1993–2020. Journal of Mood and Anxiety Disorders, 2, 100013. https://doi.org/10.1016/j.xjmad.2023.100013
Van Den Eijnden, R. J. J. M., Lemmens, J. S., & Valkenburg, P. M. (2016). The social media disorder scale. Computers in Human Behavior, 61, 478–487. https://doi.org/10.1016/j.chb.2016.03.038
Van Dierendonck, D., Díaz, D., Rodríguez-Carvajal, R., Blanco, A., & Moreno-Jiménez, B. (2008). Ryff’s six-factor model of psychological well-being, a Spanish exploration. Social Indicators Research, 87(3), 473. https://doi.org/10.1007/s11205-007-9174-7
Van Hoboken, J., Buri, I., Quintais, J., Fahy, R., Appelman, N., & Straub, M. (2023). Putting the DSA into practice: Enforcement, access to justice, and global implications. https://doi.org/10.17176/20230208-093135-0
Vanden Abeele, M. M. P., Hendrickson, A. T., Pollmann, M. M. H., & Ling, R. (2019). Phubbing behavior in conversations and its relation to perceived conversation intimacy and distraction: An exploratory observation study. Computers in Human Behavior, 100, 35–47. https://doi.org/10.1016/j.chb.2019.06.004
Vannucci, A., Flannery, K. M., & Ohannessian, C. M. (2017). Social media use and anxiety in emerging adults. Journal of Affective Disorders, 207, 163–166. https://doi.org/10.1016/j.jad.2016.08.040
Vogels, E., Gelles-Watnick, R., & Massarat, N. (2022). Teens, social media and technology 2022. Pew Research Center. https://www.pewresearch.org/wp-content/uploads/sites/20/2022/08/PI_2022.08.10_Teens-and-Tech_FINAL.pdf
Wahlbeck, K. (2015). Public mental health: The time is ripe for translation of evidence into practice. World Psychiatry, 14(1), 36–42. https://doi.org/10.1002/wps.20178
Wang, X., Xie, X., Wang, Y., Wang, P., & Lei, L. (2017). Partner phubbing and depression among married Chinese adults: The roles of relationship satisfaction and relationship length. Personality and Individual Differences, 110, 12–17. https://doi.org/10.1016/j.paid.2017.01.014
Ward, A. F., Duke, K., Gneezy, A., & Bos, M. W. (2017). Brain drain: The mere presence of one’s own smartphone reduces available cognitive capacity. Journal of the Association for Consumer Research, 2(2), 140–154. https://doi.org/10.1086/691462
Weich, S., Brugha, T., King, M., McManus, S., Bebbington, P., Jenkins, R., Cooper, C., McBride, O., & Stewart-Brown, S. (2011). Mental well-being and mental illness: Findings from the adult psychiatric morbidity survey for England 2007. British Journal of Psychiatry, 199(1), 23–28. https://doi.org/10.1192/bjp.bp.111.091496
Wells, G., Horwitz, J., & Seetharaman, D. (2021). Facebook knows Instagram is toxic for teen girls, company documents show. Wall Street Journal. https://www.wsj.com/articles/facebook-knows-instagram-is-toxic-for-teen-girls-company-documents-show-11631620739
Williams, J. (2018). Stand out of our light: Freedom and resistance in the attention economy. Cambridge University Press.
Wolfers, L. N., & Utz, S. (2022). Social media use, stress, and coping. Current Opinion in Psychology, 45, 101305. https://doi.org/10.1016/j.copsyc.2022.101305
Woods, H. C., & Scott, H. (2016). #Sleepyteens: Social media use in adolescence is associated with poor sleep quality, anxiety, depression and low self‐esteem. Journal of Adolescence, 51(1), 41–49. https://doi.org/10.1016/j.adolescence.2016.05.008
World Health Organization. (2021). Health promotion glossary of terms 2021. https://iris.who.int/bitstream/handle/10665/350161/9789240038349-eng.pdf?sequence=1
World Health Organization. (2022a). ICD-11: International Classification of Diseases. https://icd.who.int/
World Health Organization. (2022b, June 17). Mental health. https://www.who.int/health-topics/mental-health#tab=tab_1
Wu, T. (2016). The attention merchants: The epic scramble to get inside our heads. Alfred A. Knopf. https://scholarship.law.columbia.edu/books/64
Wu, T. (2019). Blind spot: The attention economy and the law. Antitrust Law Journal, 82(3), 771–806. https://scholarship.law.columbia.edu/faculty_scholarship/2029
Yeung, K. (2017). ‘Hypernudge’: Big Data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136. https://doi.org/10.1080/1369118X.2016.1186713
Yu, L., Cao, X., Liu, Z., & Wang, J. (2018). Excessive social media use at work: Exploring the effects of social media overload on job performance. Information Technology & People, 31(6), 1091–1112. https://doi.org/10.1108/ITP-10-2016-0237
Zakon, A. (2019). Optimized for addiction: Extending product liability concepts to defectively designed social media algorithms and overcoming the Communications Decency Act. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3682048