TikTok ban in the United States: A necessary precaution or a misstep?
Introduction
The debate over TikTok’s place in American society now extends far beyond data privacy alone, morphing into a sweeping inquiry into data sovereignty, national security, and broader questions of digital governance. With some 170 million users in the United States, the platform stands at the crossroads of technological ingenuity and geopolitical unease (Dedezade, 2025). Indeed, in a digital landscape shaped as much by trending hashtags as by trade tariffs, TikTok has become an emblem of the tensions between open internet ideals and national security imperatives. Most recently, federal legislation signed by President Biden requiring ByteDance – TikTok’s China-based parent firm – to divest its US operations underscores the enduring friction between efforts to safeguard critical data and the desire to maintain accessible digital platforms. This friction materialised visibly on January 19, 2025, when the divestiture deadline took effect and TikTok became inaccessible to many American users for roughly 12 hours, leaving existing users unable to load videos and new users unable to install the app at all. Against this backdrop, the question emerges: are such measures, including outright bans, prudent policy instruments, or overreactions to deeper systemic anxieties? This essay scrutinises the justifications for these crackdowns and explores whether a more nuanced approach might reconcile legitimate security concerns with the democratic values underpinning an open internet.
The geopolitical context of data sovereignty
At the heart of this controversy lies a collision of regulatory paradigms: the United States’ historically laissez-faire approach to digital markets and China’s state-centric model, where nominally private firms often function under the tacit – or explicit – auspices of the state. China’s cybersecurity laws empower its government to compel data access, heightening concerns that TikTok could act as a conduit for state surveillance or influence operations. Yet the evidence marshalled against TikTok remains decidedly mixed. BuzzFeed reporting (Baker-White, 2022) indicates that ByteDance employees in China accessed US user data, even as TikTok’s leadership categorically denies systemic data transfers to Beijing (Chew & TikTok Inc., 2023). Meanwhile, University of Toronto researchers found no outright proof of data being handed over to Chinese authorities, though they conceded that their investigative reach stopped at TikTok’s own servers (Silberling, 2023). Such epistemic ambiguities epitomise the broader problem of assessing security threats in a world bereft of robust, enforceable global data governance. Without transparent oversight mechanisms or consistent international standards, one party’s “smoking gun” remains another’s “overheated conjecture”, underscoring the inherent challenges of adjudicating risk in a digitally interconnected yet politically fragmented landscape.
Algorithmic influence: a question of control
Beyond concerns about data privacy, worries over TikTok’s potential for subtle ideological manipulation raise deeper questions about platform influence and accountability. National security officials fear that the app’s recommendation algorithm – an opaque, proprietary system – could quietly promote certain narratives or suppress opposing viewpoints. Such manipulation, if real, might leverage TikTok’s cultural reach to guide public discourse, overshadowing some perspectives in favour of others. Yet these anxieties raise a larger question: how unique is TikTok’s capacity for shaping user perceptions when nearly every major platform – Facebook, YouTube, X – relies on similarly inscrutable algorithms that can amplify misinformation and fuel political polarisation (Internet Governance Project, 2023)? Addressing TikTok in isolation risks overlooking broader structural issues, as many social media ecosystems use comparable “black box” curation methods that blur the boundaries between harmless personalisation and manipulative steering. In light of this, a more fundamental debate emerges: how should societies manage the power of algorithms? Striking a balance that preserves user autonomy, upholds democratic values, and safeguards national security requires more than piecemeal regulation of individual platforms; it demands a critical rethinking of our broader digital governance frameworks.
Evaluating policy responses: bans and beyond
The policy options under consideration – ranging from a forced divestiture to a complete ban – each require close examination through both legal and practical lenses (Congressional Research Service, 2023). A ban would likely encounter strong First Amendment challenges, given the courts’ historical reluctance to uphold government restrictions on information access, even in the name of national security. Moreover, enforcing a nationwide ban poses logistical obstacles, as users can turn to virtual private networks (VPNs) to circumvent any blocks. The Washington Post reports a marked uptick in VPN usage, though its own tests on various iPhones and Android devices managed only fleeting access to TikTok, and NBC News similarly failed to restore functionality through two VPN trials on January 20 (Fowler et al., 2025). Even so, some users claim success by employing a web-browser VPN to appear as though they are accessing the app from another country – an indication that technical workarounds persist, albeit inconsistently (Dorn, 2025).
Furthermore, a forced divestiture may appear more measured than a blanket ban, yet it carries the risk of retaliatory actions from China, potentially escalating an already delicate US-China trade standoff. Beyond bilateral tensions, this strategy raises concerns about precedent: Would similar constraints be imposed on, say, European or Southeast Asian tech firms under analogous circumstances? In a deeply interconnected global economy, such steps could be construed as veering into protectionism or even de facto economic warfare, where each nation polices foreign investments in the name of security. Moreover, this approach could undermine broader principles of free trade and undercut the United States’ standing as a predictable investment environment. The policy dilemma, therefore, is not simply about blocking or forcing the sale of a particular app – it touches on whether (and how) democracies can address legitimate security fears without recasting global commerce as an arms race in which every foreign entity is viewed with suspicion. Finding this equilibrium becomes a litmus test for whether national security imperatives can coexist with the ideals of market openness and international collaboration.
Rethinking the broader problem: data governance
The singular focus on TikTok risks obscuring a more systemic issue: the inadequacy of US data protection frameworks. In contrast, the European Union (EU) enforces robust standards through the General Data Protection Regulation (GDPR), alongside newer regulations like the Digital Services Act (DSA) and Digital Markets Act (DMA), creating a cohesive system that governs data collection, processing, and platform accountability. By comparison, the United States has no overarching federal privacy legislation, leaving it largely reliant on sector-specific rules and patchwork state laws. This regulatory gap has allowed both domestic and foreign tech companies to engage in relatively unchecked data harvesting, creating vulnerabilities that extend well beyond TikTok’s ownership structure. Proposals such as the American Data Privacy and Protection Act aim to introduce unified standards for data handling (House of Representatives, 2021), potentially aligning US practices more closely with the EU’s comprehensive approach. Enacting such measures would mitigate risks across the broader ecosystem of data-driven platforms – without resorting to digital protectionism or infringing on free expression – and could also foster greater transatlantic cooperation on global digital governance.
Theoretical implications: sovereignty vs. openness
The TikTok debate exemplifies an ongoing clash between two grand visions of the internet: one in which states assert tight jurisdictional control over data and platforms (“digital sovereignty”) and another rooted in the liberal ideal of a globally interconnected “cyberspace” largely unfettered by national boundaries. Scholars like Lawrence Lessig (2006) and Manuel Castells (2009) have long highlighted how technology does not evolve in a vacuum; rather, it is shaped by institutional and regulatory frameworks as much as by code or market forces. If the United States imposes strict bans or divestiture requirements, it risks aligning itself with a “splinternet” model (Malcomson, 2016) reminiscent of China’s “Great Firewall” (Ming, 2017; Roberts, 2018), thereby undermining its professed commitment to digital openness. Yet a laissez-faire approach, by leaving regulation to market mechanisms, often fails to safeguard users against the very power asymmetries and data vulnerabilities that gave rise to the current debate.
Navigating between these extremes demands a more nuanced theoretical stance, one that acknowledges the co-constitution of technology and policy. The concept of a “middle ground” is not merely about adopting intermediate regulatory measures – it reflects a broader recognition that, in a global digital arena, states must balance national interests with transnational norms of transparency and accountability. Mechanisms such as algorithmic audits, reciprocal data localisation agreements, and robust disclosure mandates for all platforms – regardless of country of origin – represent attempts to fuse sovereignty concerns with the liberal principle of an open internet (Hajdu & Woolley, 2021). In doing so, such policies do more than address a single platform’s perceived threats; they challenge the notion that technological architecture and global governance can be effectively separated. By pursuing a framework of shared standards and reciprocal enforcement, policymakers inch closer to an internet that remains expansive and innovative while avoiding the pitfalls of unbridled market power or insular nationalist firewalls.
Conclusion: toward a nuanced digital policy
TikTok’s future in the United States illustrates the challenges of governing digital platforms within a global system where national security, economic interests, and individual freedoms interact unpredictably. While bans and divestitures may seem like swift solutions, they risk sidestepping the deeper structural issues – ranging from algorithmic transparency to cross-border data flows – that truly define modern digital governance. For European policymakers and stakeholders, this debate highlights the importance of integrating US approaches with Europe’s more robust regulatory frameworks, such as the GDPR, DSA, and DMA. By aligning a US drive for national security with established EU commitments to data sovereignty and platform accountability, both parties could forge a united front against unchecked corporate power and escalating digital protectionism.
In practical terms, this might involve transatlantic dialogues that encourage algorithmic audits, mutual data localisation agreements, and enforcement mechanisms ensuring that platforms – regardless of their country of origin – remain transparent and accountable. Such cooperation goes beyond safeguarding individual user rights; it preserves an open internet that respects democratic values and fosters economic innovation. Ultimately, harmonising US and EU strategies would reduce legal fragmentation and set a powerful global precedent, demonstrating that security, market openness, and the protection of civil liberties need not be at odds in the digital era.