TikTok in the Crosshairs: The Fine Line Between Regulation and Overreach

December 9, 2024

TikTok, the popular Chinese-owned video-sharing platform, has come under increasingly intense fire. The European Commission has sent the company multiple “urgent” requests for information about its alleged role in facilitating Russian interference in the Romanian presidential elections, foreshadowing potential legal action under the region’s Digital Services Act.

Meanwhile, on December 6, a US federal appeals court rejected TikTok’s bid to overturn the law that requires its parent company, ByteDance, to sell the platform to an American owner by January 19, 2025, or face a nationwide ban.

While TikTok’s outsized influence on public opinion and democratic processes warrants consternation and scrutiny, policymakers should resist reaching for blunt tools without weighing the potential human rights harms of those actions and considering viable, less restrictive alternatives.

The unfolding situation in Romania suggests that TikTok was used to organize and deploy an aggressive covert propaganda campaign in favor of the far-right populist candidate, Calin Georgescu. Georgescu, who is notoriously pro-Kremlin and anti-West, was virtually unknown in the political arena before he amassed a plurality (23%) of the vote in the first round of the presidential election. Investigations following this surprise result reportedly revealed that Russia had orchestrated a concerted social media campaign in Georgescu’s favor. The likelihood that this campaign sealed his victory led the country’s Constitutional Court to annul the election results altogether.

On the other side of the Atlantic, the US law forcing TikTok’s divestment from ByteDance was motivated by credible allegations that the app’s algorithms could be manipulated by the Chinese Communist Party to disseminate propaganda and skew public opinion contrary to US national security interests. In denying TikTok’s petition to overturn the law, the US Court of Appeals for the District of Columbia Circuit affirmed that the law is a legitimate measure to protect national security.

TikTok is likely to appeal the decision to the US Supreme Court, which could stay the lower court’s ruling and forestall a forced sale by January 19; after that date, the newly elected President Trump could engineer a way to undermine the law. But even with these limited escape routes, TikTok must be feeling considerable pressure.

Given these unfolding situations, it is worth continuing to probe whether these actions constitute political overreach and censorship in contravention of constitutional and human rights principles. The courts in both countries have judged that TikTok poses national security and election integrity threats serious enough to warrant decisive action. But they should also consider whether these actions are necessary and proportionate under Article 19 of the International Covenant on Civil and Political Rights (ICCPR), which requires demonstrating that no alternative, less restrictive means would achieve the legitimate governmental objective. After all, such decisions also have serious implications for freedom of expression and access to information (in the US case) and for democratic decision-making (in the Romanian one).

It is understandable for governments and courts to say “enough is enough” after years of impunity by tech platforms. But they should carefully evaluate their responses in accordance with human rights law to avoid making a bad situation worse. In the case of TikTok’s alleged threats to national security, the US government could pass robust legislation compelling algorithmic transparency, not just from TikTok but from all online platforms. After all, major foreign influence operations have implicated several other platforms, including Facebook and X (then Twitter). Key algorithmic disclosures, coupled with greater researcher access to platform data, would expose intended or unintended biases in recommender systems and help inform more targeted interventions. Such a measure would secure much-needed transparency from technology companies writ large without unnecessarily curtailing freedom of expression.
