Trump’s Visa Ban on Content Moderators Aids America’s Adversaries
December 12, 2025
At a moment when it’s easier than ever for hostile foreign governments like Russia, Iran, and China to flood social media platforms with propaganda and disinformation, the Trump administration is choosing to make their jobs easier.
Earlier in December, the State Department instructed its staff to start rejecting H-1B visa applications from individuals who had worked on fact-checking, content moderation, online safety, and other activities deemed by the Trump administration to involve the “censoring” of Americans’ free speech. The internal cable, first reported by Reuters, ordered all consular offices to review resumes and LinkedIn profiles to see whether applicants had worked in these areas and, if they had, to “pursue a finding that this applicant is ineligible.”
But this latest directive won’t promote the free exchange of ideas online. Instead, it will only serve to undercut the critical work of platforms’ Trust & Safety departments, including the teams responsible for detecting and assessing foreign influence campaigns targeting American audiences.
The order is remarkably broad, applying not just to those working against online disinformation but to any type of moderation whatsoever, from compliance to fact-checking to online safety. This work is critical to the smooth functioning of the internet as we know it. Virtually every type of website imaginable, even infamously controversial platforms like 4chan, employs moderators to ensure users can interact reliably without being exposed to spam, fraud, or graphic content.
“Content moderation isn’t censorship,” Alexios Mantzarlis noted in Indicator. “It’s the imperfect and tech-mediated price we pay to co-exist online.”
Concerns about over-moderation and the protection of free speech need to be taken seriously. But the state-backed disinformation campaigns that take advantage of unmoderated spaces aren’t just another legitimate viewpoint in the marketplace of ideas; they’re part of a coordinated effort by hostile governments to destabilize our democracy.
These campaigns include the Russian influence operation known as Doppelganger, which masquerades as legitimate Western news outlets to launder Kremlin-friendly narratives, and Chinese state-linked bot networks that amplify pro-PRC messaging about Taiwan. Increasingly, these actors are also supplemented by individuals who have realized they can monetize disinformation on platforms where moderation is weak.
As 404 Media reported in late November, when a new “account location” feature was introduced on X, it quickly became apparent that huge, viral accounts specializing in divisive, spammy content are often run from countries like Russia, Vietnam, and Bangladesh, motivated mainly by the social media monetization schemes the platforms themselves have set up.
The memo also ignores the basic fact that the internet operates across languages and platforms, a setup that criminal networks exploit by hosting content in one country, scamming victims in another, and laundering the profits in a third. That underscores the importance of taking a global approach to tackling such abuse.
Take, for example, the “pig butchering” scam, in which fraudsters gain a victim’s trust with promises of wealth before encouraging them to invest in fake cryptocurrency assets. As the moderation company TrustLab has noted, the individuals running such scams are often based in criminal compounds in Southeast Asia, where they use global platforms like WhatsApp and Telegram to target users in the West and store their profits in third-party states. Online child sexual exploitation is similarly globalized: of the more than 32 million reports that the National Center for Missing and Exploited Children (NCMEC) received via its CyberTipline in 2022, 93% were linked to countries outside the United States.
If criminals like these operate across national borders, then platforms’ global safety teams need to be able to move with the same flexibility, which the State Department’s memo completely undercuts. The problem is compounded by the fact that language and cultural expertise are vital for catching malign activity early. Facebook learned this in 2018, when the company admitted that a lack of Burmese-language expertise had allowed hateful content targeting the Rohingya Muslim population to proliferate.
Kneecapping content moderation efforts is therefore not a defense of Americans’ right to free speech. Instead, it makes it harder for platforms to protect users from harm, whether that harm is manipulated political discourse, financial fraud, or the general erosion of trust.
It’s important to note that the near-decade-long effort to combat misinformation has been far from a unified success. There have been high-profile failures, such as the Covid lab-leak theory, which went from a conspiracy theory moderated away by platforms to one deemed credible by parts of the U.S. intelligence community. There have also been persistent difficulties in measuring the efficacy of misinformation campaigns and in devising effective countermeasures. This latter problem was noted in a May 2025 report by the Swedish Psychological Defense Agency examining Doppelganger, which found that efforts to counter the operation (via fact-checking, debunking, and denouncement) were all treated as “successes” by the Russian companies running it.
But acknowledging these imperfections does not justify dismantling counter-disinformation teams in their entirety, along with other types of Trust & Safety workers. If anything, making it harder for global experts in digital scams, influence operations, and child-exploitation networks to work in the United States makes it harder for Americans to participate freely online. Left unaddressed, this will create a dynamic in which honest users retreat from platforms and malign actors thrive. The State Department’s memo manufactures a safety problem that leaves America’s tech companies less able to handle the very real problems facing their platforms.