When Online Hate Turns Lethal

May 27, 2025
On the evening of May 21, a man from Chicago approached a young couple outside the Capital Jewish Museum in Washington, D.C., and fatally shot them. The victims — both junior staffers at the Israeli Embassy — were about to be engaged. Moments after the attack, the perpetrator was captured on video shouting “Free, free Palestine.” A manifesto he later posted on the platform X, titled “Escalate for Gaza, Bring the War Home,” offered chilling insight into his motives. Citing alleged atrocities by the Israeli military, he framed his actions as a form of justified “armed action.”
This was not a random or isolated act of violence. It was the latest horrifying consequence of a radicalization pipeline that begins online — one that increasingly normalizes and promotes political violence against US persons and institutions.
For the past several months, I have been investigating the links between digital extremism and real-world violence. Working with open-source intelligence analysts at Tech Against Terrorism, an independent online counter-terrorism organization, I have reviewed mounting evidence of threats, intimidation, and incitement circulating across the ideological spectrum. The findings point to a disturbing trend: extremist rhetoric online is growing not just more common, but more aggressive, more direct, and more likely to inspire action.
The discourse most relevant to this attack stems from the far left, where anti-state and anti-authority sentiments have increasingly merged with broader pro-Palestinian narratives. Operating in plain sight on mainstream platforms such as X, Telegram, and Instagram, far-left actors have escalated their rhetoric: calling for vandalism targeting law enforcement, corporations, and universities; doxxing individuals; and issuing threats of further violence against those perceived as supporting Israel.
Just as extremist rhetoric can drive offline violence, incidents of real-world violence often provoke surges of online radicalization. Following the D.C. murders, Tech Against Terrorism analysts observed a marked spike in violent rhetoric directed at Jewish communities. On far-left channels, analysts recorded high engagement with content praising the attacker, hailing him as a political prisoner and ideological martyr, and encouraging others to adopt similarly violent tactics. For example, a post on X by an account called Bronx Anti-War stated: “We need more Elias Rodriguez in this world”; it received 118,000 views, 189 likes, 369 comments, and 117 reshares. A post to Telegram by the Unity of Fields channel, meanwhile, quoted an open letter from the Panther 21 stating: “We desperately need more revolutionists who are completely willing and ready at all times to KILL to change conditions.”
The attack also drew toxic responses from far-right extremists. Some celebrated the murders and echoed antisemitic tropes, further demonstrating that violent ideologies, despite their differing justifications, often converge in their outcomes and targets. A post on the imageboard 4chan asked what jail the perpetrator was being detained in “so we can free him,” and a response stated: “Let’s just be happy kikes died. Today was a good day after all.” On Telegram, a post made to the far-right Memewaffen channel lamented that antifa had been more successful than “American Nazis” in damaging “the image of Jews.” The post received 1,985 views, 80 reactions, and 30 comments.
What comes next? According to Tech Against Terrorism, “the volume and intensity of online confrontations following the incident has resulted in elevated online discourse, which increases the likelihood of offline violence.” In the coming months, I will continue analyzing these trends, tracking early warning signs, escalation dynamics, and patterns of platform migration. These findings will inform a forthcoming report from our Center that will not only map the evolving threat landscape but also offer concrete recommendations for tech platforms and policymakers to intervene more effectively and, ultimately, help prevent further loss of life.
Tech Against Terrorism contributed research support to this article, including analysis of online extremist discourse and open-source intelligence. The views expressed are those of the author and do not necessarily reflect the position or endorsement of Tech Against Terrorism. TAT’s role is limited to evidence-based monitoring of terrorist use of the internet and does not imply attribution of criminal intent or collective responsibility to any individual, group, or platform.