Extremist groups are seizing on high-profile acts of political violence, using social media platforms to recruit supporters, justify their ideologies and incite further attacks, according to a new report from New York University’s Stern Center for Business and Human Rights.
Researchers monitored social media activity for several months this year, initially from 24 March to 6 June, with additional collection after the assassination of Charlie Kirk, to track how different extremist currents reacted and adapted after violent incidents. The review covered actors across the political spectrum, including far-right, far-left, violent Islamist and so-called nihilistic violent extremists.
The study finds that these groups systematically exploit “trigger events” (visible incidents of violence) to amplify their messages, attract new followers and create a feedback loop that normalizes or glorifies further violence. Material tied to attacks, such as memes, manifestos and symbolic references like markings on bullet casings, is picked up and repurposed into propaganda that spreads quickly online.
Researchers say the threat landscape is becoming more volatile and harder to categorize. NYU senior research scientist Luke Barnes warned of bespoke, narrowly tailored ideologies that don’t fit traditional left–right labels and that increasingly emphasize performative shock value. The FBI’s label “nihilistic violent extremists” reflects one trend: attackers motivated less by coherent political programs than by a desire for spectacle and notoriety.
These nihilistic actors are particularly difficult to monitor because they often communicate on semi-private platforms and draw on meme culture and in-group references to celebrate or imitate past attackers. The report cites cases in which shooters and other attackers referenced or glorified earlier violent acts, producing content that extremist communities then circulated as broader glorification of violence.
Extremist networks also use a two-stage online strategy: they post toned-down, mainstream-facing content on public platforms (for example, X) to reach wider audiences, then funnel interested users to semi-private or private channels where messaging becomes more extreme and coordination can be discussed. Mariana Olaizola Rosenblat, a co-author and policy adviser on technology and law at NYU Stern, explains that groups often adjust their tone to appear acceptable in mainstream spaces while including links out to darker corners of the web.
The report places these online dynamics in the context of rising political violence in the United States. Data from the University of Maryland’s National Consortium for the Study of Terrorism and Responses to Terrorism (START) show more than 520 plots and acts of terrorism and targeted violence in the first half of 2025, resulting in 96 deaths and 329 injuries, nearly a 40% increase over the same period in 2024. Recent high-profile attacks have targeted a variety of sites and communities, and competing political narratives have emerged that downplay some threats or assign blame to particular sides of the political spectrum.
To curb online recruitment and escalation, the NYU team recommends clearer platform policies on threats and incitement, faster enforcement and better user reporting mechanisms, and legislative standards defining how platforms and law enforcement should cooperate within legal limits. The authors also note that the rise of nihilistic, non-ideological violence presents an unusual opportunity for bipartisan agreement on interventions.
Overall, the report warns that online ecosystems are accelerating the spread and normalization of politically motivated violence, and that stronger, coordinated responses from platforms, policymakers and civil society are needed to interrupt the cycle.