Guns, Incels, and Algorithms: Where We Are on Managing Terrorist and Violent Extremist Content Online
Format: Article
Language: English
Online access: Order full text
Abstract: Ten years ago, U.S. national security agencies grew concerned about a relatively new and powerful weapon used by terrorists: the World Wide Web. What had begun as an effort to connect end users across the world to share information, and to serve as a force of human liberation, instead began to be used as a tool for the destruction of life. Terrorists were exploiting technology companies’ lax content moderation policies to recruit new members, spread violent extremist ideology, and plan terrorist attacks. In 2012, Twitter’s General Manager declared the firm “the free speech wing of the Free Speech Party,” and large U.S. technology companies were broadly reluctant to change their content moderation policies in the early days of their development.
By 2015, a gargantuan effort to eliminate ISIS had commenced, driven mostly by the U.S. government and culminating in U.S. Cyber Command’s Operation GLOWING SYMPHONY, led by General Paul Nakasone, which reportedly dismantled much of ISIS’ online presence and networks in 2016. Technology companies became much stricter about terrorist content online, but the problem of identifying and removing such content persisted.
Today, the online terrorism landscape looks very different from a decade ago. White supremacist and “incel” (involuntary celibate) violent extremist content litters the Web. Terrorist attacks are frequently committed by hate-fuelled lone-wolf “internet warriors” inspired by non-Islamic terrorist and violent extremist content and other radicalizing material online. Yet technology companies and governments have not managed to keep pace with this dynamic threat.
This is not to say that they have not tried. In 2019, a terrorist attack committed, and live-streamed to viral effect, by an “online warrior” white supremacist at two mosques in Christchurch, New Zealand, galvanized technology companies and governments to combat terrorist content beyond Islamic terrorism alone. The result was an ambitious multilateral initiative, The Christchurch Call to Eliminate Terrorist and Violent Extremist Content Online, an unprecedented diplomatic achievement and a step forward in managing the problem.
Technology companies and governments have spent the past decade trying to better address the evolving threat of terrorist and violent extremist content (TVEC) online. However, there are few studies examining just how effective these efforts have been, where we are today in managing the problem, and wherein lie …