“I agree with you, bot!” How users (dis)engage with social bots on Twitter

Bibliographic details
Published in: New Media & Society, 2024-03, Vol. 26(3), pp. 1505-1526
Authors: Wischnewski, Magdalena; Ngo, Thao; Bernemann, Rebecca; Jansen, Martin; Krämer, Nicole
Format: Article
Language: English
Online access: Full text
Description
Abstract: This article investigates the conditions under which users on Twitter engage with or react to social bots. Based on insights from human–computer interaction and motivated reasoning, we hypothesize that (1) users are more likely to engage with human-like social bot accounts and (2) users are more likely to engage with social bots that promote content congruent with the user's partisanship. In a preregistered 3 × 2 within-subject experiment, we asked N = 223 US Americans to indicate whether they would engage with or react to different Twitter accounts. The accounts systematically varied in displayed humanness (low, medium, and high) and partisanship (congruent and incongruent). In line with our hypotheses, we found that the more human-like an account appeared, the more likely users were to engage with or react to it. However, this held only for accounts that shared the user's partisanship.
ISSN: 1461-4448; 1461-7315
DOI: 10.1177/14614448211072307