Fake views removal and popularity on YouTube

Bibliographic details
Published in: Scientific Reports, 2024-07, Vol. 14 (1), p. 15443-13, Article 15443
Authors: Castaldo, Maria; Frasca, Paolo; Venturini, Tommaso; Gargiulo, Floriana
Format: Article
Language: English
Online access: Full text
Description
Abstract: This paper analyses how YouTube authenticates engagement metrics and, more specifically, how the platform corrects view counts by removing “fake views” (i.e., views considered artificial or illegitimate by the platform). Working with one and a half years of data extracted from a thousand French YouTube channels, we show the massive extent of the corrections done by YouTube, which concern the large majority of the channels and over 78% of the videos in our corpus. Our analysis shows that corrections are not done continuously as videos collect new views, but instead occur in batches, generally around 5 p.m. every day. More significantly, most corrections occur relatively late in the life of the videos, after they have reached most of their audience, and the delay in correction is not independent of the final popularity of the videos: videos corrected later in their life are more popular on average than those corrected earlier. We discuss the probable causes of this phenomenon and its possible negative consequences for content diffusion. By inflating view counts, fake views could make videos appear more popular than they are and unduly encourage their recommendation, thus potentially altering the public debate on the platform. This could have implications for the spread of online misinformation, but exploring them in depth requires first-hand information on view corrections, which YouTube does not provide through its API. This paper presents a series of experimental techniques to work around this limitation, offering a practical contribution to the study of online attention cycles (as described in the “Data and methods” section). At the same time, this paper is also a call for greater transparency from YouTube and other online platforms about information with crucial implications for the quality of online debate.
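
The measurement idea described in the abstract — inferring corrections from decreases in the publicly visible view count, since the API only ever exposes the corrected total — can be illustrated with a small polling script. The sketch below is not the authors' pipeline; it is a minimal illustration assuming the public YouTube Data API v3 videos endpoint, with API_KEY, VIDEO_IDS, and POLL_SECONDS as hypothetical placeholders.

```python
"""Minimal sketch: detect YouTube view-count corrections by polling the
Data API v3 and flagging decreases between consecutive snapshots."""
import time
from datetime import datetime

import requests

API_KEY = "YOUR_API_KEY"      # hypothetical placeholder
VIDEO_IDS = ["dQw4w9WgXcQ"]   # videos to monitor (placeholder ID)
POLL_SECONDS = 3600           # one snapshot per hour

def fetch_view_count(video_id: str) -> int:
    """Return the current public view count for one video."""
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/videos",
        params={"part": "statistics", "id": video_id, "key": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    item = resp.json()["items"][0]
    return int(item["statistics"]["viewCount"])

def monitor(video_ids):
    """Poll each video and report drops, the observable trace of a correction."""
    last = {}
    while True:
        stamp = datetime.now().isoformat(timespec="seconds")
        for vid in video_ids:
            views = fetch_view_count(vid)
            if vid in last and views < last[vid]:
                # The API reports only the corrected total, so a decrease
                # between snapshots signals a batch removal of fake views.
                print(f"{stamp} {vid}: correction of {last[vid] - views} views")
            last[vid] = views
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    monitor(VIDEO_IDS)
```

Hourly snapshots would be frequent enough to localize the daily correction batches the paper reports around 5 p.m., while staying within typical API quota limits.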
ISSN: 2045-2322
DOI: 10.1038/s41598-024-63649-w