SciTweets -- A Dataset and Annotation Framework for Detecting Scientific Online Discourse

Bibliographic Details
Main Authors: Hafid, Salim; Schellhammer, Sebastian; Bringay, Sandra; Todorov, Konstantin; Dietze, Stefan
Format: Article
Language: English
creator Hafid, Salim
Schellhammer, Sebastian
Bringay, Sandra
Todorov, Konstantin
Dietze, Stefan
description Scientific topics, claims and resources are increasingly debated as part of online discourse, where prominent examples include discourse related to COVID-19 or climate change. This has led to both significant societal impact and increased interest in scientific online discourse from various disciplines. For instance, communication studies aim at a deeper understanding of biases, quality or spreading pattern of scientific information whereas computational methods have been proposed to extract, classify or verify scientific claims using NLP and IR techniques. However, research across disciplines currently suffers from both a lack of robust definitions of the various forms of science-relatedness as well as appropriate ground truth data for distinguishing them. In this work, we contribute (a) an annotation framework and corresponding definitions for different forms of scientific relatedness of online discourse in Tweets, (b) an expert-annotated dataset of 1261 tweets obtained through our labeling framework reaching an average Fleiss Kappa $\kappa$ of 0.63, (c) a multi-label classifier trained on our data able to detect science-relatedness with 89% F1 and also able to detect distinct forms of scientific knowledge (claims, references). With this work we aim to lay the foundation for developing and evaluating robust methods for analysing science as part of large-scale online discourse.
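
The abstract above reports a multi-label classifier that detects science-relatedness with 89% F1 and also recognizes distinct forms of scientific knowledge such as claims and references. As a rough illustration of what such a multi-label setup can look like, here is a minimal TF-IDF plus one-vs-rest sketch; this is not the authors' actual model, and the label names and example tweets are hypothetical stand-ins for illustration only.

```python
# Minimal multi-label baseline sketch (NOT the SciTweets authors' classifier).
# Label names and example tweets are hypothetical stand-ins for illustration.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

# Toy stand-in for an expert-annotated corpus: each tweet carries three binary
# labels, e.g. science-related / states a scientific claim / references
# scientific knowledge.
tweets = [
    "New study finds vaccine X cuts transmission by 40% https://doi.org/...",
    "Had a great coffee this morning!",
    "Researchers say climate models underestimate Arctic ice loss",
]
labels = np.array([
    [1, 1, 1],  # science-related, claim, reference
    [0, 0, 0],  # none of the above
    [1, 1, 0],  # science-related claim without an explicit reference
])

# One independent binary classifier per label over word/bigram TF-IDF features,
# so a single tweet can receive several labels at once.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(tweets, labels)

print(clf.predict(["Peer-reviewed paper links coffee to lower heart disease risk"]))
```

The one-vs-rest wrapper mirrors the multi-label framing in the abstract: a tweet can simultaneously be science-related, contain a claim, and reference scientific knowledge, so each label is predicted independently rather than as mutually exclusive classes.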
doi_str_mv 10.48550/arxiv.2206.07360
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2206.07360
language eng
recordid cdi_arxiv_primary_2206_07360
source arXiv.org
subjects Computer Science - Computation and Language
Computer Science - Computers and Society
Computer Science - Social and Information Networks
title SciTweets -- A Dataset and Annotation Framework for Detecting Scientific Online Discourse