Good Data, Large Data, or No Data? Comparing Three Approaches in Developing Research Aspect Classifiers for Biomedical Papers

Bibliographic details
Main authors: Chandrasekhar, Shreya; Huang, Chieh-Yang; Huang, Ting-Hao 'Kenneth'
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Chandrasekhar, Shreya
Huang, Chieh-Yang
Huang, Ting-Hao 'Kenneth'
description The rapid growth of scientific publications, particularly during the COVID-19 pandemic, emphasizes the need for tools to help researchers efficiently comprehend the latest advancements. One essential part of understanding scientific literature is research aspect classification, which categorizes sentences in abstracts into Background, Purpose, Method, and Finding. In this study, we investigate the impact of different datasets on model performance for the crowd-annotated CODA-19 research aspect classification task. Specifically, we explore the potential benefits of using the large, automatically curated PubMed 200K RCT dataset and evaluate the effectiveness of large language models (LLMs), such as LLaMA, GPT-3, ChatGPT, and GPT-4. Our results indicate that using the PubMed 200K RCT dataset does not improve performance on the CODA-19 task. We also observe that while GPT-4 performs well, it does not outperform the SciBERT model fine-tuned on the CODA-19 dataset, emphasizing the importance of a dedicated, task-aligned dataset for the target task. Our code is available at https://github.com/Crowd-AI-Lab/CODA-19-exp.
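The four-aspect labeling scheme described in the abstract can be illustrated with a toy rule-based classifier. This sketch is purely illustrative: the cue phrases and the default-to-Background fallback are assumptions for demonstration, not the paper's method, which fine-tunes SciBERT and evaluates LLMs on CODA-19.

```python
# Toy illustration of the CODA-19 research aspect labels:
# Background, Purpose, Method, and Finding.
# The cue phrases below are hypothetical examples, not taken from the paper.
ASPECT_CUES = {
    "Purpose": ("we aim", "we investigate", "in this study", "this paper"),
    "Method": ("we use", "we fine-tune", "we train", "we evaluate"),
    "Finding": ("our results", "we observe", "we find", "we show"),
}

def classify_aspect(sentence: str) -> str:
    """Assign one of the four research aspects to an abstract sentence."""
    lowered = sentence.lower()
    for aspect, cues in ASPECT_CUES.items():
        if any(cue in lowered for cue in cues):
            return aspect
    # Contextual statements with no cue default to Background.
    return "Background"

print(classify_aspect("In this study, we investigate the impact of datasets."))
# → Purpose
```

In the actual study, this mapping is learned by a fine-tuned SciBERT model rather than hand-written rules; the sketch only shows the input/output shape of the task (one abstract sentence in, one of four aspect labels out).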
doi_str_mv 10.48550/arxiv.2306.04820
format Article
creationdate 2023-06-07
rights http://creativecommons.org/licenses/by/4.0
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2306.04820
language eng
recordid cdi_arxiv_primary_2306_04820
source arXiv.org
subjects Computer Science - Computation and Language
title Good Data, Large Data, or No Data? Comparing Three Approaches in Developing Research Aspect Classifiers for Biomedical Papers