Harnessing the Power of BERT in the Turkish Clinical Domain: Pretraining Approaches for Limited Data Scenarios
Published in: | arXiv.org, 2023-05 |
---|---|
Main authors: | Türkmen, Hazal; Dikenelli, Oğuz; Eraslan, Cenk; Çallı, Mehmet Cem; Özbek, Süha Süreyya |
Format: | Article |
Language: | eng |
Subjects: | Language; Large language models; Natural language processing; R&D; Research & development; Training |
Online access: | Full text |
creator | Türkmen, Hazal; Dikenelli, Oğuz; Eraslan, Cenk; Çallı, Mehmet Cem; Özbek, Süha Süreyya |
description | In recent years, major advances in natural language processing (NLP) have been driven by the emergence of large language models (LLMs), which have revolutionized research and development within the field. Building on this progress, our study examines the effects of various pre-training methodologies on the performance of Turkish clinical language models in a multi-label classification task involving radiology reports, with a focus on addressing the challenges posed by limited language resources. Additionally, we evaluate, for the first time, a simultaneous pre-training approach that makes use of the limited clinical task data. We developed four models: TurkRadBERT-task v1, TurkRadBERT-task v2, TurkRadBERT-sim v1, and TurkRadBERT-sim v2. Our findings indicate that the general-domain Turkish BERT model (BERTurk) and TurkRadBERT-task v1, both of which draw on knowledge from a substantial general-domain corpus, achieve the best overall performance. Although task-adaptive pre-training can capture domain-specific patterns, it is constrained by the limited task-specific corpus and may be susceptible to overfitting. Furthermore, our results underscore the importance of domain-specific vocabulary during pre-training for enhancing model performance. Ultimately, we observe that combining general-domain knowledge with task-specific fine-tuning is essential for achieving strong performance across a range of categories. This study offers valuable insights for developing effective Turkish clinical language models and can guide future research on pre-training techniques for other low-resource languages in the clinical domain. (A minimal, illustrative code sketch of the multi-label fine-tuning setup follows the record fields below.) |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-05 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2811359288 |
source | Free E-Journals |
subjects | Language; Large language models; Natural language processing; R&D; Research & development; Training |
title | Harnessing the Power of BERT in the Turkish Clinical Domain: Pretraining Approaches for Limited Data Scenarios |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-06T15%3A11%3A45IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Harnessing%20the%20Power%20of%20BERT%20in%20the%20Turkish%20Clinical%20Domain:%20Pretraining%20Approaches%20for%20Limited%20Data%20Scenarios&rft.jtitle=arXiv.org&rft.au=T%C3%BCrkmen,%20Hazal&rft.date=2023-05-05&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2811359288%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2811359288&rft_id=info:pmid/&rfr_iscdi=true |
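The abstract's core downstream task, fine-tuning a general-domain Turkish BERT (BERTurk) for multi-label classification of radiology reports, can be illustrated with a minimal sketch. It assumes the Hugging Face transformers library and the publicly released BERTurk checkpoint (dbmdz/bert-base-turkish-cased); the label set, the example report, and the decision threshold are hypothetical placeholders, not the paper's dataset or its exact training configuration.

```python
# Minimal sketch: multi-label classification of radiology reports with a
# general-domain Turkish BERT (BERTurk). Labels and the example report are
# hypothetical; the paper's actual label set and data are not reproduced here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["hemorrhage", "mass", "infarct", "edema"]  # hypothetical finding categories
MODEL_ID = "dbmdz/bert-base-turkish-cased"           # public BERTurk checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_ID,
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # sigmoid per label, BCE-with-logits loss
)

# A made-up Turkish report sentence used only to exercise the pipeline.
report = "Sağ frontal lobda ödem ile uyumlu hipodens alan izlenmiştir."
inputs = tokenizer(report, truncation=True, max_length=512, return_tensors="pt")

# During fine-tuning, multi-hot float targets are passed via `labels=` so the
# model computes the multi-label loss internally.
targets = torch.tensor([[0.0, 0.0, 0.0, 1.0]])
outputs = model(**inputs, labels=targets)
print("training loss:", outputs.loss.item())

# At inference, each label is scored independently and thresholded.
probs = torch.sigmoid(outputs.logits)[0]
predicted = [label for label, p in zip(LABELS, probs) if p > 0.5]
print("predicted findings:", predicted)
```

The task-adaptive (TurkRadBERT-task) and simultaneous (TurkRadBERT-sim) pre-training variants compared in the abstract would add a masked-language-model pre-training stage on clinical text, and possibly a domain-specific vocabulary, before this fine-tuning step; that stage is not shown in the sketch.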