Billions of Parameters Are Worth More Than In-domain Training Data: A case study in the Legal Case Entailment Task
Saved in:
Main Authors: | Rosa, Guilherme Moraes; Bonifacio, Luiz; Jeronymo, Vitor; Abonizio, Hugo; Lotufo, Roberto; Nogueira, Rodrigo |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computation and Language |
Online Access: | Order full text |
creator | Rosa, Guilherme Moraes; Bonifacio, Luiz; Jeronymo, Vitor; Abonizio, Hugo; Lotufo, Roberto; Nogueira, Rodrigo |
description | Recent work has shown that language models scaled to billions of parameters,
such as GPT-3, perform remarkably well in zero-shot and few-shot scenarios. In
this work, we experiment with zero-shot models in the legal case entailment
task of the COLIEE 2022 competition. Our experiments show that scaling the
number of parameters in a language model improves the F1 score of our previous
zero-shot result by more than 6 points, suggesting that stronger zero-shot
capability may be a characteristic of larger models, at least for this task.
Our 3B-parameter zero-shot model outperforms all models, including ensembles,
on the COLIEE 2021 test set and also achieves the best single-model
performance in the COLIEE 2022 competition, second only to an ensemble of
the 3B model itself and a smaller version of the same model. Despite the
challenges posed by large language models, mainly due to latency constraints in
real-time applications, we provide a demonstration of our zero-shot monoT5-3b
model being used in production as a search engine, including for legal
documents. The code for our submission and the demo of our system are available
at https://github.com/neuralmind-ai/coliee and
https://neuralsearchx.neuralmind.ai, respectively. |
doi_str_mv | 10.48550/arxiv.2205.15172 |
format | Article |
creationdate | 2022-05-30 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
identifier | DOI: 10.48550/arxiv.2205.15172 |
language | eng |
recordid | cdi_arxiv_primary_2205_15172 |
source | arXiv.org |
subjects | Computer Science - Computation and Language |
title | Billions of Parameters Are Worth More Than In-domain Training Data: A case study in the Legal Case Entailment Task |
url | https://arxiv.org/abs/2205.15172 |
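The abstract above describes reranking with a zero-shot monoT5 model. As a rough illustration, the sketch below scores query-document pairs the way the monoT5 line of work does: it feeds a "Query: ... Document: ... Relevant:" prompt to a T5 reranker and reads off the probability of the "true" token. This is a minimal sketch, not the authors' exact pipeline; the checkpoint name (`castorini/monot5-base-msmarco`) and the toy legal inputs are assumptions drawn from the broader monoT5 literature, and the authors' actual submission code lives at https://github.com/neuralmind-ai/coliee.

```python
# Minimal sketch of monoT5-style zero-shot relevance scoring (assumed setup,
# not the authors' exact pipeline; see https://github.com/neuralmind-ai/coliee).
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Hypothetical choice: a smaller public monoT5 checkpoint stands in for the
# paper's 3B model, which exposes the same prompt interface.
MODEL_NAME = "castorini/monot5-base-msmarco"

tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME).eval()


def relevance_score(query: str, document: str) -> float:
    """Estimate relevance as P("true") under monoT5's true/false prompt."""
    prompt = f"Query: {query} Document: {document} Relevant:"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    # monoT5 is trained to emit "true" or "false" as its first decoded token,
    # so a single decoder step suffices to read off a relevance probability.
    decoder_input = torch.full(
        (1, 1), model.config.decoder_start_token_id, dtype=torch.long
    )
    with torch.no_grad():
        logits = model(**inputs, decoder_input_ids=decoder_input).logits[0, 0]
    true_id = tokenizer.encode("true")[0]
    false_id = tokenizer.encode("false")[0]
    probs = torch.softmax(logits[[true_id, false_id]], dim=0)
    return probs[0].item()


# Toy usage: rank candidate precedent paragraphs against a base-case fragment.
base_case = "The appellant argues that the duty of care was breached."
candidates = [
    "The court found that a duty of care existed and was breached.",
    "Costs are awarded to the respondent on a party-and-party basis.",
]
ranked = sorted(candidates, key=lambda d: relevance_score(base_case, d), reverse=True)
print(ranked[0])
```

Scoring each candidate independently and sorting by the "true" probability is the standard monoT5 reranking pattern; the paper's 3B variant presumably follows the same interface at a much larger scale, which is the source of the latency concerns the abstract mentions for real-time use.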