PReLCaP : Precedence Retrieval from Legal Documents Using Catch Phrases


Bibliographic Details

Published in: Neural Processing Letters, 2022-10, Vol. 54 (5), pp. 3873-3891
Authors: Sampath, Kayalvizhi; Durairaj, Thenmozhi
Format: Article
Language: English
Online access: Full text
Abstract

Precedence retrieval is the task of retrieving prior case documents that are similar to a given current case document in the legal domain. Referencing prior cases is important to ensure that identical situations are treated consistently across cases. A concise representation of case documents using catch phrases lets practitioners find prior cases without reading entire documents. Existing approaches to precedent retrieval in the legal domain use either statistical or semantic similarity features to find prior cases. However, substruction similarity features, which take the context of a statement into account, help to identify prior cases more accurately. Furthermore, existing approaches consider the whole document when extracting similarity features, which is time-consuming. In this paper, we propose a combination of statistical, semantic, and substruction similarity features extracted from the catch phrases of legal documents. The catch phrases are extracted with a sequence-to-sequence deep neural network that uses a stacked encoder-decoder with Long Short-Term Memory (LSTM) as the recurrent unit. The substruction similarity features are obtained using a convolutional neural network. The IRLeD@FIRE-2017 dataset is used to evaluate our approach. The experimental results show that working with catch phrases reduces retrieval time without reducing retrieval performance. A k-paired t-test also shows that the improvement obtained by using substruction similarity features extracted from the catch phrases is statistically significant compared with other models. PReLCaP outperforms state-of-the-art approaches with a MAP score of 0.632 on the test data.
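To make the description more concrete, below is a minimal sketch of the kind of stacked LSTM encoder-decoder that could generate catch phrases from a case document, in the spirit of the sequence-to-sequence extractor the abstract describes. It assumes TensorFlow/Keras; the vocabulary size, embedding dimension, and layer widths are illustrative placeholders, not the authors' reported configuration.

```python
# Hypothetical stacked LSTM encoder-decoder for catch-phrase generation
# (trained with teacher forcing). All sizes below are placeholder values.
from tensorflow.keras.layers import Dense, Embedding, Input, LSTM
from tensorflow.keras.models import Model

vocab_size = 20000   # assumed shared vocabulary for documents and catch phrases
embed_dim = 128
latent_dim = 256

# Encoder: two stacked LSTMs read the case-document token sequence.
enc_inputs = Input(shape=(None,), name="case_tokens")
enc_emb = Embedding(vocab_size, embed_dim, mask_zero=True)(enc_inputs)
enc_hidden = LSTM(latent_dim, return_sequences=True)(enc_emb)
_, state_h, state_c = LSTM(latent_dim, return_state=True)(enc_hidden)

# Decoder: generates the catch-phrase tokens, seeded with the encoder's final states.
dec_inputs = Input(shape=(None,), name="catchphrase_tokens")
dec_emb = Embedding(vocab_size, embed_dim, mask_zero=True)(dec_inputs)
dec_seq = LSTM(latent_dim, return_sequences=True)(
    dec_emb, initial_state=[state_h, state_c])
outputs = Dense(vocab_size, activation="softmax")(dec_seq)

model = Model([enc_inputs, dec_inputs], outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Once catch phrases are available, the retrieval step scores prior cases against the current case. The sketch below illustrates only the statistical (TF-IDF cosine) and semantic (dense-embedding cosine) components mentioned in the abstract; the CNN-based substruction feature is omitted, and the `embed` function and the equal weights are assumptions rather than the paper's exact feature set.

```python
# Illustrative ranking of prior cases by a weighted mix of a statistical and a
# semantic similarity score, both computed on catch phrases only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def rank_prior_cases(query_phrases, prior_phrases, embed, w_stat=0.5, w_sem=0.5):
    """query_phrases: str; prior_phrases: list[str]; embed: callable str -> 1-D np.ndarray."""
    # Statistical similarity: cosine over TF-IDF vectors of the catch phrases.
    tfidf = TfidfVectorizer().fit_transform([query_phrases] + prior_phrases)
    stat_sim = cosine_similarity(tfidf[0], tfidf[1:]).ravel()

    # Semantic similarity: cosine over dense embeddings of the catch phrases.
    q_vec = embed(query_phrases).reshape(1, -1)
    p_vecs = np.vstack([embed(p) for p in prior_phrases])
    sem_sim = cosine_similarity(q_vec, p_vecs).ravel()

    scores = w_stat * stat_sim + w_sem * sem_sim
    return np.argsort(-scores)  # indices of prior cases, most similar first
```

In the paper's setting, a third score derived from the substruction (convolutional) features would be combined with these two before ranking, and Mean Average Precision over the ranked lists is the metric behind the reported 0.632.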
DOI: 10.1007/s11063-022-10791-z
ISSN: 1370-4621
EISSN: 1573-773X
Subjects:
Artificial Intelligence
Artificial neural networks
Complex Systems
Computational Intelligence
Computer Science
Datasets
Documents
Encoders-Decoders
Information retrieval
Legal documents
Legal research
Neural networks
Retrieval
Retrieval performance measures
Semantics
Similarity
Vector space