Finding Inverse Document Frequency Information in BERT

For many decades, BM25 and its variants have been the dominant document retrieval approach, where their two underlying features are Term Frequency (TF) and Inverse Document Frequency (IDF). The traditional approach, however, is being rapidly replaced by Neural Ranking Models (NRMs) that can exploit semantic features. In this work, we consider BERT-based NRMs and study whether IDF information is present in the NRMs. This simple question is interesting because IDF has been indispensable for traditional lexical matching, but global features like IDF are not explicitly learned by neural language models, including BERT. We adopt linear probing as the main analysis tool because typical BERT-based NRMs use linear or inner-product-based score aggregators. We analyze the input embeddings, the representations of all BERT layers, and the self-attention weights of CLS. By studying the MS MARCO dataset with three BERT-based models, we show that all of them contain information that is strongly dependent on IDF.
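
To make the probing setup concrete, below is a minimal sketch of the input-embedding case only, not the paper's actual experiments: it computes corpus IDF values, idf(t) = log(N / df(t)), looks up frozen BERT input embeddings for the observed WordPiece tokens, and fits a linear probe that tries to predict IDF from the embedding. The bert-base-uncased checkpoint, the toy corpus standing in for MS MARCO, the unsmoothed IDF variant, and the ridge-regression probe from scikit-learn are all illustrative assumptions.

# A minimal linear-probing sketch for the input-embedding case (illustrative
# assumptions: bert-base-uncased, a toy corpus in place of MS MARCO, an
# unsmoothed idf(t) = log(N / df(t)), and a ridge-regression probe).
import math
from collections import Counter

import numpy as np
import torch
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from transformers import AutoModel, AutoTokenizer

# Toy document collection standing in for a real corpus such as MS MARCO.
docs = [
    "bert based neural ranking models exploit semantic features",
    "bm25 relies on term frequency and inverse document frequency",
    "linear probing tests what information a representation encodes",
    "lexical matching and semantic matching complement each other in retrieval",
]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Document frequency and IDF per WordPiece token id.
ids_per_doc = [set(tokenizer(d, add_special_tokens=False)["input_ids"]) for d in docs]
df = Counter(t for ids in ids_per_doc for t in ids)
idf = {t: math.log(len(docs) / df[t]) for t in df}

# Frozen BERT input embeddings (the layer-0 token embedding table).
emb_table = model.get_input_embeddings().weight.detach()
tokens = sorted(idf)
X = emb_table[torch.tensor(tokens)].numpy()
y = np.array([idf[t] for t in tokens])

# Linear probe: how well does a linear map of the embedding predict IDF?
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
probe = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("held-out R^2 of the linear probe:", probe.score(X_te, y_te))

A probe of this kind, trained on the full collection and repeated for every BERT layer and for the CLS self-attention weights, is the sort of analysis the abstract describes.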

Bibliographic Details
Main Authors: Choi, Jaekeol; Jung, Euna; Lim, Sungjun; Rhee, Wonjong
Format: Article
Language: English
Online Access: Order full text
DOI: 10.48550/arxiv.2202.12191
Source: arXiv.org
Subjects: Computer Science - Information Retrieval