Bounded-memory adjusted scores estimation in generalized linear models with large data sets

The widespread use of maximum Jeffreys’-prior penalized likelihood in binomial-response generalized linear models, and in logistic regression in particular, is supported by the results of Kosmidis and Firth (Biometrika 108:71–82, 2021. https://doi.org/10.1093/biomet/asaa052 ), who show that the re...

Detailed description

Saved in:
Bibliographic details
Published in: Statistics and computing 2024-08, Vol.34 (4), Article 138
Main authors: Zietkiewicz, Patrick; Kosmidis, Ioannis
Format: Article
Language: English
Subjects:
Online access: Full text
container_issue 4
container_title Statistics and computing
container_volume 34
creator Zietkiewicz, Patrick
Kosmidis, Ioannis
description The widespread use of maximum Jeffreys’-prior penalized likelihood in binomial-response generalized linear models, and in logistic regression in particular, is supported by the results of Kosmidis and Firth (Biometrika 108:71–82, 2021. https://doi.org/10.1093/biomet/asaa052 ), who show that the resulting estimates are always finite-valued, even in cases where the maximum likelihood estimates are not, which is a practical issue regardless of the size of the data set. In logistic regression, the implied adjusted score equations are formally bias-reducing in asymptotic frameworks with a fixed number of parameters, and appear to deliver a substantial reduction in the persistent bias of the maximum likelihood estimator in high-dimensional settings where the number of parameters grows asymptotically as a proportion of the number of observations. In this work, we develop and present two new variants of iteratively reweighted least squares (IWLS) for estimating generalized linear models with adjusted score equations, for mean bias reduction and for maximization of the likelihood penalized by a positive power of the Jeffreys-prior penalty, which eliminate the requirement of storing O(n) quantities in memory and can operate with data sets that exceed computer memory or even hard-drive capacity. We achieve this through incremental QR decompositions, which give the IWLS iterations access only to data chunks of predetermined size. Both procedures can also be readily adapted to fit generalized linear models when distinct parts of the data are stored across different sites and, due to privacy concerns, cannot be fully transferred across sites. We assess the procedures through a real-data application with millions of observations.
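The bounded-memory idea the abstract describes can be sketched in a few lines: instead of holding the full n-row weighted design, keep only the small triangular factor of the weighted least-squares problem and fold in one data chunk at a time via repeated QR decompositions, re-reading the data once per IWLS pass. The sketch below is a minimal illustration for logistic regression with a mean-bias-reducing (Jeffreys-prior-type) score adjustment; the function name and chunk interface are hypothetical, the hat values are approximated from the previous pass's factor, and the paper's actual algorithms differ in detail.

```python
import numpy as np

def chunked_mbr_logistic(chunks, p, n_iter=20):
    """Bounded-memory IWLS sketch for logistic regression with a
    mean-bias-reducing adjusted score.  Only O(p^2) numbers are kept
    between chunks, via an incrementally updated QR factor.

    `chunks` is a callable returning an iterable of (X, y) blocks,
    so the data can be re-streamed on every IWLS pass; `p` is the
    number of columns of X.  Assumes n >> p.  Illustrative only.
    """
    beta = np.zeros(p)
    R_prev = None                      # R factor from the previous pass
    for _ in range(n_iter):
        M = np.zeros((0, p + 1))       # rolling [R | q] block
        for X, y in chunks():
            eta = X @ beta
            mu = 1.0 / (1.0 + np.exp(-eta))
            w = mu * (1.0 - mu)        # IWLS weights for the logit link
            # Hat values from the previous pass's R (an approximation;
            # zero on the first pass): h_i = ||R^{-T} sqrt(w_i) x_i||^2
            if R_prev is not None:
                T = np.linalg.solve(R_prev.T, (X * np.sqrt(w)[:, None]).T)
                h = np.einsum('ij,ij->j', T, T)
            else:
                h = np.zeros(len(y))
            # Adjusted working response: h*(1/2 - mu) is the
            # mean-bias-reducing score adjustment for logistic regression
            z = eta + (y - mu + h * (0.5 - mu)) / np.maximum(w, 1e-10)
            sw = np.sqrt(w)[:, None]
            block = np.hstack([X * sw, (np.sqrt(w) * z)[:, None]])
            # Incremental QR: stacking the old R on top of the new chunk
            # and re-triangularizing preserves the normal equations
            M = np.linalg.qr(np.vstack([M, block]), mode='r')
        R_prev = M[:p, :p]
        beta = np.linalg.solve(R_prev, M[:p, p])
    return beta
```

Because the retained R factor satisfies RᵀR = XᵀWX exactly, the estimate is invariant (up to floating-point noise) to how the data are split into chunks, which is what makes the same scheme usable when data are spread across sites.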
doi_str_mv 10.1007/s11222-024-10447-z
format Article
fulltext fulltext
identifier ISSN: 0960-3174
ispartof Statistics and computing, 2024-08, Vol.34 (4), Article 138
issn 0960-3174
1573-1375
language eng
recordid cdi_proquest_journals_3071075271
source Springer Nature - Complete Springer Journals
subjects Artificial Intelligence
Asymptotic properties
Bias
Computer memory
Computer Science
Datasets
Estimation
Generalized linear models
Maximum likelihood estimates
Maximum likelihood estimators
Original Paper
Parameters
Probability and Statistics in Computer Science
Regression analysis
Statistical models
Statistical Theory and Methods
Statistics and Computing/Statistics Programs
title Bounded-memory adjusted scores estimation in generalized linear models with large data sets
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-26T23%3A01%3A25IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Bounded-memory%20adjusted%20scores%20estimation%20in%20generalized%20linear%20models%20with%20large%20data%20sets&rft.jtitle=Statistics%20and%20computing&rft.au=Zietkiewicz,%20Patrick&rft.date=2024-08-01&rft.volume=34&rft.issue=4&rft.artnum=138&rft.issn=0960-3174&rft.eissn=1573-1375&rft_id=info:doi/10.1007/s11222-024-10447-z&rft_dat=%3Cproquest_cross%3E3071075271%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3071075271&rft_id=info:pmid/&rfr_iscdi=true