Towards Stable Test-Time Adaptation in Dynamic Wild World
Saved in:
Published in: | arXiv.org 2023-02 |
---|---|
Main authors: | Niu, Shuaicheng; Wu, Jiaxiang; Zhang, Yifan; Wen, Zhiquan; Chen, Yaofo; Zhao, Peilin; Tan, Mingkui |
Format: | Article |
Language: | eng |
Subjects: | Adaptation; Model testing; Model updating; Norms; Testing time |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Niu, Shuaicheng; Wu, Jiaxiang; Zhang, Yifan; Wen, Zhiquan; Chen, Yaofo; Zhao, Peilin; Tan, Mingkui |
description | Test-time adaptation (TTA) has been shown to be effective at tackling distribution shifts between training and testing data by adapting a given model on test samples. However, the online model updating of TTA can be unstable, and this is often a key obstacle preventing existing TTA methods from being deployed in the real world. Specifically, TTA may fail to improve or even harm model performance when test data have: 1) mixed distribution shifts, 2) small batch sizes, and 3) online imbalanced label distribution shifts, which are quite common in practice. In this paper, we investigate the reasons for this instability and find that the batch norm layer is a crucial factor hindering TTA stability. Conversely, TTA can perform more stably with batch-agnostic norm layers, i.e., group or layer norm. However, we observe that TTA with group and layer norms does not always succeed and still suffers from many failure cases. By digging into these failure cases, we find that certain noisy test samples with large gradients may disturb the model adaptation and result in collapsed trivial solutions, i.e., assigning the same class label to all samples. To address this collapse issue, we propose a sharpness-aware and reliable entropy minimization method, called SAR, which further stabilizes TTA from two aspects: 1) removing part of the noisy samples with large gradients, and 2) encouraging the model weights to reach a flat minimum so that the model is robust to the remaining noisy samples. Promising results demonstrate that SAR performs more stably than prior methods and is computationally efficient under the above wild test scenarios. (A minimal code sketch of this procedure follows the record fields below.) |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-02 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2780248585 |
source | Free E-Journals |
subjects | Adaptation; Model testing; Model updating; Norms; Testing time |
title | Towards Stable Test-Time Adaptation in Dynamic Wild World |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-16T00%3A06%3A16IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Towards%20Stable%20Test-Time%20Adaptation%20in%20Dynamic%20Wild%20World&rft.jtitle=arXiv.org&rft.au=Niu,%20Shuaicheng&rft.date=2023-02-24&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2780248585%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2780248585&rft_id=info:pmid/&rfr_iscdi=true |
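
The abstract describes SAR as combining two ingredients: filtering out unreliable (high-gradient, high-entropy) test samples before entropy minimization, and a sharpness-aware update that drives the adapted weights toward a flat minimum. The sketch below illustrates these two ideas for a generic PyTorch classifier; the entropy margin, the SAM radius `rho`, and the decision to adapt all model parameters are assumptions made for illustration, not necessarily the paper's exact settings.

```python
# Illustrative sketch of sharpness-aware, reliable entropy minimization
# for test-time adaptation. Assumptions (not taken from the paper): the
# entropy margin of 0.4 * ln(num_classes), the SAM radius rho, and
# adapting all parameters rather than only norm-layer affine parameters.
import math
import torch


def entropy(logits: torch.Tensor) -> torch.Tensor:
    """Per-sample prediction entropy for a batch of logits."""
    log_probs = logits.log_softmax(dim=1)
    return -(log_probs.exp() * log_probs).sum(dim=1)


def sar_style_step(model, x, optimizer, rho=0.05):
    """One adaptation step on an unlabeled test batch x."""
    # (1) Reliable entropy minimization: drop high-entropy ("noisy") samples.
    with torch.no_grad():
        logits = model(x)
        e_margin = 0.4 * math.log(logits.shape[1])  # assumed margin
        keep = entropy(logits) < e_margin
    if keep.sum() == 0:
        return  # no reliable samples in this batch; skip the update

    # (2a) SAM ascent step: compute gradients of the entropy loss and
    #      perturb the weights in the direction of steepest ascent.
    optimizer.zero_grad()
    entropy(model(x[keep])).mean().backward()
    with torch.no_grad():
        grads = [p.grad for p in model.parameters() if p.grad is not None]
        grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]))
        eps = {}
        for p in model.parameters():
            if p.grad is None:
                continue
            e = p.grad * (rho / (grad_norm + 1e-12))
            p.add_(e)          # move to the sharpness-probing point
            eps[p] = e

    # (2b) Descent step: gradients are taken at the perturbed weights,
    #      the perturbation is removed, and only then is the optimizer
    #      update applied, which biases adaptation toward flat minima.
    optimizer.zero_grad()
    entropy(model(x[keep])).mean().backward()
    with torch.no_grad():
        for p, e in eps.items():
            p.sub_(e)          # restore the original weights
    optimizer.step()
    optimizer.zero_grad()
```

In the wild test scenarios the abstract emphasizes (small batches, online imbalanced label shifts), such a step would typically update only a small parameter set, for example the affine parameters of group/layer norm layers, with a plain SGD optimizer; those choices are left to the caller in this sketch.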