Producing Plankton Classifiers that are Robust to Dataset Shift

Modern plankton high-throughput monitoring relies on deep learning classifiers for species recognition in water ecosystems. Despite satisfactory nominal performances, a significant challenge arises from Dataset Shift, which causes performances to drop during deployment. In our study, we integrate the ZooLake dataset with manually-annotated images from 10 independent days of deployment, serving as test cells to benchmark Out-Of-Dataset (OOD) performances. Our analysis reveals instances where classifiers, initially performing well in In-Dataset conditions, encounter notable failures in practical scenarios. For example, a MobileNet with a 92% nominal test accuracy shows a 77% OOD accuracy. We systematically investigate conditions leading to OOD performance drops and propose a preemptive assessment method to identify potential pitfalls when classifying new data, and pinpoint features in OOD images that adversely impact classification. We present a three-step pipeline: (i) identifying OOD degradation compared to nominal test performance, (ii) conducting a diagnostic analysis of degradation causes, and (iii) providing solutions. We find that ensembles of BEiT vision transformers, with targeted augmentations addressing OOD robustness, geometric ensembling, and rotation-based test-time augmentation, constitute the most robust model, which we call BEsT model. It achieves an 83% OOD accuracy, with errors concentrated on container classes. Moreover, it exhibits lower sensitivity to dataset shift, and reproduces well the plankton abundances. Our proposed pipeline is applicable to generic plankton classifiers, contingent on the availability of suitable test cells. By identifying critical shortcomings and offering practical procedures to fortify models against dataset shift, our study contributes to the development of more reliable plankton classification technologies.
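The dataset-shift effect described in the abstract — a drop from nominal test accuracy to out-of-dataset (OOD) accuracy on deployment-day data — can be quantified as a simple accuracy gap. The sketch below is purely illustrative (it is not the authors' code, and the label lists are placeholder inputs):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions matching the true labels."""
    assert len(y_true) == len(y_pred) and len(y_true) > 0
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def ood_gap(nominal_true, nominal_pred, ood_true, ood_pred):
    """Return (nominal_acc, ood_acc, gap).

    A large positive gap signals dataset shift, e.g. a classifier
    with 0.92 nominal but only 0.77 OOD accuracy has a 0.15 gap.
    """
    nom = accuracy(nominal_true, nominal_pred)
    ood = accuracy(ood_true, ood_pred)
    return nom, ood, nom - ood
```

With per-day test cells, computing this gap separately for each deployment day separates days where the classifier generalizes from days with severe shift.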

Detailed description

Saved in:
Bibliographic details
Published in: arXiv.org 2024-01
Main authors: Chen, Cheng, Kyathanahally, Sreenath, Reyes, Marta, Merkli, Stefanie, Merz, Ewa, Francazi, Emanuele, Hoege, Marvin, Pomati, Francesco, Baity-Jesi, Marco
Format: Article
Language: eng
Subjects:
Online access: Full text
container_title arXiv.org
creator Chen, Cheng
Kyathanahally, Sreenath
Reyes, Marta
Merkli, Stefanie
Merz, Ewa
Francazi, Emanuele
Hoege, Marvin
Pomati, Francesco
Baity-Jesi, Marco
description Modern plankton high-throughput monitoring relies on deep learning classifiers for species recognition in water ecosystems. Despite satisfactory nominal performances, a significant challenge arises from Dataset Shift, which causes performances to drop during deployment. In our study, we integrate the ZooLake dataset with manually-annotated images from 10 independent days of deployment, serving as test cells to benchmark Out-Of-Dataset (OOD) performances. Our analysis reveals instances where classifiers, initially performing well in In-Dataset conditions, encounter notable failures in practical scenarios. For example, a MobileNet with a 92% nominal test accuracy shows a 77% OOD accuracy. We systematically investigate conditions leading to OOD performance drops and propose a preemptive assessment method to identify potential pitfalls when classifying new data, and pinpoint features in OOD images that adversely impact classification. We present a three-step pipeline: (i) identifying OOD degradation compared to nominal test performance, (ii) conducting a diagnostic analysis of degradation causes, and (iii) providing solutions. We find that ensembles of BEiT vision transformers, with targeted augmentations addressing OOD robustness, geometric ensembling, and rotation-based test-time augmentation, constitute the most robust model, which we call BEsT model. It achieves an 83% OOD accuracy, with errors concentrated on container classes. Moreover, it exhibits lower sensitivity to dataset shift, and reproduces well the plankton abundances. Our proposed pipeline is applicable to generic plankton classifiers, contingent on the availability of suitable test cells. By identifying critical shortcomings and offering practical procedures to fortify models against dataset shift, our study contributes to the development of more reliable plankton classification technologies.
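The robustness recipe named in the description — ensembling with geometric averaging plus rotation-based test-time augmentation — can be sketched as follows. This is an assumption-laden illustration, not the paper's implementation: `models` stands in for any callables mapping an image to a class-probability vector (the actual BEiT ensemble is not reproduced here).

```python
import numpy as np

def geometric_ensemble_tta(models, image, rotations=(0, 1, 2, 3)):
    """Predict class probabilities by geometric averaging over
    ensemble members and 90-degree-rotated views of the image."""
    log_probs = []
    for k in rotations:                       # rotation-based TTA
        view = np.rot90(image, k)
        for model in models:                  # ensemble members
            p = np.clip(model(view), 1e-12, 1.0)  # guard against log(0)
            log_probs.append(np.log(p))
    # geometric mean = exp of the mean of logs, renormalized to sum to 1
    avg = np.exp(np.mean(log_probs, axis=0))
    return avg / avg.sum()
```

Averaging in log space (rather than arithmetically) down-weights predictions on which the ensemble members disagree, which is one plausible reason geometric ensembling helps under dataset shift.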
format Article
fullrecord ProQuest record 2918649098; article/document; Producing Plankton Classifiers that are Robust to Dataset Shift; arXiv.org, 2024-01-25; EISSN: 2331-8422; publisher: Cornell University Library, arXiv.org (Ithaca); rights: 2024, published under http://creativecommons.org/licenses/by/4.0/ (the "License"); open access: free_for_read
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2024-01
issn 2331-8422
language eng
recordid cdi_proquest_journals_2918649098
source Free E-Journals
subjects Accuracy
Classification
Classifiers
Datasets
Degradation
Identification methods
Machine learning
Plankton
Preempting
Robustness
Testing time
title Producing Plankton Classifiers that are Robust to Dataset Shift
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-04T19%3A49%3A22IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Producing%20Plankton%20Classifiers%20that%20are%20Robust%20to%20Dataset%20Shift&rft.jtitle=arXiv.org&rft.au=Chen,%20Cheng&rft.date=2024-01-25&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2918649098%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2918649098&rft_id=info:pmid/&rfr_iscdi=true