Not as simple as we thought: a rigorous examination of data aggregation in materials informatics

Recent Machine Learning (ML) developments have opened new perspectives on accelerating the discovery of new materials. However, in the field of materials informatics, the performance of ML estimators is heavily limited by the nature of the available training datasets, which are often severely restricted and unbalanced.

Full description

Saved in:
Bibliographic Details
Published in: Digital discovery 2024-02, Vol.3 (2), p.337-346
Main Authors: Ottomano, Federico, De Felice, Giovanni, Gusev, Vladimir V, Sparks, Taylor D
Format: Article
Language: English
Online Access: Full text
description Recent Machine Learning (ML) developments have opened new perspectives on accelerating the discovery of new materials. However, in the field of materials informatics, the performance of ML estimators is heavily limited by the nature of the available training datasets, which are often severely restricted and unbalanced. Among practitioners, it is usually taken for granted that more data corresponds to better performance. Here, we investigate whether different ML models for property predictions benefit from the aggregation of large databases into smaller repositories. To do this, we probe three different aggregation strategies prioritizing training size, element diversity, and composition diversity. For classic ML models, our results consistently show a reduction in performance under all the considered strategies. Deep Learning models show more robustness, but most changes are not significant. Furthermore, to assess whether this is a consequence of a distribution mismatch between datasets, we simulate the data acquisition process of a single dataset and compare a random selection with prioritizing chemical diversity. We observe that prioritizing composition diversity generally leads to a slower convergence toward better accuracy. Overall, our results suggest caution when merging different data sources and discourage a biased acquisition of novel chemistries when building a training dataset. Prompted by limited available data, we explore data-aggregation strategies for material datasets, aiming to boost machine learning performance. Our findings suggest that intuitive aggregation schemes are ineffective in enhancing predictive accuracy.
doi_str_mv 10.1039/d3dd00207a
date 2024-02-14
eissn 2635-098X
orcidid 0009-0009-9005-5948; 0000-0001-8020-7711; 0000-0002-2815-607X; 0000-0002-8550-718X
identifier ISSN: 2635-098X
source DOAJ Directory of Open Access Journals