A Deep Learning Approach for Segmentation, Classification, and Visualization of 3-D High-Frequency Ultrasound Images of Mouse Embryos

Segmentation and mutant classification of high-frequency ultrasound (HFU) mouse embryo brain ventricle (BV) and body images can provide valuable information for developmental biologists. However, manual segmentation and identification of the BV and body require substantial time and expertise. This article proposes an accurate, efficient, and explainable deep learning pipeline for automatic segmentation and classification of the BV and body.

Detailed Description

Saved in:
Bibliographic Details
Published in: IEEE transactions on ultrasonics, ferroelectrics, and frequency control, 2021-07, Vol.68 (7), p.2460-2471
Main Authors: Qiu, Ziming, Xu, Tongda, Langerman, Jack, Das, William, Wang, Chuiyu, Nair, Nitin, Aristizabal, Orlando, Mamou, Jonathan, Turnbull, Daniel H., Ketterling, Jeffrey A., Wang, Yao
Format: Article
Language: English
Subjects:
Online Access: Order full text
Description: Segmentation and mutant classification of high-frequency ultrasound (HFU) mouse embryo brain ventricle (BV) and body images can provide valuable information for developmental biologists. However, manual segmentation and identification of the BV and body require substantial time and expertise. This article proposes an accurate, efficient, and explainable deep learning pipeline for automatic segmentation and classification of the BV and body. For segmentation, a two-stage framework is implemented. The first stage produces a low-resolution segmentation map, which is then used to crop a region of interest (ROI) around the target object and serves as the probability map of the autocontext input for the second-stage fine-resolution refinement network. The segmentation then becomes tractable on high-resolution 3-D images without time-consuming sliding windows. The proposed segmentation method significantly reduces inference time (from 102.36 to 0.09 s per volume, ≈1000× faster) while maintaining high accuracy comparable to previous sliding-window approaches. Based on the BV and body segmentation map, a volumetric convolutional neural network (CNN) is trained to perform a mutant classification task. By backpropagating the gradients of the predictions to the input BV and body segmentation map, the trained classifier is found to focus largely on the region where the Engrailed-1 (En1) mutation phenotype is known to manifest itself. This suggests that gradient backpropagation of deep learning classifiers may provide a powerful tool for automatically detecting unknown phenotypes associated with a known genetic mutation.
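The coarse-to-fine step described above — predict at low resolution, then crop a full-resolution ROI around the detected object — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `margin` parameter and exact bounding-box rule are assumptions, and the upsampled probability map that the refinement network also receives as an autocontext channel is omitted here.

```python
import numpy as np

def crop_roi_from_coarse_mask(volume, coarse_mask, scale, margin=0):
    """Map a low-resolution binary mask back to full resolution and crop
    the bounding box of the detected object (plus an optional margin).

    volume      -- full-resolution 3-D array
    coarse_mask -- binary mask predicted at 1/scale of the resolution
    scale       -- integer downsampling factor between the two grids
    margin      -- extra voxels kept around the object (hypothetical knob)
    """
    coords = np.argwhere(coarse_mask)               # object voxels, low res
    lo = coords.min(axis=0) * scale - margin        # bounding box, full res
    hi = (coords.max(axis=0) + 1) * scale + margin
    lo = np.maximum(lo, 0)                          # clip to volume bounds
    hi = np.minimum(hi, volume.shape)
    slices = tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))
    return volume[slices], slices
```

The point of the crop is that the second-stage network then runs once on a small sub-volume instead of sliding a window over the whole high-resolution image.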
DOI: 10.1109/TUFFC.2021.3068156
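The visualization idea in the description — backpropagating the prediction's gradient to the input to see which voxels the classifier relies on — can be illustrated on a deliberately tiny model. The sketch below uses a hand-derived gradient for a logistic classifier rather than the paper's volumetric CNN; the function names are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_saliency(x, w, b):
    """Gradient of a logistic classifier's prediction w.r.t. its input.

    For p = sigmoid(w . x + b), the chain rule gives dp/dx = p * (1 - p) * w,
    so input positions with large |dp/dx| are the ones the prediction is
    most sensitive to -- the same reading the article applies per voxel.
    """
    p = sigmoid(np.dot(w, x) + b)
    return p * (1.0 - p) * w
```

For a deep network the same quantity is obtained by automatic differentiation instead of a closed form, and the resulting per-voxel magnitudes are rendered as a saliency map.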
ISSN: 0885-3010
EISSN: 1525-8955
PMID: 33755564
Source: IEEE Electronic Library (IEL)
Subjects:
Animals
Artificial neural networks
Back propagation
Back propagation networks
Biomedical imaging
Classification
Classification and visualization
Classifiers
Deep Learning
Embryo
Embryos
high-frequency ultrasound (HFU)
Image classification
Image Processing, Computer-Assisted
Image resolution
Image segmentation
Imaging, Three-Dimensional
Location awareness
Machine learning
Mice
mouse embryo
Mutation
Neural Networks, Computer
Sliding
Three-dimensional displays
Ultrasonic imaging
Ultrasonography
Windows (intervals)