Neural Encoding and Decoding With Distributed Sentence Representations

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2021-02, Vol. 32 (2), pp. 589-603
Authors: Sun, Jingyuan; Wang, Shaonan; Zhang, Jiajun; Zong, Chengqing
Format: Article
Language: English
Online Access: Order full text
Abstract: Building computational models to account for the cortical representation of language plays an important role in understanding the human linguistic system. Recent progress in distributed semantic models (DSMs), especially transformer-based methods, has driven advances in many language understanding tasks, making DSMs a promising methodology for probing brain language processing. DSMs have been shown to reliably explain cortical responses to word stimuli. However, brain activity during sentence processing has been much less thoroughly explored with DSMs, especially deep neural network-based methods. How do cortical sentence representations relate to those of DSMs? Which linguistic features captured by a DSM best explain its correlation with the brain activity evoked by sentence stimuli? Could distributed sentence representations help reveal the semantic selectivity of different brain areas? We address these questions through the lens of neural encoding and decoding, fueled by the latest developments in natural language representation learning. We begin by evaluating the ability of a wide range of 12 DSMs to predict and decipher functional magnetic resonance imaging (fMRI) images from humans reading sentences. Most models deliver high accuracy in the left middle temporal gyrus (LMTG) and left occipital complex (LOC). Notably, encoders trained with transformer-based DSMs consistently outperform other unsupervised structured models and all the unstructured baselines. With probing and ablation tasks, we further find that differences in the performance of DSMs in modeling brain activity can be at least partially explained by the granularity of their semantic representations. We also illustrate each DSM's selectivity for concept categories and show that topics are represented by spatially overlapping and distributed cortical patterns.
Our results corroborate and extend previous findings on the relation between DSMs and neural activation patterns and contribute to building solid brain-machine interfaces with deep neural network representations.
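The encoding analysis the abstract describes — fitting a model that maps distributed sentence representations to voxel responses and then scoring how well it predicts held-out fMRI data — can be sketched as a ridge regression. Everything below (shapes, the simulated data, variable names) is an illustrative assumption, not the authors' actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 500 sentences, 100-dim sentence embeddings, 1000 voxels.
n_sent, n_dim, n_vox = 500, 100, 1000
X = rng.standard_normal((n_sent, n_dim))      # DSM sentence representations
W_true = rng.standard_normal((n_dim, n_vox))  # hypothetical "true" voxel weights
Y = X @ W_true + 0.5 * rng.standard_normal((n_sent, n_vox))  # simulated voxel responses

# Hold out the last 100 sentences for evaluation.
X_tr, X_te = X[:400], X[400:]
Y_tr, Y_te = Y[:400], Y[400:]

# Ridge-regression encoder: W = (X'X + lam*I)^-1 X'Y, one weight column per voxel.
lam = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_dim), X_tr.T @ Y_tr)
Y_hat = X_te @ W

# Voxel-wise Pearson correlation between predicted and observed responses.
def voxelwise_corr(a, b):
    az = (a - a.mean(axis=0)) / a.std(axis=0)
    bz = (b - b.mean(axis=0)) / b.std(axis=0)
    return (az * bz).mean(axis=0)

r = voxelwise_corr(Y_hat, Y_te)
print(f"mean voxel-wise prediction correlation: {r.mean():.3f}")
```

In the paper's setting, X would come from one of the 12 DSMs and Y from the fMRI scans; comparing prediction accuracy across DSMs then ranks how well each model explains the recorded activity.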
DOI: 10.1109/TNNLS.2020.3027595
PMID: 33052868
ISSN: 2162-237X
EISSN: 2162-2388
Source: IEEE Electronic Library (IEL)
Subjects:
Ablation
Algorithms
Artificial neural networks
Brain
Brain - diagnostic imaging
Brain mapping
Brain modeling
Brain-Computer Interfaces
Brain–machine interfaces
Cerebral Cortex - anatomy & histology
Cerebral Cortex - physiology
Coders
Computational neuroscience
Computer Simulation
Decoding
Deep Learning
distributed semantic representations
Encoding
Functional magnetic resonance imaging
Humans
Image Processing, Computer-Assisted
Interfaces
Language
Linguistics
Machine learning
Magnetic Resonance Imaging
Model accuracy
Natural Language Processing
Neural coding
neural decoding
neural encoding
Neural networks
Neural Networks, Computer
Neuroimaging
Occipital Lobe - diagnostic imaging
Reading
Representations
Reproducibility of Results
Selectivity
Semantics
Sentences
Stimuli
Task analysis
Temporal gyrus
Temporal Lobe - diagnostic imaging
Transformers