Object-level classification of vegetable crops in 3D LiDAR point cloud using deep learning convolutional neural networks

Crop discrimination at the plant or patch level is vital for modern technology-enabled agriculture. Multispectral and hyperspectral remote sensing data have been widely used for crop classification. Even though spectral data are successful in classifying row-crops and orchards, they are limited in discriminating vegetable and cereal crops at plant or patch level.

Detailed Description

Saved in:
Bibliographic Details
Published in: Precision agriculture 2021-10, Vol.22 (5), p.1617-1633
Main Authors: Jayakumari, Reji, Nidamanuri, Rama Rao, Ramiya, Anandakumar M.
Format: Article
Language: eng
Subjects:
Online Access: Full text
container_end_page 1633
container_issue 5
container_start_page 1617
container_title Precision agriculture
container_volume 22
creator Jayakumari, Reji
Nidamanuri, Rama Rao
Ramiya, Anandakumar M.
description Crop discrimination at the plant or patch level is vital for modern technology-enabled agriculture. Multispectral and hyperspectral remote sensing data have been widely used for crop classification. Even though spectral data are successful in classifying row-crops and orchards, they are limited in discriminating vegetable and cereal crops at plant or patch level. Terrestrial laser scanning is a potential remote sensing approach that offers distinct structural features useful for classification of crops at plant or patch level. The objective of this research is the improvement and application of an advanced deep learning framework for object-based classification of three vegetable crops: cabbage, tomato, and eggplant, using high-resolution LiDAR point clouds. Point clouds from a terrestrial laser scanner (TLS) were acquired over experimental plots of the University of Agricultural Sciences, Bengaluru, India. As part of the methodology, a deep convolutional neural network (CNN) model named CropPointNet was devised for the semantic segmentation of crops from a 3D perspective. CropPointNet is an adaptation of the PointNet deep CNN model developed for the segmentation of indoor objects in a typical computer vision scenario. Apart from adapting it to 3D point cloud segmentation of crops, the significant methodological improvements made in CropPointNet are a random sampling scheme for the training point clouds and an optimization of the network architecture to enable structural attribute-based segmentation of point clouds of unstructured objects such as TLS point clouds of crops. The performance of the 3D crop classification was validated and compared against two popular deep learning architectures: PointNet and the Dynamic Graph-based Convolutional Neural Network (DGCNN). Results indicate consistent plant-level object-based classification of crop point clouds with overall accuracies of 81% or better for all three crops.
The CropPointNet architecture proposed in this research can be generalized for segmentation and classification of other row crops and natural vegetation types.
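The record does not detail the random sampling scheme mentioned in the abstract. As a minimal sketch of how fixed-size sampling of training point clouds is commonly done for PointNet-style inputs (the function name, sample size of 4096, and use of NumPy are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def sample_point_cloud(points, n_samples=4096, seed=0):
    """Randomly sample a fixed number of points from an (n, 3) point cloud.

    PointNet-style networks expect a fixed-size input, so a larger TLS scan
    is down-sampled without replacement, while a smaller one is sampled
    with replacement to reach the target size.
    """
    rng = np.random.default_rng(seed)
    replace = points.shape[0] < n_samples
    idx = rng.choice(points.shape[0], size=n_samples, replace=replace)
    return points[idx]

# Toy cloud of 10,000 XYZ points standing in for one TLS plant scan
cloud = np.random.rand(10_000, 3)
sampled = sample_point_cloud(cloud)
print(sampled.shape)  # (4096, 3)
```

In practice such a sampler would be applied per training object before the points are fed to the network, so every plant or patch contributes an equally sized tensor regardless of the original scan density.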
doi_str_mv 10.1007/s11119-021-09803-0
format Article
publisher New York: Springer US
tpages 17
orcidid https://orcid.org/0000-0003-3930-6595
fulltext fulltext
identifier ISSN: 1385-2256
ispartof Precision agriculture, 2021-10, Vol.22 (5), p.1617-1633
issn 1385-2256
1573-1618
language eng
recordid cdi_proquest_journals_2572357363
source SpringerNature Journals; Web of Science - Science Citation Index Expanded - 2021
subjects Agricultural sciences
Agriculture
Agriculture, Multidisciplinary
Artificial neural networks
Atmospheric Sciences
Biomedical and Life Sciences
Cereal crops
Chemistry and Earth Sciences
Classification
Cloud computing
Computer architecture
Computer Science
Computer vision
Crops
Deep learning
Image segmentation
Laser applications
Lidar
Life Sciences
Life Sciences & Biomedicine
Machine learning
Natural vegetation
Neural networks
Optimization
Orchards
Physics
Random sampling
Remote sensing
Remote Sensing/Photogrammetry
Science & Technology
Soil Science & Conservation
Statistical sampling
Statistics for Engineering
Three dimensional models
Tomatoes
Vegetables
title Object-level classification of vegetable crops in 3D LiDAR point cloud using deep learning convolutional neural networks
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-14T09%3A23%3A05IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_webof&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Object-level%20classification%20of%20vegetable%20crops%20in%203D%20LiDAR%20point%20cloud%20using%20deep%20learning%20convolutional%20neural%20networks&rft.jtitle=Precision%20agriculture&rft.au=Jayakumari,%20Reji&rft.date=2021-10-01&rft.volume=22&rft.issue=5&rft.spage=1617&rft.epage=1633&rft.pages=1617-1633&rft.issn=1385-2256&rft.eissn=1573-1618&rft_id=info:doi/10.1007/s11119-021-09803-0&rft_dat=%3Cproquest_webof%3E2572357363%3C/proquest_webof%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2572357363&rft_id=info:pmid/&rfr_iscdi=true