Differential Feature Fusion, Triplet Global Attention, and Web Semantic for Pedestrian Detection
In complex environments and crowded pedestrian scenes, the overlap or loss of local features is a pressing issue. However, existing methods often struggle to strike a balance between eliminating interfering features and establishing feature connections. To address this challenge, we introduce a novel pedestrian detection approach called Differential Feature Fusion under Triplet Global Attention (DFFTGA). This method merges feature maps of the same size from different stages to introduce richer feature information. Specifically, we introduce a pixel-level Triplet Global Attention (TGA) module to enhance feature representation and perceptual range. Additionally, we introduce a Differential Feature Fusion (DFF) module, which optimizes features between similar nodes for filtering. This series of operations helps the model focus more on discriminative features, ultimately improving pedestrian detection performance. Compared to benchmarks, we achieve significant improvements and demonstrate outstanding performance on datasets such as CityPersons and CrowdHuman.
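The record does not include the paper's actual module definitions, so the following is only a rough, hypothetical NumPy sketch of the two ideas named in the abstract: pooling a feature map along each of its three axes to build a pixel-level attention gate (the TGA idea), and fusing two same-sized stage feature maps through a gate driven by their difference (the DFF idea). All function names and the gating formulas are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def triplet_global_attention(x):
    """Hypothetical pixel-level triplet attention: pool a (C, H, W)
    feature map along each of its three axes, broadcast the three
    pooled maps back to full size, and average them into a gate."""
    c_pool = x.mean(axis=0, keepdims=True)   # (1, H, W): cross-channel context
    h_pool = x.mean(axis=1, keepdims=True)   # (C, 1, W): height context
    w_pool = x.mean(axis=2, keepdims=True)   # (C, H, 1): width context
    gate = sigmoid((c_pool + h_pool + w_pool) / 3.0)  # broadcasts to (C, H, W)
    return x * gate

def differential_feature_fusion(a, b):
    """Hypothetical fusion of two same-sized stage features: |a - b|
    highlights where the stages disagree, and that difference gates
    the summed features, damping redundant responses."""
    diff_gate = sigmoid(np.abs(a - b))
    return (a + b) * diff_gate

# Toy same-sized "stage" feature maps standing in for backbone outputs.
rng = np.random.default_rng(0)
a = rng.standard_normal((8, 4, 4))
b = rng.standard_normal((8, 4, 4))
fused = differential_feature_fusion(triplet_global_attention(a),
                                    triplet_global_attention(b))
print(fused.shape)  # same (C, H, W) shape as the inputs
```

The sketch only illustrates the data flow the abstract describes (attend per pixel, then fuse same-sized maps via their difference); the paper's actual modules are learned layers inside a detection network.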
Published in: | International journal on semantic web and information systems 2024-01, Vol.20 (1), p.1-18 |
---|---|
Main authors: | Tao, Sha; Wang, Zhenfeng |
Format: | Article |
Language: | English |
Subjects: | Accuracy; Feature maps; Information systems; Methods; Modules; Pedestrians; Semantic web; Semantics |
Online access: | Full text |
container_end_page | 18 |
---|---|
container_issue | 1 |
container_start_page | 1 |
container_title | International journal on semantic web and information systems |
container_volume | 20 |
creator | Tao, Sha; Wang, Zhenfeng |
description | In complex environments and crowded pedestrian scenes, the overlap or loss of local features is a pressing issue. However, existing methods often struggle to strike a balance between eliminating interfering features and establishing feature connections. To address this challenge, we introduce a novel pedestrian detection approach called Differential Feature Fusion under Triplet Global Attention (DFFTGA). This method merges feature maps of the same size from different stages to introduce richer feature information. Specifically, we introduce a pixel-level Triplet Global Attention (TGA) module to enhance feature representation and perceptual range. Additionally, we introduce a Differential Feature Fusion (DFF) module, which optimizes features between similar nodes for filtering. This series of operations helps the model focus more on discriminative features, ultimately improving pedestrian detection performance. Compared to benchmarks, we achieve significant improvements and demonstrate outstanding performance on datasets such as CityPersons and CrowdHuman. |
doi_str_mv | 10.4018/IJSWIS.345651 |
format | Article |
publisher | Hershey: IGI Global |
rights | © 2024 IGI Global. Published under the Creative Commons CC BY 4.0 license. |
fulltext | fulltext |
identifier | ISSN: 1552-6283 |
ispartof | International journal on semantic web and information systems, 2024-01, Vol.20 (1), p.1-18 |
issn | 1552-6283; 1552-6291 (eISSN) |
language | eng |
recordid | cdi_proquest_journals_3082623287 |
source | Alma/SFX Local Collection; ProQuest Central |
subjects | Accuracy; Feature maps; Information systems; Methods; Modules; Pedestrians; Semantic web; Semantics |
title | Differential Feature Fusion, Triplet Global Attention, and Web Semantic for Pedestrian Detection |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-28T07%3A22%3A23IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_proqu&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Differential%20Feature%20Fusion,%20Triplet%20Global%20Attention,%20and%20Web%20Semantic%20for%20Pedestrian%20Detection&rft.jtitle=International%20journal%20on%20semantic%20web%20and%20information%20systems&rft.au=Tao,%20Sha&rft.date=2024-01-01&rft.volume=20&rft.issue=1&rft.spage=1&rft.epage=18&rft.pages=1-18&rft.issn=1552-6283&rft.eissn=1552-6291&rft_id=info:doi/10.4018/IJSWIS.345651&rft_dat=%3Cgale_proqu%3EA820985266%3C/gale_proqu%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3082623287&rft_id=info:pmid/&rft_galeid=A820985266&rfr_iscdi=true |