Multi-modal fusion in ergonomic health: bridging visual and pressure for sitting posture detection
As the tension between the pursuit of health and increasingly long sedentary office hours intensifies, maintaining correct sitting posture while working has attracted growing attention in recent years. Scientific studies have shown that correcting sitting posture plays a positive role in alleviating physical pain. With the rapid development of artificial intelligence, much research has shifted toward sitting posture detection and recognition systems built on machine learning. In this paper, we introduce an innovative sitting posture recognition system that integrates visual and pressure modalities. The system employs a differentiated pre-training strategy for the two modality-specific models and features a feature fusion module designed based on feed-forward networks. It collects visual data from the built-in cameras commonly available in laptops and pressure data from thin-film pressure sensor mats in office scenarios. The system achieved an F1-Macro score of 95.43% on a dataset with complex composite actions, an improvement of 7.13% and 10.79% over systems relying solely on the pressure or visual modality, respectively, and of 7.07% over a system using a uniform pre-training strategy.
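The record does not reproduce the paper's exact fusion architecture, so the following is only a minimal sketch of what a feed-forward fusion module over two separately pre-trained encoders can look like. The dimensions, the nine-class output, and all module names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a bimodal feed-forward fusion head (PyTorch).
# Assumptions (not from the paper): 512-d visual features, 128-d
# pressure features, 9 posture classes, and plain concatenation
# followed by a feed-forward network.
import torch
import torch.nn as nn

class FeedForwardFusion(nn.Module):
    def __init__(self, visual_dim=512, pressure_dim=128,
                 hidden_dim=256, num_classes=9):
        super().__init__()
        # Feed-forward network applied to the concatenated features.
        self.fusion = nn.Sequential(
            nn.Linear(visual_dim + pressure_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, visual_feat, pressure_feat):
        # visual_feat:   (batch, visual_dim)  from a pre-trained visual encoder
        # pressure_feat: (batch, pressure_dim) from a pre-trained pressure encoder
        fused = torch.cat([visual_feat, pressure_feat], dim=-1)
        return self.fusion(fused)

# Example usage with random tensors standing in for encoder outputs.
model = FeedForwardFusion()
logits = model(torch.randn(4, 512), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 9])
```

In a differentiated pre-training setup along the lines the abstract describes, each modality encoder would be pre-trained with its own strategy before the fusion head is trained on their concatenated features.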
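The headline metric is F1-Macro: the unweighted mean of per-class F1 scores, which weights rare and common posture classes equally. A toy illustration with scikit-learn (the labels below are made up, not from the paper's dataset):

```python
# Macro-F1 averages per-class F1 with equal weight per class,
# so minority posture classes count as much as majority ones.
from sklearn.metrics import f1_score

y_true = [0, 0, 1, 1, 2, 2, 2, 1]   # toy ground-truth posture labels
y_pred = [0, 1, 1, 1, 2, 2, 0, 1]   # toy predictions
print(f1_score(y_true, y_pred, average="macro"))  # ~0.719
```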
Published in: CCF Transactions on Pervasive Computing and Interaction (Online), 2024-12, Vol. 6 (4), p. 380-393
Authors: Quan, Qinxiao; Gao, Yang; Bai, Yang; Jin, Zhanpeng
Format: Article
Language: English
Publisher: Springer Nature B.V.
Online access: Full text
DOI: 10.1007/s42486-024-00164-x
ISSN: 2524-521X
EISSN: 2524-5228
Source: SpringerLink Journals
Subjects: Accuracy; Algorithms; Artificial intelligence; Cameras; Classification; Data processing; Machine learning; Monitoring systems; Neural networks; Posture; Pressure sensors; R&D; Research & development; Sedentary behavior; Sensors; Thin films