Employing Deep Reinforcement Learning to Maximize Lower Limb Blood Flow Using Intermittent Pneumatic Compression
Saved in:
Published in: | IEEE journal of biomedical and health informatics 2024-10, Vol.28 (10), p.6193-6200 |
Main authors: | Santelices, Iara B. ; Landry, Cederick ; Arami, Arash ; Peterson, Sean D. |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 6200 |
container_issue | 10 |
container_start_page | 6193 |
container_title | IEEE journal of biomedical and health informatics |
container_volume | 28 |
creator | Santelices, Iara B. ; Landry, Cederick ; Arami, Arash ; Peterson, Sean D. |
description | Intermittent pneumatic compression (IPC) systems apply external pressure to the lower limbs and enhance peripheral blood flow. We previously introduced a cardiac-gated compression system that enhanced arterial blood velocity (BV) in the lower limb compared to fixed compression timing (CT) for seated and standing subjects. However, these pilot studies found that the CT that maximized BV was not constant across individuals and could change over time. Current CT modelling methods for IPC are limited to predictions for a single day and one heartbeat ahead. However, IPC therapy may span weeks or longer, the BV response to compression can vary with physiological state, and the best CT for eliciting the desired physiological outcome may change, even for the same individual. We propose that a deep reinforcement learning (DRL) algorithm can learn and adaptively modify CT to achieve a selected outcome using IPC. Herein, we target maximizing lower limb arterial BV as the desired outcome and build participant-specific simulated lower limb environments for 6 participants. We show that DRL can adaptively learn the CT for IPC that maximizes arterial BV. Compared to previous work, the DRL agent achieves 98 ± 2% of the resultant blood flow and is faster at maximizing BV; the DRL agent can learn an "optimal" policy in 15 ± 2 minutes on average and can adapt on the fly. Given a desired objective, we posit that the proposed DRL agent can be implemented in IPC systems to rapidly learn the (potentially time-varying) "optimal" CT with a human-in-the-loop. |
doi_str_mv | 10.1109/JBHI.2024.3423698 |
format | Article |
publisher | United States: IEEE |
pmid | 38968016 |
eissn | 2168-2208 |
coden | IJBHA9 |
orcidid | 0000-0001-5941-4572 ; 0000-0001-8746-2491 ; 0000-0001-7609-6553 ; 0009-0004-9843-8349 |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2168-2194 |
ispartof | IEEE journal of biomedical and health informatics, 2024-10, Vol.28 (10), p.6193-6200 |
issn | 2168-2194 ; 2168-2208 |
language | eng |
recordid | cdi_crossref_primary_10_1109_JBHI_2024_3423698 |
source | IEEE Electronic Library (IEL) |
subjects | Adult ; Algorithms ; Blood ; Blood flow ; Blood Flow Velocity - physiology ; cardiac gating ; Deep Learning ; deep reinforcement learning ; Electrocardiography ; Estimation ; Female ; Heart beat ; Humans ; Intermittent pneumatic compression ; Intermittent Pneumatic Compression Devices ; Lower Extremity - blood supply ; Lower Extremity - physiology ; Male ; Physiology ; Predictive models ; Young Adult |
title | Employing Deep Reinforcement Learning to Maximize Lower Limb Blood Flow Using Intermittent Pneumatic Compression |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-31T21%3A17%3A39IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Employing%20Deep%20Reinforcement%20Learning%20to%20Maximize%20Lower%20Limb%20Blood%20Flow%20Using%20Intermittent%20Pneumatic%20Compression&rft.jtitle=IEEE%20journal%20of%20biomedical%20and%20health%20informatics&rft.au=Santelices,%20Iara%20B.&rft.date=2024-10&rft.volume=28&rft.issue=10&rft.spage=6193&rft.epage=6200&rft.pages=6193-6200&rft.issn=2168-2194&rft.eissn=2168-2208&rft.coden=IJBHA9&rft_id=info:doi/10.1109/JBHI.2024.3423698&rft_dat=%3Cproquest_RIE%3E3076287231%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3076287231&rft_id=info:pmid/38968016&rft_ieee_id=10587076&rfr_iscdi=true |