Optimal Semantic-aware Sampling and Transmission in Energy Harvesting Systems Through the AoII

We study a real-time tracking problem in an energy harvesting status update system with a Markov source and an imperfect channel, considering both sampling and transmission costs. The problem's primary challenge stems from the non-observability of the source due to the sampling cost. By using the age of incorrect information (AoII) as a semantic-aware performance metric, our main goal is to find an optimal policy that minimizes the time average AoII subject to an energy-causality constraint. To this end, a stochastic optimization problem is formulated and solved by modeling it as a partially observable Markov decision process (POMDP). More specifically, to solve the main problem, we use the notion of a belief state and cast the problem as a belief MDP problem. Then, for the perfect channel setup, we effectively truncate the corresponding belief space and solve the MDP problem using the relative value iteration method. For the general setup, a deep reinforcement learning policy is proposed. The simulation results show the efficacy of the derived policies in comparison to an AoI-optimal policy and an opportunistic baseline policy.
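As an illustrative aside (not part of the record itself), the age of incorrect information (AoII) metric named in the abstract can be demonstrated with a short simulation. The sketch below is a minimal, hypothetical model: a binary symmetric Markov source with flip probability `p_flip`, a random sampling policy with probability `p_sample`, and a perfect, cost-free channel. None of these parameter names or modeling choices come from the paper; they are assumptions for illustration only.

```python
import random

# Hypothetical simulator: AoII of a binary Markov source under a
# fixed-probability sampling policy. AoII increments every slot in which
# the receiver's estimate differs from the source and resets to zero
# once they agree again.
def simulate_aoii(p_flip=0.1, p_sample=0.3, horizon=10_000, seed=0):
    rng = random.Random(seed)
    source, estimate = 0, 0
    aoii, total = 0, 0
    for _ in range(horizon):
        if rng.random() < p_flip:      # Markov source transition
            source ^= 1
        if rng.random() < p_sample:    # sample and (perfectly) deliver
            estimate = source
        aoii = 0 if estimate == source else aoii + 1
        total += aoii
    return total / horizon             # time-average AoII
```

With `p_sample=1.0` the estimate always matches the source, so the average AoII is zero; lowering `p_sample` lets the AoII grow, mimicking the sampling-cost trade-off the paper studies.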

Detailed description

Saved in:
Bibliographic details
Main authors: Zakeri, Abolfazl, Moltafet, Mohammad, Codreanu, Marian
Format: Article
Language: eng
Online access: order full text
creator Zakeri, Abolfazl; Moltafet, Mohammad; Codreanu, Marian
description We study a real-time tracking problem in an energy harvesting status update system with a Markov source and an imperfect channel, considering both sampling and transmission costs. The problem's primary challenge stems from the non-observability of the source due to the sampling cost. By using the age of incorrect information (AoII) as a semantic-aware performance metric, our main goal is to find an optimal policy that minimizes the time average AoII subject to an energy-causality constraint. To this end, a stochastic optimization problem is formulated and solved by modeling it as a partially observable Markov decision process (POMDP). More specifically, to solve the main problem, we use the notion of a belief state and cast the problem as a belief MDP problem. Then, for the perfect channel setup, we effectively truncate the corresponding belief space and solve the MDP problem using the relative value iteration method. For the general setup, a deep reinforcement learning policy is proposed. The simulation results show the efficacy of the derived policies in comparison to an AoI-optimal policy and an opportunistic baseline policy.
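The relative value iteration method named in the description can be sketched on a toy average-cost MDP. The implementation below is a generic textbook-style version, not the paper's truncated belief-MDP construction, and the toy transition matrices and costs at the bottom are invented purely for illustration.

```python
import numpy as np

# Minimal relative value iteration for an average-cost MDP.
# P[a] is the transition matrix under action a; c[s, a] is the stage cost.
# Returns the optimal average cost g and the differential values h,
# normalized so that h at a reference state (state 0) is zero.
def relative_value_iteration(P, c, tol=1e-9, max_iter=10_000):
    n_states = c.shape[0]
    h = np.zeros(n_states)
    g = 0.0
    for _ in range(max_iter):
        # Bellman backup: Q(s, a) = c(s, a) + sum_s' P[a][s, s'] * h(s')
        Q = c + np.stack([P[a] @ h for a in range(len(P))], axis=1)
        h_new = Q.min(axis=1)
        g = h_new[0]            # subtract the reference state's value
        h_new = h_new - g
        if np.max(np.abs(h_new - h)) < tol:
            return g, h_new
        h = h_new
    return g, h

# Made-up toy: state 1 costs 1 per slot; action 0 stays put,
# action 1 jumps to state 0 at cost 0.5.
P = [np.eye(2), np.array([[1.0, 0.0], [1.0, 0.0]])]
c = np.array([[0.0, 0.5], [1.0, 0.5]])
g, h = relative_value_iteration(P, c)
```

On this toy problem the optimal policy stays in state 0 forever, so the average cost converges to zero; the paper applies the same iteration to the belief states obtained by truncating the belief space of its POMDP.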
doi_str_mv 10.48550/arxiv.2304.00875
format Article
identifier DOI: 10.48550/arxiv.2304.00875
language eng
recordid cdi_arxiv_primary_2304_00875
source arXiv.org
title Optimal Semantic-aware Sampling and Transmission in Energy Harvesting Systems Through the AoII