Time-average optimal constrained semi-Markov decision processes

Bibliographic Details
Published in: Advances in Applied Probability, 1986-06, Vol. 18 (2), pp. 341-359
Authors: Beutler, Frederick J.; Ross, Keith W.
Format: Article
Language: English
Online access: Full text

Abstract

Optimal causal policies maximizing the time-average reward over a semi-Markov decision process (SMDP), subject to a hard constraint on a time-average cost, are considered. Rewards and costs depend on the state and action, and contain running as well as switching components. The state space of the SMDP is assumed to be finite, and the action space compact metric. The policy determines an action at each transition point of the SMDP. Under an accessibility hypothesis, several notions of time average are equivalent. A Lagrange multiplier formulation involving a dynamic programming equation relates the constrained optimization to an unconstrained optimization parametrized by the multiplier. This approach yields a proof of the existence of a semi-simple optimal constrained policy: there is at most one state at which the action is randomized between two possibilities, and at every other state the action is chosen deterministically. Affine forms for the rewards, costs and transition probabilities further reduce the optimal constrained policy to ‘almost bang-bang’ form, in which the optimal policy is not randomized, and is bang-bang except perhaps at one state. Under the same assumptions, one can alternatively find an optimal constrained policy that is strictly bang-bang but may be randomized at one state. Application is made to flow control of a birth-and-death process (e.g., an M/M/s queue); under certain monotonicity restrictions on the reward and cost structure the preceding results apply, and in addition there is a simple acceptance region.
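To make the Lagrangian construction concrete, here is a minimal numerical sketch. It is not the paper's method: Beutler and Ross handle SMDPs with compact action spaces through a dynamic-programming equation, whereas this toy works with an invented discrete-time unichain MDP (3 states, 2 actions) solved by brute-force policy enumeration. It bisects on the multiplier to bracket the constraint between two deterministic policies, then randomizes between their actions at a single state, mirroring the semi-simple optimal policy whose existence the paper establishes. All transition probabilities, rewards, costs, and the bound C_MAX below are assumptions made up for illustration.

```python
# A minimal sketch, NOT the paper's construction: the paper treats SMDPs
# with compact action spaces via a dynamic-programming equation; this toy
# uses a discrete-time unichain MDP with two actions and brute-force
# policy enumeration.  All numbers are invented.
import itertools

import numpy as np

N_STATES, N_ACTIONS = 3, 2

# P[a, s, t] = probability of moving from state s to t under action a.
P = np.array([
    [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.1, 0.4, 0.5]],   # action 0 (cheap, low reward)
    [[0.2, 0.5, 0.3],
     [0.1, 0.3, 0.6],
     [0.3, 0.3, 0.4]],   # action 1 (costly, high reward)
])
r = np.array([[1.0, 3.0], [0.5, 2.5], [0.2, 2.0]])  # reward r[s, a]
c = np.array([[0.1, 1.2], [0.2, 1.5], [0.1, 1.8]])  # cost   c[s, a]
C_MAX = 0.9  # hard bound on the time-average cost


def stationary(P_pi):
    """Stationary distribution of an ergodic chain (least-squares solve)."""
    n = P_pi.shape[0]
    A = np.vstack([P_pi.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]


def averages(probs):
    """Time-average (reward, cost); probs[s, a] = P(choose a in state s)."""
    P_pi = np.einsum('sa,ast->st', probs, P)
    pi = stationary(P_pi)
    return pi @ (probs * r).sum(axis=1), pi @ (probs * c).sum(axis=1)


def one_hot(det):
    """Action probabilities of a deterministic policy (tuple of actions)."""
    probs = np.zeros((N_STATES, N_ACTIONS))
    probs[np.arange(N_STATES), np.array(det)] = 1.0
    return probs


DET = list(itertools.product(range(N_ACTIONS), repeat=N_STATES))
VALS = {d: averages(one_hot(d)) for d in DET}


def lagrange_best(beta):
    """Unconstrained problem parametrized by the multiplier beta."""
    return max(DET, key=lambda d: VALS[d][0] - beta * VALS[d][1])


# Bisect on the multiplier: a larger beta penalizes cost more heavily,
# pushing the unconstrained optimum toward feasibility.
lo, hi = 0.0, 1e3
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if VALS[lagrange_best(mid)][1] > C_MAX else (lo, mid)
p_feas, p_over = lagrange_best(hi), lagrange_best(lo)

diff = [s for s in range(N_STATES) if p_feas[s] != p_over[s]]
if not diff:
    print('constraint not binding; optimal policy:', p_feas, VALS[p_feas])
else:
    # Semi-simple policy: randomize at a single state between the two
    # boundary actions.  (The paper shows one such state suffices; if the
    # two policies differ at several states this sketch just uses the first.)
    s_star = diff[0]

    def mixed(q):
        probs = one_hot(p_feas)
        probs[s_star] = 0.0
        probs[s_star, p_feas[s_star]] = 1.0 - q
        probs[s_star, p_over[s_star]] = q
        return probs

    # The average cost is continuous in q, so bisect q to meet the bound.
    q_lo, q_hi = 0.0, 1.0
    for _ in range(60):
        q = 0.5 * (q_lo + q_hi)
        q_lo, q_hi = (q, q_hi) if averages(mixed(q))[1] <= C_MAX else (q_lo, q)
    rbar, cbar = averages(mixed(q_lo))
    print(f'randomize at state {s_star} with prob {q_lo:.4f}: '
          f'avg reward {rbar:.4f}, avg cost {cbar:.4f}')
```

In the paper's flow-control application the two actions would correspond to accepting or rejecting arrivals in a birth-and-death chain, and the ‘almost bang-bang’ result says the optimal admission rule can be taken deterministic at every state except perhaps one.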
DOI: 10.2307/1427303
ISSN: 0001-8678
EISSN: 1475-6064
Source: JSTOR Complete Legacy; JSTOR Mathematics & Statistics
Subjects: Applied sciences; Average cost; Bang-bang controls; Conditional probabilities; Constrained optimization; Exact sciences and technology; Logical proofs; Markov chains; Markov processes; Mathematical programming; Operational research and scientific management; Operational research. Management science; Optimal policy; Random allocation; Transition probabilities