Optimal Control and Replacement with State-Dependent Failure Rate: Dynamic Programming

A class of stochastic control problems where the payoff depends on the running maximum of a diffusion process is described. The controller must make two kinds of decision: first, he must choose a work rate (this decision determines the rate of profit as well as the proximity of failure), and second, he must decide when to replace a deteriorated system with a new one. Preventive replacement is a realistic option if the cost for replacement after failure is larger than the cost of a preventive replacement. We focus on the profit and replacement cost for a single work cycle and solve the problem in two stages. First, the optimal feedback control (work rate) is determined by maximizing the payoff during a single excursion of a controlled diffusion away from the running maximum. This step involves the solution of the Hamilton-Jacobi-Bellman (HJB) partial differential equation. The second step is to determine the optimal replacement set. The assumption that failure occurs only on the set where the state is increasing implies that replacement is optimal only on this set. This leads to a simple formula for the optimal replacement level in terms of the value function.
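The two-stage procedure described in the abstract, first computing an optimal work rate through an HJB equation and then reading a replacement rule off the value function, can be illustrated with a small numerical sketch. The Python below is not the paper's method: it assumes toy one-dimensional wear dynamics whose drift equals the work rate, an invented profit rate, hazard function and cost constants, and it ignores the excursion and running-maximum structure of the actual model. It simply solves a generic discounted HJB with a state-dependent failure intensity by explicit finite differences and then looks for the smallest wear level at which preventive replacement beats continuing.

```python
import numpy as np

# Toy sketch only.  It solves a *generic* stationary HJB for a one-dimensional
# controlled diffusion whose drift is the chosen work rate and whose failure
# intensity grows with the wear level, then reads a preventive-replacement
# threshold off the computed value function.  All dynamics, rates and costs
# below are invented for illustration; they are not the paper's model.

rho, sigma = 0.10, 0.30                 # assumed discount rate and noise level
C_fail, C_prev = 5.0, 2.0               # assumed failure / preventive replacement costs
controls = np.linspace(0.1, 1.0, 10)    # admissible work rates u (drift of the wear level)

y = np.linspace(0.0, 3.0, 61)           # wear-level grid
dy = y[1] - y[0]

def profit(u):                          # assumed concave profit rate in the work rate
    return np.sqrt(u)

def hazard(level):                      # assumed failure intensity, increasing in wear
    return 0.5 * level

V = np.zeros_like(y)
dt = 0.01                               # explicit pseudo-time step, CFL-safe for this grid
for _ in range(40000):
    Vp = np.empty_like(V)               # upwind first derivative (drift is nonnegative)
    Vp[:-1] = (V[1:] - V[:-1]) / dy
    Vp[-1] = Vp[-2]
    Vpp = np.zeros_like(V)              # standard three-point second derivative
    Vpp[1:-1] = (V[2:] - 2.0 * V[1:-1] + V[:-2]) / dy**2
    # HJB right-hand side: sup_u {profit(u) + u V'} + 0.5 sigma^2 V'' - hazard(y) (V + C_fail)
    H = np.max([profit(u) + u * Vp for u in controls], axis=0)
    H += 0.5 * sigma**2 * Vpp - hazard(y) * (V + C_fail)
    V_new = V + dt * (H - rho * V)      # relax toward the fixed point rho V = H(V)
    if np.max(np.abs(V_new - V)) < 1e-9:
        V = V_new
        break
    V = V_new

# Stage 2 (crude stand-in for the paper's formula): replace preventively at the
# smallest wear level where continuing is worth no more than paying C_prev and
# restarting from a new system, whose value is approximated by V[0].
replace = np.nonzero(V <= V[0] - C_prev)[0]
print("approximate replacement level:",
      y[replace[0]] if replace.size else "none on this grid")
```

In the paper itself the replacement level comes from a simple formula in terms of the value function rather than from a grid search of this kind.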

Bibliographic Details
Published in: The Annals of Applied Probability, 1993-05, Vol. 3 (2), pp. 364-379
Main authors: Heinricher, Arthur C.; Stockbridge, Richard H.
Format: Article
Language: English
Subjects:
Online access: Full text
container_end_page 379
container_issue 2
container_start_page 364
container_title The Annals of applied probability
container_volume 3
creator Heinricher, Arthur C.
Stockbridge, Richard H.
description A class of stochastic control problems where the payoff depends on the running maximum of a diffusion process is described. The controller must make two kinds of decision: first, he must choose a work rate (this decision determines the rate of profit as well as the proximity of failure), and second, he must decide when to replace a deteriorated system with a new one. Preventive replacement is a realistic option if the cost for replacement after failure is larger than the cost of a preventive replacement. We focus on the profit and replacement cost for a single work cycle and solve the problem in two stages. First, the optimal feedback control (work rate) is determined by maximizing the payoff during a single excursion of a controlled diffusion away from the running maximum. This step involves the solution of the Hamilton-Jacobi-Bellman (HJB) partial differential equation. The second step is to determine the optimal replacement set. The assumption that failure occurs only on the set where the state is increasing implies that replacement is optimal only on this set. This leads to a simple formula for the optimal replacement level in terms of the value function.
doi 10.1214/aoap/1177005429
format Article
fulltext fulltext
identifier ISSN: 1050-5164
ispartof The Annals of applied probability, 1993-05, Vol.3 (2), p.364-379
issn 1050-5164
2168-8737
language eng
recordid cdi_projecteuclid_primary_oai_CULeuclid_euclid_aoap_1177005429
source JSTOR Mathematics & Statistics; JSTOR Archive Collection A-Z Listing; EZB-FREE-00999 freely available EZB journals; Project Euclid Complete
subjects 49B60
49C20
93E20
Boundary conditions
Control theory
Controlled diffusion
Dynamic programming
Feedback control
Integrands
Markov processes
Optimal control
optimal replacement
Polynomials
Replacement value
running maximum
state dependent failure
Stochastic models
stochastic wear models
title Optimal Control and Replacement with State-Dependent Failure Rate: Dynamic Programming