Gantry Work Cell Scheduling through Reinforcement Learning with Knowledge-guided Reward Setting

In this paper, a manufacturing work cell utilizing gantries to move between machines for loading and unloading materials/parts is considered. The production performance of the gantry work cell highly depends on the gantry movements in real operation. This paper formulates the gantry scheduling problem as a reinforcement learning problem, in which an optimal gantry moving policy is solved to maximize the system output. The problem is carried out by the Q-learning algorithm. The gantry system is analyzed and its real-time performance is evaluated by permanent production loss and production loss risk, which provide a theoretical base for defining reward function in the Q-learning algorithm. A numerical study is performed to demonstrate the effectiveness of the proposed policy by comparing with the first-come-first-served policy.
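The abstract pairs tabular Q-learning with a reward derived from production-loss measures. As a rough illustration only (not the authors' model), the sketch below trains Q-learning on an invented two-machine toy cell: the state is the gantry position plus which machine, if any, is waiting for service, and the reward charges a production-loss penalty for every step a request goes unserved. All dynamics, penalty values, and hyperparameters here are assumptions made for the example.

```python
import random

# Toy gantry cell: 2 machines, one gantry.
# State: (gantry position, machine needing service) with need in {0, 1, None}.
# Actions: move the gantry to machine 0 or machine 1.
# Reward: +1 for serving the waiting machine, -1 per step a request waits
# (a stand-in for the paper's production-loss terms). All numbers invented.

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def step(pos, need, action, rng):
    """Move the gantry to `action`; return (new_state, reward)."""
    pos = action
    if need is not None and pos == need:
        reward, need = 1.0, None       # request served
    elif need is not None:
        reward = -1.0                  # request keeps waiting: production loss
    else:
        reward = 0.0
    if need is None and rng.random() < 0.5:
        need = rng.randrange(2)        # a machine runs empty and calls the gantry
    return (pos, need), reward

def train(episodes=2000, steps=20, seed=0):
    rng = random.Random(seed)
    Q = {(p, n, a): 0.0 for p in range(2) for n in (0, 1, None) for a in range(2)}
    for _ in range(episodes):
        state = (0, None)
        for _ in range(steps):
            pos, need = state
            if rng.random() < EPS:
                a = rng.randrange(2)   # explore
            else:
                a = max(range(2), key=lambda x: Q[(pos, need, x)])
            nxt, r = step(pos, need, a, rng)
            best_next = max(Q[(nxt[0], nxt[1], x)] for x in range(2))
            Q[(pos, need, a)] += ALPHA * (r + GAMMA * best_next - Q[(pos, need, a)])
            state = nxt
    return Q

Q = train()
# The learned greedy policy should send the gantry to whichever machine
# is waiting for service, regardless of the gantry's current position.
for pos in range(2):
    for need in range(2):
        greedy = max(range(2), key=lambda a: Q[(pos, need, a)])
        print(pos, need, "->", greedy)
```

The paper's contribution lies in grounding the reward in permanent production loss and production loss risk rather than the flat penalty used here; this sketch only shows the Q-learning mechanics that the reward plugs into.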

Detailed Description

Bibliographic Details
Published in: IEEE Access, 2018-01, Vol. 6, pp. 14699-14709
Main authors: Ou, Xinyan; Chang, Qing; Arinez, Jorge; Zou, Jing
Format: Article
Language: English
Subjects:
Online access: Full text
DOI: 10.1109/ACCESS.2018.2800641
ISSN: 2169-3536
Source: IEEE Open Access Journals; DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek (Electronic Journals Library) - freely accessible e-journals
Subjects: Algorithms
Equivalent serial line
Gantry cranes
gantry scheduling
Job shop scheduling
Learning (artificial intelligence)
Machine learning
Performance evaluation
production loss risk
Q-learning
Real-time systems
reinforcement learning
Scheduling
Service robots