Optimal instructional policies based on a random-trial incremental model of learning
The random-trial incremental (RTI) model of human associative learning proposes that learning from a trial on which an association is presented proceeds incrementally, but that with a certain probability, constant across trials, no learning occurs on a given trial. Based on RTI, identifying a policy for sequencing presentation trials of different associations so as to maximize overall learning can be accomplished via a Markov decision process. For both finite and infinite horizons and a quite general structure of costs and rewards, a policy that on each trial presents an association yielding the maximum expected immediate net reward is optimal.
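The abstract's two ingredients, an incremental learning update that takes effect only on a random subset of trials and a myopic presentation rule, can be illustrated with a short simulation. The sketch below assumes a linear-operator increment toward an asymptote of 1, a constant per-trial probability c of no learning, a reward proportional to the expected learning gain, and a fixed presentation cost; the parameter names (theta, c, reward_weight, cost) and the update rule are illustrative assumptions, not the paper's exact formulation.

```python
import random

# Hedged sketch of the random-trial incremental (RTI) idea described in the
# abstract. The linear-operator update and the parameter names below are
# illustrative assumptions, not the paper's exact formulation.

def expected_immediate_net_reward(strength, theta, c, reward_weight, cost):
    """Expected one-trial net reward of presenting an association with the
    given associative strength.

    With probability (1 - c) the trial is effective and strength moves a
    fraction theta of the way toward 1 (incremental learning); with
    probability c no learning occurs. Reward is taken to be proportional to
    the expected gain in strength, minus a fixed presentation cost.
    """
    expected_gain = (1.0 - c) * theta * (1.0 - strength)
    return reward_weight * expected_gain - cost

def myopic_policy(strengths, theta, c, reward_weight, cost):
    """Pick the association with the maximum expected immediate net reward
    (the greedy choice the abstract states is optimal)."""
    return max(range(len(strengths)),
               key=lambda i: expected_immediate_net_reward(
                   strengths[i], theta, c, reward_weight, cost))

def simulate(n_assoc=3, n_trials=20, theta=0.3, c=0.2,
             reward_weight=1.0, cost=0.05, seed=0):
    """Run the myopic policy on n_assoc associations for n_trials trials."""
    rng = random.Random(seed)
    strengths = [0.0] * n_assoc
    for _ in range(n_trials):
        i = myopic_policy(strengths, theta, c, reward_weight, cost)
        if rng.random() > c:          # effective trial: incremental learning
            strengths[i] += theta * (1.0 - strengths[i])
        # with probability c the trial produces no learning at all
    return strengths

if __name__ == "__main__":
    print(simulate())   # final strengths of the simulated associations
```

In this sketch, with identical parameters across associations, the myopic rule simply presents whichever association currently has the lowest strength; the paper's result, as stated in the abstract, is that choosing the maximum expected immediate net reward on each trial is optimal for both finite and infinite horizons under a quite general cost and reward structure.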
Published in: IEEE transactions on systems, man and cybernetics. Part A, Systems and humans, 2000-07, Vol.30 (4), p.490-494
Author: Katsikopoulos, K.V.
Format: Article
Language: English
Online access: Order full text
Field | Value |
---|---|
container_end_page | 494 |
container_issue | 4 |
container_start_page | 490 |
container_title | IEEE transactions on systems, man and cybernetics. Part A, Systems and humans |
container_volume | 30 |
creator | Katsikopoulos, K.V. |
description | The random-trial incremental (RTI) model of human associative learning proposes that learning due to a trial where the association is presented proceeds incrementally; but with a certain probability, constant across trials, no learning occurs due to a trial. Based on RTI, identifying a policy for sequencing presentation trials of different associations for maximizing overall learning can be accomplished via a Markov decision process. For both finite and infinite horizons and a quite general structure of costs and rewards, a policy that on each trial presents an association that leads to the maximum expected immediate net reward is optimal. |
doi_str_mv | 10.1109/3468.852441 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1083-4427 |
ispartof | IEEE transactions on systems, man and cybernetics. Part A, Systems and humans, 2000-07, Vol.30 (4), p.490-494 |
issn | 1083-4427 2168-2216 1558-2426 2168-2232 |
language | eng |
recordid | cdi_proquest_miscellaneous_28515575 |
source | IEEE Electronic Library (IEL) |
subjects | Cost function; Cybernetics; Human; Humans; Industrial engineering; Infinite horizon; Learning; Learning systems; Markov processes; Mathematical analysis; Mathematical model; Optimization; Policies; Predictive models; Psychology; Random variables; Sequencing; Testing |
title | Optimal instructional policies based on a random-trial incremental model of learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-30T21%3A53%3A18IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Optimal%20instructional%20policies%20based%20on%20a%20random-trial%20incremental%20model%20of%20learning&rft.jtitle=IEEE%20transactions%20on%20systems,%20man%20and%20cybernetics.%20Part%20A,%20Systems%20and%20humans&rft.au=Katsikopoulos,%20K.V.&rft.date=2000-07-01&rft.volume=30&rft.issue=4&rft.spage=490&rft.epage=494&rft.pages=490-494&rft.issn=1083-4427&rft.eissn=1558-2426&rft.coden=ITSHFX&rft_id=info:doi/10.1109/3468.852441&rft_dat=%3Cproquest_RIE%3E29013437%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=885075493&rft_id=info:pmid/&rft_ieee_id=852441&rfr_iscdi=true |