OPTIMAL PROCEDURE FOR AN N-STATE TESTING AND LEARNING PROCESS. II

The paper is a continuation of work on optimal strategies for presentation of items in an N-trial learning experiment. In AD-611 056 and AD-610 696 it was shown (under certain assumptions) that the following decision rule generated an optimal sequencing: in any trial present the item for which the probability of being in the learned state is least. In the present paper it is shown that this rule is optimal for a more general learning model than any considered earlier; the new model allows for the possibility that a subject may respond incorrectly to a test on an item, even though he 'knows' the item. (Author)
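The decision rule described above lends itself to a brief illustration. The following Python sketch is illustrative only and is not taken from the report: the function names, the learning probability c, the guessing probability g, and the slip probability s (an incorrect response to a known item, the case this paper adds) are all assumptions made for the example. It shows the greedy policy of presenting, on each trial, the item whose current probability of being in the learned state is smallest, followed by a Bayesian update of that probability.

def select_item(p_learned):
    """Return the index of the item least likely to be in the learned state."""
    return min(range(len(p_learned)), key=lambda i: p_learned[i])

def update_after_trial(p, correct, c=0.3, g=0.2, s=0.1):
    """Update P(learned) for the presented item from the observed response.

    g = P(correct response | unlearned)   -- guessing
    s = P(incorrect response | learned)   -- the slip allowed by the new model
    c = P(an unlearned item becomes learned on a presentation)
    All three values are illustrative assumptions, not parameters from the report.
    """
    if correct:
        post = p * (1 - s) / (p * (1 - s) + (1 - p) * g)
    else:
        post = p * s / (p * s + (1 - p) * (1 - g))
    # A presentation gives an unlearned item a chance c of becoming learned.
    return post + (1 - post) * c

# Usage: run a short sequence of trials over three items.
p_learned = [0.10, 0.50, 0.30]        # prior P(learned) for each item
for trial in range(5):
    i = select_item(p_learned)
    observed_correct = True           # stand-in for the subject's actual response
    p_learned[i] = update_after_trial(p_learned[i], observed_correct)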

Full description

Saved in:
Bibliographic details
Main authors: Karush, W.; Dear, R. E.
Format: Report
Language: English
Subject terms:
Online access: Order full text
creator Karush, W.
Dear, R. E.
description The paper is a continuation of work on optimal strategies for presentation of items in an N-trial learning experiment. In AD-611 056 and AD-610 696 it was shown (under certain assumptions) that the following decision rule generated an optimal sequencing: in any trial present the item for which the probability of being in the learned state is least. In the present paper it is shown that this rule is optimal for a more general learning model than any considered earlier; the new model allows for the possibility that a subject may respond incorrectly to a test on an item, even though he 'knows' the item. (Author)
format Report
fullrecord DTIC accession AD0623770; corporate author SYSTEM DEVELOPMENT CORP SANTA MONICA CALIF; report date 1965-10-18; rights APPROVED FOR PUBLIC RELEASE; full text via https://apps.dtic.mil/sti/citations/AD0623770
fulltext fulltext_linktorsrc
language eng
recordid cdi_dtic_stinet_AD0623770
source DTIC Technical Reports
subjects CONDITIONED RESPONSE
DECISION THEORY
FUNCTIONS(MATHEMATICS)
LEARNING
MATHEMATICAL MODELS
OPTIMIZATION
PROBABILITY
PSYCHOLOGICAL TESTS
Psychology
title OPTIMAL PROCEDURE FOR AN N-STATE TESTING AND LEARNING PROCESS. II
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-10T10%3A34%3A22IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-dtic_1RU&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=unknown&rft.btitle=OPTIMAL%20PROCEDURE%20FOR%20AN%20N-STATE%20TESTING%20AND%20LEARNING%20PROCESS.%20II&rft.au=Karush,W&rft.aucorp=SYSTEM%20DEVELOPMENT%20CORP%20SANTA%20MONICA%20CALIF&rft.date=1965-10-18&rft_id=info:doi/&rft_dat=%3Cdtic_1RU%3EAD0623770%3C/dtic_1RU%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true