Multi-attribute, multi-alternative models of choice: Choice, reaction time, and process tracing



Bibliographic Details
Published in: Cognitive Psychology, 2017-11, Vol. 98, pp. 45-72
Authors: Cohen, Andrew L., Kang, Namyi, Leise, Tanya L.
Format: Article
Language: English
Online access: Full text
Description
Abstract:

Highlights:
• A deep comparison of utility, heuristic, and dynamic models on the same data sets.
• Different fit measures were used to compare models, including cross-validation.
• Comparisons are based on both behavioral and process-tracing data.
• An examination of model performance on both preferential and risky choice.
• Development of a model framework that directly incorporates attention to values.
• Development of an extension of the MLBA to multiple attributes and risky choice.
• A test of the MLBA on response time prediction.

The first aim of this research is to compare computational models of multi-alternative, multi-attribute choice when attribute values are explicit. The choice predictions of utility (standard random utility & weighted valuation), heuristic (elimination-by-aspects, lexicographic, & maximum attribute value), and dynamic (multi-alternative decision field theory, MDFT, & a version of the multi-attribute linear ballistic accumulator, MLBA) models are contrasted on both preferential and risky choice data. Using both maximum likelihood and cross-validation fit measures on choice data, the utility and dynamic models are preferred over the heuristic models for risky choice, with a slight overall advantage for the MLBA for preferential choice. The response time predictions of these models (except the MDFT) are then tested. Although the MLBA accurately predicts response time distributions, it only weakly accounts for stimulus-level differences. The other models completely fail to account for stimulus-level differences. Process-tracing measures, i.e., eye and mouse tracking, were also collected. None of the qualitative predictions of the models are completely supported by those data. These results suggest that the models may not appropriately represent the interaction of attention and preference formation.

To overcome this potential shortcoming, the second aim of this research is to test preference-formation assumptions, independently of attention, by developing the models of attentional sampling (MAS) model family, which incorporates empirical gaze patterns into a sequential sampling framework. An MAS variant that includes attribute values, but only updates the currently viewed alternative and does not contrast values across alternatives, performs well in both experiments. Overall, the results support the dynamic models, but point to the need to incorporate a framework that more accurately reflects the relationship between attention and the preference-formation process.
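The MLBA discussed in the abstract builds on the linear ballistic accumulator, in which each alternative accrues evidence linearly toward a common threshold and the first to reach it determines the choice and response time. As a rough illustration of that race mechanism (a basic LBA only, not the paper's multi-attribute extension; all parameter values and drift rates below are hypothetical, not fitted values from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

def lba_trial(drifts, b=1.0, A=0.5, s=0.3, t0=0.2):
    """Simulate one trial of a linear ballistic accumulator race.

    Each alternative i starts at a uniform point in [0, A] and accrues
    evidence at a rate drawn from Normal(drifts[i], s). The first
    accumulator to reach threshold b wins; its time plus the
    non-decision time t0 is the response time.
    """
    starts = rng.uniform(0.0, A, size=len(drifts))
    rates = rng.normal(drifts, s)
    rates = np.where(rates > 0, rates, 1e-6)  # clamp non-positive rates
    times = (b - starts) / rates              # time for each to hit b
    choice = int(np.argmin(times))            # fastest accumulator wins
    rt = float(times[choice] + t0)
    return choice, rt

# Simulate many trials for three alternatives; the alternative with the
# highest mean drift should be chosen most often.
choices, rts = zip(*(lba_trial([1.2, 1.0, 0.8]) for _ in range(5000)))
```

In the MLBA proper, the drift rates are not free parameters but are derived from pairwise attribute comparisons weighted by attention; this sketch only shows the race stage that turns drift rates into choices and response time distributions.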
ISSN: 0010-0285, 1095-5623
DOI: 10.1016/j.cogpsych.2017.08.001