What Cognitive Interviewing Reveals about a New Measure of Undergraduate Biology Reasoning

Bibliographic Details
Published in: The Journal of Experimental Education, 2021-01, Vol. 89(1), pp. 145-168
Authors: Cromley, Jennifer G.; Dai, Ting; Fechter, Tia; Van Boekel, Martin; Nelson, Frank E.; Dane, Aygul
Format: Article
Language: English
Abstract: Reasoning skills have been clearly related to achievement in introductory undergraduate biology, a course with a high failure rate that may contribute to dropout among undergraduate STEM majors. Existing measures focus on the experimental method: generating hypotheses, choosing a research method, controlling variables other than those manipulated in an experiment, analyzing data (e.g., naming independent and dependent variables), and drawing conclusions from results. We developed a new measure, Inference Making and Reasoning in Biology (IMRB), that tests deductive reasoning in biology outside the context of the experimental method, using biology content not previously taught. We present results from coded cognitive interviews with 86 undergraduate biology students completing the IMRB, using within-subjects comparisons of verbalizations when questions were answered correctly versus incorrectly. Results suggest that the IMRB taps local and global inferences but not knowledge acquired before study or elaborative inferences that require such knowledge. For the most part, reading comprehension/study strategies do not help examinees answer IMRB questions correctly, except for recalling information learned earlier in the measure, summarizing, paraphrasing, skimming, and noting text structure. Likewise, test-taking strategies do not help, except for noting that a passage had not mentioned specific information. Similarly, vocabulary knowledge does not help examinees answer IMRB questions correctly. With regard to metacognitive monitoring, examinees more often noted a lack of understanding when they answered questions incorrectly. Thus, we present strong validity evidence for the IMRB, which is available to STEM researchers and measurement experts.
ISSN: 0022-0973, 1940-0683
DOI: 10.1080/00220973.2019.1613338