How Consistent Are the Best-Known Readability Equations in Estimating the Readability of Design Standards?
Published in: IEEE Transactions on Professional Communication, 2017-03, Vol. 60(1), p. 97-111
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract:

Research problem: Readability equations are widely used to estimate how well readers will be able to understand written materials. These equations were usually developed for nontechnical materials, namely textbooks for elementary, middle, and high schools. This study examines to what extent computerized readability predictions are consistent for highly technical material: selected Society of Automotive Engineers (SAE) and International Organization for Standardization (ISO) Recommended Practices and Standards relating to driver interfaces.

Literature review: A review of the original sources of readability equations revealed a lack of specific criteria for counting various punctuation and text elements, leading to inconsistent readability scores. Few studies on the reliability of readability equations have identified this problem, and even fewer have systematically investigated its extent and the reasons why it occurs.

Research questions: (1) Do the most commonly used equations give identical readability scores? (2) How do the scores for each readability equation vary across readability tools? (3) If there are differences between readability tools, why do they occur? (4) How does the score vary with the length of the passage examined?

Method: Passages of varying lengths from 12 selected SAE and ISO Recommended Practices and Standards were examined using five readability equations (Flesch-Kincaid Grade Level, Gunning Fog Index, SMOG Index, Coleman-Liau Index, and Automated Readability Index) implemented five ways (four online readability tools and Microsoft Word 2013 for Windows). In addition, short test passages were used to understand how different readability tools counted text elements such as words and sentences.

Results and conclusions: The mean readability scores of the passages from those 12 SAE and ISO Recommended Practices and Standards ranged from about the 10th- to the 15th-grade reading level. The mean grade reading levels computed across the websites were: Flesch-Kincaid 12.8, Gunning Fog 15.1, SMOG 12.6, Coleman-Liau 13.7, and Automated Readability Index 12.3. Readability score estimates became more consistent as the length of the passage examined increased, with no noteworthy improvements beyond 900 words. Among the five readability tools, scores for the same passage and equation typically differed by two grade levels, even though they should have been identical. These differences were due to how compound and hyphenated words, slashes, numbers, abbreviations and acronyms, and URLs were counted.
ISSN: 0361-1434, 1558-1500
DOI: 10.1109/TPC.2016.2635720
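As a worked illustration of the counting ambiguities described in the abstract above, the sketch below applies two of the named equations, Flesch-Kincaid Grade Level and the Automated Readability Index, to one short passage under two word-counting conventions. The sample passage, the vowel-group syllable heuristic, and the hyphen/slash splitting rule are assumptions made here for illustration; they are not the study's tools or test passages.

```python
"""
Minimal sketch (not the authors' tooling): two published readability equations
applied to the same passage under two different counting conventions, to show
how tool-specific counting rules can shift scores.

Standard published forms of the equations:
  Flesch-Kincaid Grade Level  = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
  Automated Readability Index = 4.71*(characters/words) + 0.50*(words/sentences) - 21.43
"""
import re

def count_syllables(word: str) -> int:
    # Crude heuristic assumed for illustration: count groups of consecutive
    # vowels, with a minimum of one syllable per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def scores(text: str, split_hyphens: bool) -> tuple[float, float]:
    # Naive sentence rule: any run of terminal punctuation ends a sentence.
    # Note that decimals, abbreviations, and URLs inflate this count, one of
    # the divergence sources the study identifies.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    # Word tokenization: the disputed rule. Some tools treat "driver-interface"
    # or "on/off" as one word, others as two; this flag toggles that choice.
    raw = text.split()
    if split_hyphens:
        words = [w for token in raw for w in re.split(r"[-/]", token) if w]
    else:
        words = raw
    n_words = len(words)
    n_chars = sum(len(re.sub(r"[^A-Za-z0-9]", "", w)) for w in words)
    n_syll = sum(count_syllables(re.sub(r"[^A-Za-z]", "", w) or w) for w in words)
    fkgl = 0.39 * n_words / sentences + 11.8 * n_syll / n_words - 15.59
    ari = 4.71 * n_chars / n_words + 0.50 * n_words / sentences - 21.43
    return fkgl, ari

# Hypothetical standards-style passage, invented for this example only.
sample = ("The driver-interface warning shall activate within 2.0 s of an on/off "
          "transition, per the referenced standard. See https://www.sae.org for the full text.")

for split in (False, True):
    fkgl, ari = scores(sample, split_hyphens=split)
    print(f"split_hyphens={split}:  FKGL={fkgl:.1f}  ARI={ari:.1f}")
```

Switching the hyphen/slash convention alone changes the word, character, and syllable tallies and therefore the computed grade levels, and the naive sentence rule is thrown off by the periods inside the decimal number and the URL; these are the same classes of text elements the abstract cites as the cause of the between-tool differences.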