Development of a Dual-Modal Presentation of Texts for Small Screens
Baddeley's (1986) working memory model suggests that imagery spatial information and verbal information can be concurrently held in different subsystems. This research proposed a method to present textual information with network relationships in a "graphics + voice" format, especially for small screens. It was hypothesized that this dual-modal presentation would result in superior comprehension performance and higher acceptance than pure textual display. An experiment was carried out to test this hypothesis with analytical problems from the Graduate Record Examination. Thirty individuals participated in this experiment. The results indicate that users' performance and acceptance were improved significantly by using the "graphics + voice" presentation. The article concludes with a discussion of the implications and limitations of the findings for future research in multimodal interface design.
Saved in:

Published in: | International journal of human-computer interaction, 2008-12, Vol. 24 (8), p. 776-793 |
---|---|
Main authors: | Xu, Shuang; Fang, Xiaowen; Brzezinski, Jacek; Chan, Susy |
Format: | Article |
Language: | English |
Keywords: | |
Online access: | Full text |
container_end_page | 793 |
container_issue | 8 |
container_start_page | 776 |
container_title | International journal of human-computer interaction |
container_volume | 24 |
creator | Xu, Shuang; Fang, Xiaowen; Brzezinski, Jacek; Chan, Susy |
doi_str_mv | 10.1080/10447310802537566 |
format | Article |
publisher | Norwood: Taylor & Francis Group |
imprint | Lawrence Erlbaum Associates, Inc., Nov 2008 |
publication_date | 2008-12-12 |
coden | IJHIEC |
tpages | 18 |
rights | Copyright Taylor & Francis Group, LLC 2008 |
fulltext | fulltext |
identifier | ISSN: 1044-7318 |
ispartof | International journal of human-computer interaction, 2008-12, Vol.24 (8), p.776-793 |
issn | 1044-7318 |
eissn | 1532-7590 |
language | eng |
recordid | cdi_proquest_journals_228820950 |
source | Business Source Complete |
subjects | Experiments; Information; Interfaces; Memory |
title | Development of a Dual-Modal Presentation of Texts for Small Screens |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-27T17%3A23%3A05IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_infor&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Development%20of%20a%20Dual-Modal%20Presentation%20of%20Texts%20for%20Small%20Screens&rft.jtitle=International%20journal%20of%20human-computer%20interaction&rft.au=Xu,%20Shuang&rft.date=2008-12-12&rft.volume=24&rft.issue=8&rft.spage=776&rft.epage=793&rft.pages=776-793&rft.issn=1044-7318&rft.eissn=1532-7590&rft.coden=IJHIEC&rft_id=info:doi/10.1080/10447310802537566&rft_dat=%3Cproquest_infor%3E35367357%3C/proquest_infor%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=228820950&rft_id=info:pmid/&rfr_iscdi=true |