Towards Better User Studies in Computer Graphics and Vision
Online crowdsourcing platforms have made it increasingly easy to perform evaluations of algorithm outputs with survey questions like "which image is better, A or B?", leading to their proliferation in vision and graphics research papers. Results of these studies are often used as quantitative evidence in support of a paper's contributions. On the one hand we argue that, when conducted hastily as an afterthought, such studies lead to an increase of uninformative, and, potentially, misleading conclusions. On the other hand, in these same communities, user research is underutilized in driving project direction and forecasting user needs and reception. We call for increased attention to both the design and reporting of user studies in computer vision and graphics papers towards (1) improved replicability and (2) improved project direction. Together with this call, we offer an overview of methodologies from user experience research (UXR), human-computer interaction (HCI), and applied perception to increase exposure to the available methodologies and best practices. We discuss foundational user research methods (e.g., needfinding) that are presently underutilized in computer vision and graphics research, but can provide valuable project direction. We provide further pointers to the literature for readers interested in exploring other UXR methodologies. Finally, we describe broader open issues and recommendations for the research community.
Published in: | arXiv.org 2023-04 |
---|---|
Main authors: | Bylinskii, Zoya; Herman, Laura; Hertzmann, Aaron; Hutka, Stefanie; Zhang, Yile |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Bylinskii, Zoya; Herman, Laura; Hertzmann, Aaron; Hutka, Stefanie; Zhang, Yile |
description | Online crowdsourcing platforms have made it increasingly easy to perform evaluations of algorithm outputs with survey questions like "which image is better, A or B?", leading to their proliferation in vision and graphics research papers. Results of these studies are often used as quantitative evidence in support of a paper's contributions. On the one hand we argue that, when conducted hastily as an afterthought, such studies lead to an increase of uninformative, and, potentially, misleading conclusions. On the other hand, in these same communities, user research is underutilized in driving project direction and forecasting user needs and reception. We call for increased attention to both the design and reporting of user studies in computer vision and graphics papers towards (1) improved replicability and (2) improved project direction. Together with this call, we offer an overview of methodologies from user experience research (UXR), human-computer interaction (HCI), and applied perception to increase exposure to the available methodologies and best practices. We discuss foundational user research methods (e.g., needfinding) that are presently underutilized in computer vision and graphics research, but can provide valuable project direction. We provide further pointers to the literature for readers interested in exploring other UXR methodologies. Finally, we describe broader open issues and recommendations for the research community. |
doi_str_mv | 10.48550/arxiv.2206.11461 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-04 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2206_11461 |
source | arXiv.org; Free E-Journals |
subjects | Algorithms; Computer graphics; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Graphics; Computer Science - Human-Computer Interaction; Computer vision; Human-computer interface; Research projects; Scientific papers; User experience |
title | Towards Better User Studies in Computer Graphics and Vision |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-06T13%3A43%3A33IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Towards%20Better%20User%20Studies%20in%20Computer%20Graphics%20and%20Vision&rft.jtitle=arXiv.org&rft.au=Bylinskii,%20Zoya&rft.date=2023-04-24&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2206.11461&rft_dat=%3Cproquest_arxiv%3E2680440499%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2680440499&rft_id=info:pmid/&rfr_iscdi=true |