A general multiview framework for assessing the quality of collaboratively created content on web 2.0
Saved in:
Published in: | Journal of the Association for Information Science and Technology 2017-02, Vol.68 (2), p.286-308 |
---|---|
Main authors: | Dalip, Daniel H.; Gonçalves, Marcos André; Cristo, Marco; Calado, Pável |
Format: | Article |
Language: | eng |
Subjects: | Assessments; Consumption; Encyclopaedias; Impact analysis; Indicators; quality; Quality assessment; State of the art; User-generated content |
Online access: | Full text |
container_end_page | 308 |
---|---|
container_issue | 2 |
container_start_page | 286 |
container_title | Journal of the Association for Information Science and Technology |
container_volume | 68 |
creator | Dalip, Daniel H.; Gonçalves, Marcos André; Cristo, Marco; Calado, Pável |
description | User‐generated content is one of the most interesting phenomena of current published media, as users are now able not only to consume, but also to produce content in a much faster and easier manner. However, such freedom also carries concerns about content quality. In this work, we propose an automatic framework to assess the quality of collaboratively generated content. Quality is addressed as a multidimensional concept, modeled as a combination of independent assessments, each regarding different quality dimensions. Accordingly, we adopt a machine‐learning (ML)‐based multiview approach to assess content quality. We perform a thorough analysis of our framework on two different domains: Questions and Answer Forums and Collaborative Encyclopedias. This allowed us to better understand when and how the proposed multiview approach is able to provide accurate quality assessments. Our main contributions are: (a) a general ML multiview framework that takes advantage of different views of quality indicators; (b) the improvement (up to 30%) in quality assessment over the best state‐of‐the‐art baseline methods; (c) a thorough feature and view analysis regarding impact, informativeness, and correlation, based on two distinct domains. |
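The multiview idea described in the abstract, combining separate assessments from different groups of quality indicators, can be illustrated with a small stacking-style sketch. This is not the authors' implementation; it assumes scikit-learn, uses randomly generated placeholder data, and the view names (text_style, review_history, network) are hypothetical, chosen only for illustration.

```python
# Minimal sketch of a multiview quality classifier: one base model per
# feature view, combined by a meta-classifier (a stacking-style combination).
# Views, features, and labels here are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.RandomState(42)
n = 500

# Hypothetical quality-indicator views for n documents.
views = {
    "text_style":     rng.rand(n, 6),  # e.g., length, readability scores
    "review_history": rng.rand(n, 4),  # e.g., number of revisions, reviewers
    "network":        rng.rand(n, 3),  # e.g., in-links, out-links
}
y = rng.randint(0, 2, size=n)          # placeholder quality label (good vs. poor)

# Train/test split shared by every view.
idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# 1) One base model per view; out-of-fold predictions become meta-features.
base_models, meta_train, meta_test = {}, [], []
for name, X in views.items():
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    oof = cross_val_predict(model, X[idx_train], y[idx_train],
                            cv=5, method="predict_proba")[:, 1]
    model.fit(X[idx_train], y[idx_train])
    base_models[name] = model
    meta_train.append(oof)
    meta_test.append(model.predict_proba(X[idx_test])[:, 1])

# 2) Meta-classifier combines the per-view quality assessments.
combiner = LogisticRegression()
combiner.fit(np.column_stack(meta_train), y[idx_train])
pred = combiner.predict(np.column_stack(meta_test))
print("combined accuracy:", accuracy_score(y[idx_test], pred))
```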
doi_str_mv | 10.1002/asi.23650 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 2330-1635 |
ispartof | Journal of the Association for Information Science and Technology, 2017-02, Vol.68 (2), p.286-308 |
issn | 2330-1635 2330-1643 |
language | eng |
recordid | cdi_proquest_miscellaneous_1880032272 |
source | Wiley Online Library Journals Frontfile Complete; EBSCOhost Business Source Complete |
subjects | Assessments; Consumption; Encyclopaedias; Impact analysis; Indicators; quality; Quality assessment; State of the art; User-generated content |
title | A general multiview framework for assessing the quality of collaboratively created content on web 2.0 |