Empirical validation of a quality framework for evaluating modelling languages in MDE environments

In previous research, we proposed the multiple modelling quality evaluation framework (MMQEF), a method and tool for evaluating modelling languages in model-driven engineering (MDE) environments. Rather than being exclusive, MMQEF is intended to complement other quality evaluation methods, such as SEQUAL. However, to date, MMQEF had not been validated beyond some proofs of concept. This paper evaluates the applicability of the MMQEF method in comparison with other existing methods. We performed an evaluation in which subjects had to detect quality issues in modelling languages, using a group of expert professionals and two experimental objects (i.e. two combinations of different modelling languages based on real industrial practices). To analyse the results, we applied quantitative approaches, i.e. statistical tests on the performance measures and on the subjects' perceptions. We ran four replications of the experiment in Colombia between 2016 and 2019, with a total of 50 professionals. The quantitative analysis shows low performance for all of the methods, but a positive perception of MMQEF. Conclusions: Applying modelling language quality evaluation methods within MDE settings is indeed difficult, and subjects did not succeed in identifying all quality problems. This experiment paves the way for further investigation of the trade-offs between the methods and of potential situational guidelines (i.e. the circumstances under which each method is convenient). We encourage further inquiry into industrial applications to incrementally improve the method and tailor it to the needs of professionals working in real industrial environments.
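As a rough illustration of the quantitative approach the abstract mentions, the minimal sketch below compares two groups' performance measures with a non-parametric test. The group labels, score values, and the choice of the Mann-Whitney U test are assumptions for illustration only, not details taken from the paper.

```python
# Hypothetical sketch: comparing subjects' defect-detection effectiveness
# under two evaluation methods with a non-parametric test.
# The scores below are illustrative placeholders, not data from the study.
from scipy.stats import mannwhitneyu

# Fraction of seeded quality issues each subject detected (made-up values).
mmqef_scores = [0.30, 0.25, 0.40, 0.20, 0.35, 0.30, 0.45, 0.25]
other_scores = [0.28, 0.22, 0.38, 0.18, 0.30, 0.27, 0.42, 0.24]

# Mann-Whitney U is suitable for small samples with no normality assumption.
stat, p_value = mannwhitneyu(mmqef_scores, other_scores, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```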


Bibliographic details
Published in: Software quality journal, 2021-06, Vol. 29 (2), pp. 275-307
Main authors: Giraldo, Fáber D.; Chicaiza, Ángela J.; España, Sergio; Pastor, Óscar
Format: Article
Language: English
Subjects: Compilers; Computer Science; Data Structures and Information Theory; Empirical analysis; Environment models; Industrial applications; Interpreters; Languages; Methods; Operating Systems; Perception; Programming Languages; Quality assessment; Quantitative analysis; Software engineering; Software Engineering/Programming and Operating Systems; Statistical tests
Online access: Full text
DOI: 10.1007/s11219-021-09554-1
ISSN: 0963-9314
EISSN: 1573-1367
Publisher: Springer US (New York)
Source: SpringerLink Journals