Quantitative performance metrics for evaluation and comparison of middleware interoperability products

Interoperability in modeling and simulation is understood as the ability of simulations to exchange information and use the information exchanged. Several alternative models have been applied to support interoperability. The traditional approach has been to apply standards, such as the High Level Architecture, to create a federation and use a runtime infrastructure to physically connect simulations. Recently, there has been a move towards web-based standards to loosely couple simulations, and the future points to cloud-enabled interoperability services. Despite the existence of implementations of these models from industry, academia and government, very few performance metrics have been formulated to evaluate them. In this paper, we propose quantitative performance metrics that include a set of dependent and independent variables for implementations of interoperability models. We apply the metrics to a web-based infrastructure that uses web standards and a traditional runtime infrastructure. We analyze the results and discuss the tradeoffs that federation developers have to consider when selecting an implementation of an interoperability model.

Detailed description

Bibliographic details
Published in: Journal of defense modeling and simulation, 2016-04, Vol. 13 (2), pp. 161-169
Main authors: Diallo, Saikou Y; Gore, Ross J; Barraco, Anthony; Padilla, Jose J; Lynch, Christopher
Format: Article
Language: English
Subjects: Computer simulation; Federations; Infrastructure; Interoperability; Performance measurement; Run time (computers); World Wide Web
Online access: Full text
DOI: 10.1177/1548512915570143
Publisher: SAGE Publications, London, England
Rights: The Author(s) 2015
ISSN: 1548-5129
EISSN: 1557-380X
Source: Access via SAGE