Reconsidering the Equivalence of Multisource Performance Ratings: Evidence for the Importance and Meaning of Rater Factors
Purpose The study specified an alternate model to examine the measurement invariance of multisource performance ratings (MSPRs) to systematically investigate the theoretical meaning of common method variance in the form of rater effects. As opposed to testing invariance based on a multigroup design...
Saved in:
Published in: | Journal of business and psychology 2013-06, Vol.28 (2), p.203-219 |
---|---|
Main authors: | Bynum, Bethany H. ; Hoffman, Brian J. ; Meade, Adam W. ; Gentry, William A. |
Format: | Article |
Language: | English |
Subjects: | Applied psychology ; Behavioral Science and Psychology ; Benchmarks ; Bias ; Business and Management ; Community and Environmental Psychology ; Conceptualization ; Data models ; Human performance technology ; Human resource management ; Hypotheses ; Industrial and Organizational Psychology ; Investigations ; Job performance ; Managers ; Mathematical models ; Modeling ; Personality and Social Psychology ; Psychology ; Psychometrics ; Ratings & rankings ; Research methods ; Social Sciences ; Statistical variance ; Studies ; Subordinate personnel |
Online access: | Full text |
creator | Bynum, Bethany H. ; Hoffman, Brian J. ; Meade, Adam W. ; Gentry, William A. |
description | Purpose The study specified an alternative model to examine the measurement invariance of multisource performance ratings (MSPRs) and to systematically investigate the theoretical meaning of common method variance in the form of rater effects. Rather than testing invariance with a multigroup design in which raters are aggregated within sources, this study specified both performance dimension and idiosyncratic rater factors. Design/Methodology/Approach Data were obtained from 5,278 managers from a wide range of organizations and hierarchical levels, who were rated on the BENCHMARKS® MSPR instrument. Findings Our results diverged from prior research in that MSPRs were found to lack invariance across raters from different levels. However, same-level raters provided equivalent ratings in terms of both the performance dimension loadings and the rater factor loadings. Implications The results illustrate the importance of modeling rater factors when investigating invariance and suggest that rater factors reflect substantively meaningful variance, not bias. Originality/Value The current study applies an alternative model for examining the invariance of MSPRs that allowed us to answer three questions that would not be possible with more traditional multigroup designs. First, the model allowed us to examine the impact of parameterizing idiosyncratic rater factors on inferences of cross-rater invariance. Next, including multiple raters from each organizational level in the MSPR model allowed us to tease apart the degree of invariance among raters from the same source relative to raters from different sources. Finally, our study allowed for inferences with respect to the invariance of idiosyncratic rater factors. |
doi_str_mv | 10.1007/s10869-012-9272-7 |
publisher | Springer, Boston |
identifier | ISSN: 0889-3268 ; EISSN: 1573-353X |
source | Business Source Complete; Jstor Complete Legacy; Springer Nature - Complete Springer Journals |
subjects | Applied psychology ; Behavioral Science and Psychology ; Benchmarks ; Bias ; Business and Management ; Community and Environmental Psychology ; Conceptualization ; Data models ; Human performance technology ; Human resource management ; Hypotheses ; Industrial and Organizational Psychology ; Investigations ; Job performance ; Managers ; Mathematical models ; Modeling ; Personality and Social Psychology ; Psychology ; Psychometrics ; Ratings & rankings ; Research methods ; Social Sciences ; Statistical variance ; Studies ; Subordinate personnel |