How to — and How Not to — Evaluate Innovation
Many traditional evaluation methods, including most performance measurement approaches, inhibit rather than support actual innovation. This article discusses the nature of innovation, identifies limitations of traditional evaluation approaches for assessing innovation and proposes an alternative model of evaluation consistent with the nature of innovation.

Most attempts at innovation, by definition, are risky and should 'fail' — otherwise they are using safe, rather than unknown or truly innovative, approaches. A few key impacts by a minority of projects or participants may be much more meaningful than changes in mean (or average) scores. Yet the most common measure of programme impact is the mean. In contrast, this article suggests that evaluation of innovation should identify the minority of situations where real impact has occurred and the reasons for this. This is in keeping with the approach venture capitalists typically take, where they expect most of their investments to 'fail' but to be compensated by major gains on just a few.
Published in: | Evaluation (London, England. 1995), 2002-01, Vol.8 (1), p.13-28 |
---|---|
Author: | Perrin, Burt |
Format: | Article |
Language: | English |
Subjects: | Evaluation; Innovation; Learning; Measurement; Research and development |
Publisher: | SAGE Publications, London, England |
ISSN: | 1356-3890 |
EISSN: | 1461-7153 |
DOI: | 10.1177/1358902002008001514 |
Online access: | Full text |