How Credible Is the Evidence, and Does It Matter? An Analysis of the Program Assessment Rating Tool

This research empirically assesses the quality of evidence that agencies provided to the Office of Management and Budget in the application of the Program Assessment Rating Tool (PART), introduced in 2002 to more rigorously, systematically, and transparently assess public program effectiveness and to hold agencies accountable for results by tying them to the executive budget formulation process and program funding. Evidence submitted by 95 programs administered by the U.S. Department of Health and Human Services for the PART assessment is analyzed using measures that capture the quality of evidence and methods used by programs, along with information on characteristics of agencies that might relate to program results and government funding decisions. The study finds that of those programs offering some evidence, most of it was internal and qualitative, and about half did not assess how their performance compared to that of other government or private programs with similar objectives. Programs were least likely to provide externally generated evidence of their performance relative to long-term and annual performance goals. Importantly, overall PART and results scores were statistically significantly lower for programs that failed to provide quantitative evidence and did not use long-term measures, baseline measures or targets, or independent evaluations. Although the PART program results ratings and overall PART scores had no discernible consequences for program funding over time, the PART assessments appeared to take the evaluation of evidence quality seriously, a positive step forward in recent efforts to base policy decisions on more rigorous evidence.

Bibliographic Details
Published in: Public administration review, 2012-01, Vol. 72 (1), p. 123-134
Author: Heinrich, Carolyn J.
Format: Article
Language: English
ISSN: 0033-3352
EISSN: 1540-6210
DOI: 10.1111/j.1540-6210.2011.02490.x
Online access: Full text
Sources: Wiley Journals; Worldwide Political Science Abstracts; EBSCOhost Business Source Complete; JSTOR Archive Collection A-Z Listing; EBSCOhost Political Science Complete; EBSCOhost Education Source
Subjects:
Budget allocation
Budgeting
Budgets
Bureaucracy
Departments
Effectiveness
Empirical evidence
Evaluation
Evidence
Funding
Government agencies
Government budgets
Government performance
Government reform
Health care policy
Health Care Services Policy
Human Services
Instructional Effectiveness
Management
Methodology
Performance management
Program Evaluation
Public administration
Public budgeting
Qualitative research
Quality
Quality of Health Care
Quality standards
Quantitative analysis
Rating
Ratings & rankings
Statistical significance
Strategic planning
Studies
U.S.A
United States federal budget
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-27T12%3A32%3A47IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-jstor_proqu&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=How%20Credible%20Is%20the%20Evidence,%20and%20Does%20It%20Matter?%20An%20Analysis%20of%20the%20Program%20Assessment%20Rating%20Tool&rft.jtitle=Public%20administration%20review&rft.au=Heinrich,%20Carolyn%20J.&rft.date=2012-01&rft.volume=72&rft.issue=1&rft.spage=123&rft.epage=134&rft.pages=123-134&rft.issn=0033-3352&rft.eissn=1540-6210&rft.coden=PBARBM&rft_id=info:doi/10.1111/j.1540-6210.2011.02490.x&rft_dat=%3Cjstor_proqu%3E41433149%3C/jstor_proqu%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1797265876&rft_id=info:pmid/&rft_jstor_id=41433149&rfr_iscdi=true