The new head of the Department of Premier and Cabinet in New South Wales, Blair Comley, has both big challenges and some terrific, largely underutilised resources in his central agencies. Running contrary to many misapprehensions about bean-counters, the most interesting work to improve evidence-based public policy is being done in Treasury.
This week I was pleased to be asked to speak at a Radio National Big Ideas event at Swinburne University on the subject of inequality. A recurring theme in comments from fellow panellists was that government needs to adopt more evidence-based policy to solve society’s ills. One panellist expressed frustration that as an academic she can’t understand how resistant government seems to be to evidence-based methods.
What is not well understood by many intelligent and well-intentioned people such as my co-panellists is how little evidence government often has at its disposal. When it comes to publicly funded solutions to social problems, from homelessness to unemployment to illiteracy, government has rarely required experimental conditions that test hypotheses, gather data and reflect on results to inform future spending. Even where such work is required, it can be the first item to go during “efficiency” drives, because research isn’t considered a frontline service.
The Commonwealth Finance Department’s 2010 Strategic Review of Indigenous Expenditure found:
“The strong commitments to evidence-based policy made by the Prime Minister and other Heads of Government are not matched by the quality of the evidence currently available.”
The report identified many problems that are not unique to the Indigenous policy sector, including a lack of evaluation, poor methodology and a shortage of rigorous research. Without better evidence, money cannot do its job. Without better evidence, all the “political will” in the world will not make a jot of difference.
Program funding allocations must be reformed in a way that will require an improvement of the evidence base of public policy.
The NSW government has taken some important steps to improve this situation. A Centre for Program Evaluation has been established in Treasury to “conduct methodologically rigorous evaluations of large and significant NSW government programs”. This is being accompanied by the creation of a NSW Government Evaluation Community of Practice, to build a culture of understanding across the sector that real measurement and evaluation is possible.
Contrary to the insistence of many departments that what they do can’t be “bean-counted”, great strides have taken place in program evaluation since the 1990s. In the United States this has been led by the Washington State Institute for Public Policy (WSIPP), which was formed to identify and compare the performance of evidence-based policies. The institute’s goal is to provide policymakers and treasuries with a list of well-researched public policies that can, with a high degree of certainty, be replicated and scaled with predictable rates of success.
This ensures both better outcomes for the needy and a more efficient use of taxpayer dollars.
WSIPP systematically assesses options with a strong evidence base, weighted towards randomised controlled trials and outcome-based assessments rather than lower-quality research such as self-reports and case studies. It determines a unit cost for replication and compares the benefits with those of other treatments for the same problem. For instance, competing treatments to keep families from losing their children into state care might include multi-systemic therapy, “Triple P” or nurse home visits. Each option has a different methodology, cost and success rate at preventing family breakdown and future state-funded interventions.
This sort of bean-counting is exactly what’s needed, in far greater quantity, if we are to amass the evidence that people assume already exists when they demand the implementation of evidence-based policy.
The Centre for Program Evaluation is in its early days. But with luck and some political and professional support, it has great work ahead.