'Being evidence-based is really, really hard': shifting evaluation culture

By David Donaldson

June 15, 2018

Too often evaluation is an afterthought and the report sits in a drawer. Two experts discuss how to shift attitudes to make evidence-based policymaking meaningful.

Building high quality evaluation into policymaking will require both new structures and a shift in culture, argues economist Nicholas Gruen.

“Being evidence-based is really, really hard,” the Lateral Economics CEO tells the University of Melbourne’s Policy Shop podcast in its latest episode.

“It’s very easy to say, as the prime minister says, as the secretary of the Department of the Prime Minister and Cabinet says, we have to be evidence-based in our policy. Everyone says that.”

There’s much more to it than dropping a randomised controlled trial into an otherwise unchanged process.

Some departments have good processes in place that embed evidence in decision making in a meaningful way, says Patricia Rogers, professor of public sector evaluation at the Australia and New Zealand School of Government.

“But far too often evaluation is something that you do at the end, it’s something that’s primarily about ticking the box and compliance, it’s about accountability in a very narrow sense and it’s often done too little too late and not really linked into the decision-making. Or conversely it’s seen as if there’s going to be one answer that will stand for all time to inform the evidence,” Rogers tells the podcast.

“Over the last 10, 20, 30 years what you see is these waves of governments, particularly incoming governments saying we want to be more evidence-based, we want to find out what works and what doesn’t, we want to stop doing what doesn’t work, we want to do more of what works and learn more.

“But somehow either the appetite for really confronting when things don’t work [fades], or people realise that it’s much harder than you might think just to find out really what works. Because fundamentally that’s the wrong question, you really need to find out what’s working for whom in what ways and what’s not working for whom in what ways and what situations.

“That’s a much harder question and not one that’s going to be answered in one study.”

Can an evaluator-general solve the problem?

Creating a new office of evaluator-general could help improve the use of evidence in policymaking, Gruen believes.

He’s previously been highly critical of the current state of evidence-based policymaking.

An evaluator-general would be an independent figure who would make it easier to work out which evaluations really are worthwhile, and would bring increased transparency to the evaluation process.

“Right now a little innovative program can be as good as it likes and no one really learns much about it. The opposition doesn’t know much about it, they don’t know or they might know that it’s kind of quite good but they don’t know whether it’s better than the existing system.

“So transparency is a critical part of this, independence, transparency and collaboration so that when we have learning, when we have a system which the evaluator-general can put their hand on their heart and say we’re pretty sure this works better than this other system, pressure comes on the system to do something with that information.”

It’s about rejigging the accountability system, which at present relies on people who often know very little about the details of policy and its evaluation — politicians — being able to hold ministers accountable.

“What we’ve got at the moment with governments making public statements about how they’re holding everybody to account in ways that they can’t possibly hold to account, we have a chain of legitimation, a chain of governance if you like, which is dodgy,” he argues.

“In my model, the evaluator-general ultimately is responsible for the monitoring evaluation system of an agency. If the agency says we’re protecting children, then the evaluator-general will provide some resources to help build a monitoring and evaluation system to help those people deliver that service and to be accountable to themselves to start with.

“We have to have self-accountability, which is the best sort of accountability, to drive all the other kinds of accountability. So the idea is that they collaborate closely, that they provide expertise, which is very thin on the ground in this sort of area, and if things work well that close collaboration turns into a seamless chain of accountability all the way to the top.

“When there is a disagreement between the agency and the evaluator-general about evaluation, what the evaluator-general says goes.”

He hopes such a system would help build a whole new culture in how government approaches evidence, and believes this requires significant investment in building up the skills of public servants.

“I would call a success not just producing a report and having some of those recommendations accepted but starting a whole culture of having people who have some expertise in evaluation, helping programs become evidence-based.

“This is kind of how Toyota revolutionised production on the line, they gave their workers literally — this isn’t a figurative statement — 10 times as much training. They trained them in statistical control and they, instead of imposing KPIs on them and running the line faster and faster and trying to get them to keep up, they basically managed to harness the intrinsic motivation of these teams of workers on the line to do … as good an evidence-based job as they could.

“Within about 10 or 15 years they had doubled labour productivity, not by spending more capital — well actually, more human capital, if you like. So that’s an evidence-based culture and the difference is massive.”

