Evaluation is a scary word that conjures up images of scathing past audits for a lot of public servants. But it wasn’t always that way, according to Department of Infrastructure and Regional Development secretary Mike Mrdak.
Speaking to the Canberra Evaluation Forum yesterday about his efforts over the past five years trying to build a new culture of “evaluative thinking”, Mrdak said the difference between internal evaluation and external auditing had become lost on some sections of the department. Evaluation had been in decline right across the Australian Public Service since around the dawn of the millennium.
“Up until the turn of the last century … we had a very strong, centralised focus on evaluation through the Department of Finance, and the decision to effectively devolve responsibility for that and to no longer provide resourcing for that function … has become a great risk and cost to the Australian Public Service,” Mrdak told the CEF, a group formed by Finance in 1990 that spun off as a not-for-profit and was absorbed into the Institute for Public Administration Australia’s ACT Branch in 2013.
In the secretary’s view, one result of decentralising responsibility for evaluation was a loss of “enthusiasm and motivation” across the service. “And I think as a result, evaluation fell quickly across the APS among competing priorities, and with it went … much of the cultural thinking and the skills to be able to conduct, plan and manage evaluations.” It was in this context that Mrdak, shortly after becoming departmental head in 2009, set out to put evaluation front and centre in the minds of his staff.
“By evaluative thinking, what I really mean is we need to regularly question and test our assumptions; listen to our stakeholders; fundamentally know what we’re trying to achieve and how we’re performing against that objective; and we have to have the evidence and rationale to support good policy, program and regulatory design options,” he explained.
His key point was that evaluation must be part of any new program right from the initial planning phase and, importantly, that means working out where the data to be evaluated will come from. When Mrdak first took the helm, he worried the department often had “a set-and-forget approach” to its regulatory activities and program delivery and didn’t spend enough time on initial design work.
All too often, he said, those managing the programs conflated evaluation with external auditing and saw it as a threat. One defining feature of evaluation, he said, is that the teams being evaluated learn in the process: “If the teams aren’t doing it, they’re not actually learning. Evaluation by external parties is of no real value to the areas, in my view.”
The bottom line is there wasn’t enough evaluation going on, which meant fewer alternative options for program design and delivery were being examined and offered to government, leaving the department implementing some ideas that were “less than ideal”.
The secretary still believes the department did a generally good job of its responsibilities to ensure the safety of boats, cars and aeroplanes, manage infrastructure projects large and small, and administer programs to far-flung territories, which now include Norfolk Island. The issue he described was service-wide.
“In my view, where [the APS has] reached in many audit processes these days, we’ve lost a lot of common sense. And our evaluation processes in this town, and our audit processes often leave one wondering, because I don’t think we’re achieving a good outcome with many of the processes we do across the APS.”
An evaluation renaissance
To kick-start DIRD’s evaluation renaissance, Mrdak initiated a stocktake of the past five years of audit and evaluation activities in 2010-11. In 2012 he launched a new four-point evaluation strategy, focused on leadership, capability building, providing adequate management support and coordinating governance mechanisms.
“And one of the key principles we set right at the start which underpinned our approach is that our staff would learn by doing,” he explained. “This wasn’t going to be imposed upon them, evaluations would be strategically prioritised across the organisation and importantly, the findings of evaluations would be shared across the organisation to maximise their value. This was one of the key areas we had to work through with our people, because as soon as we said our evaluation reports were going to be made available, people started to think: ‘Am I going to be judged on this?'”
Workshops on “policy logic” gave staff a chance to openly discuss their program’s purpose and helped break down the perception that this was an uncomfortable examination being imposed from above. The cultural change is still a work in progress, but the key message always has been “this is not an intimidating process”, according to Mrdak.
“We want to embed evaluative thinking in our day-to-day management approach with regular reflection, probing and improvement, because it’s much more difficult to argue against continuous improvement [than] to argue against a one-off evaluation or audit process,” he said. The buy-in of senior executives was critical, as with any major organisation-wide change process. Branch managers and above are often the “blocker group” and so were most likely to react defensively, in Mrdak’s view. He needed champions of evaluation, not sceptics.
By 2012-13, a “breakthrough” came when line areas started to nominate themselves for evaluation. “We still have to drag some,” he admitted, “but it’s an encouraging sign when you get people putting their hands up.” Of course, it’s not just public servants who resist. According to Mrdak, the incentive structure for both public servants and politicians is “often such that people don’t always want to know how they’re travelling, or how the initiatives we’re managing and delivering are actually going”.
“That’s a key thing,” he said. “We’ve actually got to build a culture where we actually provide the right incentives for people to have an open and frank discussion, including with ministers and governments about the performance of their programs.”
It’s not easy to insist on an evaluation plan when you’ve been handed a lot of “very small, often time-limited programs” by way of election commitments and asked to build them quickly, the secretary acknowledged. It’s difficult to ask for enough time to do the proper program planning, but it’s an important way of standing up for good policy, rather than good-on-paper policy.
“The fact is, it’s an investment well made,” Mrdak told the small audience of CEF members, “and I think the fact is we’ve got to develop more viable measurements in terms of actual objective delivery. It’s one of the challenges, not just across the APS, but across all governments in Australia.”
He said the next step was to encourage evaluation to permeate throughout the culture of government itself.
“It is actually our responsibility to educate officers, [and] just as importantly the government, about the importance of evaluation and planning for the measurement of performance,” he said. “And they’re pretty tough conversations with ministers at times, when you actually have to explain that gee, this program — well intentioned — may not actually be delivering on the ground what you thought it was.
“But that’s our responsibility. Just as public policy’s an opportunity, it’s also a responsibility to make sure the messages are clear and we’re delivering that message back to government about how effective our programs and our regulatory activities are.”