In an era of real-time data and customer feedback, is performance auditing broken? If they continue to reward process and ignore outcomes, ministers may turn elsewhere, argues the NSW Finance, Services & Innovation secretary.
This article is adapted from Martin Hoffman’s address to Impact 2018: International Meeting of Performance Auditing Critical Thinkers, organised by the Audit Office of NSW, on March 19.
Unusual opening — a bible story — Matthew Chapter 25:14-30, the Parable of the Talents. [Due respect to those of non-Christian heritage.]
Three stewards entrusted with resources by the master (aka ‘the minister’) while he is away.
Two had invested and profited — they were rewarded.
One buried the money, safe-guarded it, took no risk, ventured nothing, returned the principal and nothing more — and was castigated and cast out to the gutter.
So, a pretty clear message there for us stewards, us public servants: we are encouraged to have a go; it is not good enough merely to safeguard the resources with which we are entrusted by the government, to preserve the status quo.
Of course what is missing and unclear — and I remember wondering this even as a child in Sister Evangelina’s Year 2 classroom — is what the master’s response would have been to the steward who invested and did not profit, who invested in good faith but failed … and lost some or all of the principal. We didn’t get that example in the parable.
Failure and the magnificent certitude of hindsight
Of course that is one of the challenges for innovation – what if it doesn’t work?
And what will the Audit Office say about it, writing with the magnificent certitude of hindsight and self-defined deadlines?
Or just as importantly, what does the public service think the Audit Office will say about it?
Well we can all comfort ourselves by repeating what we all know it should say about that situation:
- Were the outcomes or goals defined in advance?
- Was there good process and governance, etc?
- Was there good risk management; in the sense of a clear-eyed and insightful attempt to anticipate risks and mitigate to the extent possible?
- Was there proportionate balancing of investment and likely return vis-à-vis risk?
- Were problems called early and honestly, or hidden? Was the stop decision made when it should have been?
- Were learnings genuinely recorded, shared, etc for future efforts?
And if we can say yes to all of that, well then a performance audit should have no problem with an innovative attempt at something that failed.
The challenge to practitioners is to ask: is that always the way it happens in practice?
I worry about a couple of things:
1. An overly strong focus on process and practice compliance as a proxy for outcomes
It’s easier to follow the AO’s own better practice guides – useful in themselves, but compliance with them then becomes the focus of the audit itself.
I’ve seen it myself: performance audits of a grants program that note critically that not all best practice probity guidelines were followed, or at least were not documented so that it could be shown they were followed. Fair cop.
But then ask: is there any evidence that the grant recipients were not worthy, or that their selection was tainted in any way? In other words: was there a failure in probity outcome rather than probity process?
Too often in my view, the audit report won’t go there. Won’t venture an opinion either way. Won’t say that notwithstanding the lack of probity process documentation, there is no suggestion the grant recipients are not worthy.
To borrow from financial auditing: performance auditing can tend towards the American model, in which “true and fair” is defined as strict compliance with standards, rather than the Australian model, in which “true and fair” is an overall judgement informed but not determined by standards. (This is an overly simplified characterisation of financial accounting standards, I know, but you get the point.)
A further problem here is the shift in the processes used by a lot of innovation — the formal, defined processes of agile and lean. These involve very different documentation and decision-making from the standard waterfall approaches embedded in many better practice guides: kanban walls, scrums and stand-ups. Far fewer formal artefacts are available for review after the event.
Given that innovation is, by definition, about doing something differently from the way it has been done before, performance auditing is a foe if it doesn’t focus relentlessly on results and outcomes, however achieved.
2. Speed or timeliness
Is performance auditing, as usually or currently practiced, able to keep up with the pace of modern society and government?
We all know government moves often at one of two speeds: glacial bureaucracy, or frantic full-steam ahead. And it’s sometimes in the latter that the innovation, for good or bad, is happening.
I’d have to say performance auditing is pretty steeped in its slow, big, formal, hindsight reports. Often not much use to the program being reviewed — and of course questionable regarding the ability of the public sector to learn from them in other applications. (A whole other topic.)
Take an analogy from the field of strategic planning: new concepts of strategic agility and responsiveness, of creating optionality, rather than the big heavy machinery of annual strategic planning and budgeting — as if we really could predict the future with any value 3-5 years out. To me, performance auditing now seems very much in keeping with the latter rather than the former.
And there is a developing culture of performance management based around increasing amounts of real-time data. I literally have dashboards on my phone that tell me, in real time, how many Active Kids $100 vouchers were claimed by parents and redeemed by sports clubs. The count changes as you look at the chart. Comments and complaints flow through in real time too.
How does performance auditing work in that environment? Some value of course in stepping back from the fray, from the hurly burly of the constant flow, and casting a coolly detached eye over it all. Of course there is. But is that enough?
One might say, where is the innovation in performance auditing including as to speed?
Now you might say well it’s all fine – performance auditing done well can cope with all that. And your agencies all do it really well of course I know!
But I’d challenge you with the evidence that innovation is coming in this space and you are being relegated to the margins – precisely because of the problems of process and speed, amongst others.
The evaluation movement
In the UK, they’ve created the What Works Network:
The network is made up of 7 independent What Works Centres and 2 affiliate members. Together these centres cover policy areas which receive public spending of more than £200 billion. What Works Centres are different from standard research centres. They enable policy makers, commissioners and practitioners to make decisions based upon strong evidence of what works and to provide cost-efficient, useful services.
The centres help to ensure that thorough, high quality, independently assessed evidence shapes decision-making at every level, by:
- collating existing evidence on how effective policy programmes and practices are
- producing high quality synthesis reports and systematic reviews in areas where they do not currently exist
- assessing how effective policies and practices are against an agreed set of outcomes
- sharing findings in an accessible way
- encouraging practitioners, commissioners and policymakers to use these findings to inform their decisions
The current What Works Centres are:
| What Works Centre | Policy area |
| --- | --- |
| National Institute for Health and Care Excellence (NICE) | Health and social care |
| Sutton Trust/Education Endowment Foundation | Educational achievement |
| College of Policing What Works Centre for Crime Reduction | Crime reduction |
| Early Intervention Foundation | Early intervention |
| What Works Centre for Local Economic Growth (hosted by LSE, Arup, Centre for Cities) | Local economic growth |
| Centre for Ageing Better | Improved quality of life for older people |
| What Works Centre for Wellbeing | Wellbeing |
| Affiliate: Public Policy Institute for Wales | |
| Affiliate: What Works Scotland | |
To me, that all sounds an awful lot like what you would hope performance auditing was doing and covering.
In the US, another example: Washington State Institute for Public Policy.
WSIPP helps the Washington state legislature and other Washington state policy makers make evidence-based policy decisions by conducting public policy research and carrying out cost-benefit analyses of the state’s programs and policies.
Also, Nick Gruen, one of Australia’s most innovative public policy thinkers, has proposed the establishment of what he calls an Office of the Evaluator-General.
“So here’s what I propose…..Evaluation would be done by people with domain skills in both evaluation and in the service delivery area who were formally officers of the Office of the Evaluator-General. They would report to both the portfolio agency delivering the program and to the evaluator-general with the EG being the senior partner in the event of irreconcilable disagreement. All data and information gathered would travel to the centre of both the EG’s and the departmental systems. Meanwhile, the portfolio agency would report to their minister but the EG would report to Parliament — as the auditor-general does.
“The monitoring and evaluation system would be built from the start to maximise the extent to which its outputs can be made public and the public could be given access to query the system, though the system itself would only provide public information outputs that met strict privacy safeguards.
“One could make the EG an office within the auditor-general’s function. After all program evaluation is already one of the auditor’s functions. This might be worth trying in the short term, but I doubt it’s a good idea even to that extent. Despite auditor-generals’ fondest endeavours their involvement is often seen as inimical to innovation. By contrast a central purpose of the EG would be to grow the intelligence for the system to successfully innovate.”
And in NSW, certainly, ministerial attention is turning towards these sorts of institutions rather than making more use of the AO’s performance auditing. Two ministers visited WSIPP in January, and there is a range of engagement with the UK’s What Works Centres, and so on.
More performance partner than performance auditor
I think the profession needs to ponder why these alternative institutions, actual and proposed, are attracting attention, given the real powers, independent access and capability that the AO already possesses.
Why, to use Gruen’s words, is the AO seen as “inimical to innovation”?
I’ve canvassed some thoughts already. And I don’t want to make too much of the trite whinges — about being overly concerned with reports (or report headlines) that gain tabloid media coverage; and the performance audits that come to resemble fishing expeditions for the gotcha moment — other than to say that perhaps a cliché is usually a cliché because of an original kernel of truth somewhere, sometime. Fair or not.
One further observation: think about the extent to which the very concept of independence is helping or hurting. It is quite rightly held extraordinarily dear — it cannot be compromised in the sense of being constrained as to what you can and cannot look at, what you can and cannot say. But I just wonder sometimes if it is held in a way that removes any sense of accountability for actual improvement in performance, in outcomes. Just too distant.

And so agencies and ministers go looking for somebody who will be more of a partner, more the critical friend, someone more invested in the outcome themselves. And of course you are invested in the outcome, I know — you do your jobs because you want better results in government. I do wonder, though, if there is a way to be more the performance partner than the performance auditor, without compromising independence. Time doesn’t permit developing this idea further — perhaps one for later.
But let me conclude on a more positive and hopefully helpful note — perhaps two small ways to address the challenge of innovation in performance auditing, such that it is supportive, and not inimical, to wider public sector innovation.
Firstly: I’d love to see horizontal, cross-jurisdictional studies, within Australia and internationally. All our governments are doing very much the same things, and yet, as far as I know, there are few if any joint performance audits across jurisdictions. A performance audit across states that looked, for example, at how our respective agencies responsible for regulating retirement villages were performing would be very useful. Why couldn’t you join up and do this at the same time, with the same data? We greatly under-utilise the laboratory of competitive and comparative federalism. Innovation is absolutely not just about coming up with the best new idea all by yourself; copying and learning from other environments is a very valuable source of innovation.
Secondly: vertical, longitudinal or ongoing studies. Rather than a one-off review, why not an embedded process of reviewing changes in performance over time, based on that performance data I spoke of earlier, offering more in-the-moment advice based on comparative best practice and evidence (what works) rather than only hindsight accountability or criticism? How are homelessness programs performing year on year, for example — and a chance for AO recommendations to have an impact before the next measuring stick.
I can’t believe I’ve just proposed to have the AO permanently in my agencies – but well there you go!
We all know improving government performance is shall we say a target rich environment! There is so much we can and should be doing better. It motivates me every day. I know it does you too. I hope my comments, offered in that collegial spirit, have been of some interest and assistance.
Read more at The Mandarin: Why we accept travesties of ‘evidence-based’ policymaking.