Last week, the federal ALP committed to implementing a Treasury-based evaluator-general if it gained office at the next election. The APS Review has also been considering what role evaluation of public programs will have in its findings. Below is a summary from Australia’s leading evaluator-general proponent, Dr Nicholas Gruen, of the design and implementation issues that will need to be addressed.
The independent and the politically directed arms of the executive
Public sector services can be provided by either the ‘politically directed executive’ under the supervision of a political office-holder – in Australia’s system a minister – or the ‘independent executive’ which typically reports to the legislature independently of political direction.
Agencies are often situated within the independent executive, where they provide information or support the integrity of information and conduct more generally. Thus the auditor-general defends the integrity of government finance and wider systems. Others, such as the surveyor-general, bureaus of statistics or meteorology or the recently established Parliamentary Budget Office (PBO) form part of our informational infrastructure.
The establishment and expansion of Australia’s PBO alongside the Federal Treasury illustrates the principles at work. It was established to provide parliamentarians with government-funded fiscal policy expertise previously available only at the direction of the government through Treasury.
The current Opposition’s proposal to move responsibility for the Government’s economic forecasting from the Treasury to the PBO illustrates some principles by which functions should be allocated between the ‘ministerially directed’ and the ‘independent’ executive. As ‘spin’ engulfs political debate, and bureaucrats are increasingly drawn into assisting their political masters ‘perform’ government1, it seems sensible that this work be insulated from political direction or undue influence and for it to be seen to be so.
I want to take these principles further in pursuit of evidence-based learning in policy and delivery. Just as, under the Opposition’s policy, Treasury’s advice to the Treasurer would be based on independent expertise about the actual facts and most plausible futures supplied by the independent PBO, so similar principles apply more broadly to monitoring and evaluating the delivery of government-funded services.
I’ve proposed an evaluator-general: an independent statutory agency with investigative and reporting powers similar to the auditor-general’s, though in the area of monitoring and evaluation rather than audit. Its existence would promote the profession of evaluation, which unfortunately enjoys far lower professional status and visibility than economics, accounting, audit or even public policy.
However, its role goes well beyond sitting atop government monitoring and evaluation systems. The proposal envisages the evaluator-general as the institution through which a new demarcation would be operationalised between program delivery on the one hand and expert knowledge of program performance on the other. Thus a line agency might deliver a program – or commission third parties to deliver it – but the evaluator-general would direct, and provide substantial resources to, the monitoring and evaluation system constituting the program’s ‘nervous system’.
Thus, monitoring and evaluation would be designed and operated in the field by officers of the evaluator-general. For this to work well, they and the delivery agency would need to collaborate closely. However, the evaluator-general would have ultimate responsibility for monitoring and evaluation in the event of disagreement.
The evaluator-general would ensure that monitoring and evaluation outputs were available first and foremost to service deliverers to assist them optimise their performance. But subject to privacy safeguards, the evaluator-general would also make public the monitoring and evaluation system’s outputs together with appropriate comment and analysis.
The objectives of the new arrangements
The finely disaggregated transparency of performance information made possible by this arrangement would support:
- the intrinsic motivation of most of those in the field to optimise their impact by building their own ‘self-transparency’ to an impartial spectator and ‘expert critical friend’;
- public transparency to hold practitioners and delivery agencies to account;
- more expert and disinterested estimates of the long‑run impact of programs to enable a long‑run ‘investment approach’ to services; and
- a rich ‘knowledge commons’ in human services and local solutions that could tackle the ‘siloing’ of information and effort within agencies.
Further, by publicly identifying success as it emerged, an evaluator-general would place countervailing pressure on agencies to more fully embrace evidence-based improvements even where this disturbed the web of acquired habits and vested interests that tend to entrench incumbency. Thus the tendency Peter Shergold laments for “too much innovation [to] remain at the margin”, might be ameliorated2.
With journalism and political debate increasingly given over to spin, the public sector can strengthen its own independence from this process and help fill the gap left by the retreat of public interest journalism by strengthening the expertise, resourcing, independence and transparency of the evidence base on which it proceeds.
As has been highlighted since at least the Moran Review, Ahead of the Game (2010), governments need to increase substantially their investment in monitoring and evaluation. The establishment of the evaluator-general would be a good occasion on which to commit to this, and would provide the appropriate institutional environment in which it should take place. Even were this investment deficit remedied, it would take time to fully implement the vision set out here.
Accordingly, a substantial period should be set to move towards the vision; five years would be a reasonable time. Priorities should be set to gain early experience of crucial aspects of the complete model being proposed. Thus, in high-priority areas, experience should be gained in monitoring and evaluating new and innovative programs, and in comparing their efficacy with that of incumbent systems. Over that period, capacity should be expanded to ensure the commitment to evidence-based policy is realised throughout the public service.
This article is based on the submission from Dr Gruen to the APS review.