Competition as a means to an end: human services supply chain needs a brain

By Nicholas Gruen & Chris Vanstone

Friday August 19, 2016

In part one, Nicholas Gruen and Chris Vanstone explored how it’s government’s reputation on the line as human services are increasingly delivered by a corporate supply chain, drawing on lessons from Toyota.

A supply chain needs a brain

About two years after the fall of the Berlin Wall, a delegation of Russian bureaucrats visited London and one famously asked Paul Seabright, a leading economist who met with them, “who is in charge of the supply of bread to the population of London?” Counterintuitive though it may be, in a market, no one’s in charge. That’s not true of a corporate supply chain. It will make use of markets in myriad ways, but, like a body, it will have a nervous system providing it continual feedback about the performance of its various parts, and a brain capable of aggregating that information and strategising improvement.

This is what we were getting at in the Australian Centre for Social Innovation's (TACSI's) submission to the Productivity Commission's Human Services inquiry:

“A ‘brain’ works out what creates outcomes and what doesn’t. A ‘brain’ would know how the system is currently performing and how to improve and grow services when they are working. . . . In many of the sectors in which we work, though the initial system rarely performed as we would like, increased outsourcing has, in fact, reduced expertise as to what works. The risk is that government has become an expert in contract management, and service providers have become experts in ‘contract delivery’. ‘Brains’ have atrophied on both sides. We’d like to see the Commission reverse this trend and enable both to be experts in ‘outcome creation for vulnerable groups’. We work with some of the most progressive government departments and service providers and they often struggle to keep their ‘brains’ intact because of the larger forces operating around them.”

Indeed, in exploring one area of human services that had produced disappointing results within a large department run by senior managers who were dedicated and highly regarded (including by us), we nevertheless found that one operational unit had begun delivering considerably improved performance. But this went undetected amongst senior management and the unit was disbanded with changing departmental priorities.

A brain needs a nervous system: managers must be kept honest

Even if government administrations learn the lessons argued for above, the greatest difficulty remains as it ever was. For profit-seeking firms in a market, quality, cost, profitability and economic value are relatively tangible. Even where they are quite imperfect, markets offer considerable intelligence (transparency) about the economic value of specific goods or services and comparative costs between suppliers, all of which helps keep management to the honest toil of satisfying customers at minimum cost.

Human services are much more difficult – conceptually and practically. Firstly, in the human world, interactions are more complex and uniquely situated within a context than is the case in the market provision of most products. So it’s much harder to know what works, how and why it works, how it affects other parts of the system, the difference between long- and short-term impacts and so on. Secondly, because of their public good and benevolent quality, many human services aren’t funded by paying consumers. This robs the resulting systems of all the transparency and feedback around cost, price and quality that emerges from the organic tension between buyers and sellers in a market.

In lieu of these market signals and disciplines, system managers typically specify performance measures. But designing good performance measures – measures that will have some diagnostic power for those operating the program – will often require intimate knowledge of the workings of the program itself. For instance, an appealing measure of the performance of a job placement program would be the number of job seekers continuing in new jobs after job-placement services. For a child protection program, it might be the number of children removed from struggling families following early intervention. But are these the right measures? Could too rapid job matching destroy value by foreclosing better matches, or by diverting valuable system resources to where they are redundant? And how does one weigh up the relative merits of child removal with poor home care? It is these things that monitoring and evaluation should be shining a light on, enabling the system to better understand and improve its own impact.

In all this, performance measures imposed from the top sound like a mistake waiting to happen. Bureaucracies have a terrible habit of role-playing their expertise, while in reality going through the motions and covering their arses. And this can occur whether service is delivered by lower levels of the hierarchy or by contractors. Yet our experience in TACSI tells us that progress occurs when we draw those we are trying to help into the process, when we’re intentional about the change we’re trying to facilitate and about the process of learning. That means articulating our theory of change and then testing each assumption we’re making about how change occurs. The rigour of this process, against which we’re constantly testing our practice, is ‘scientific’, seeking to test itself against reality at every turn. Competition within functioning markets has the same quality of relentlessly disciplining those within firms against a hard reality from without – from the market.

The real question confronting the PC’s inquiry into Human Services is how one might generalise TACSI’s ideas for program design and management to the system level. We’d propose these building blocks:

  1. Monitoring and evaluation (knowing what you’re doing) should be at the heart of program delivery – whether delivery is by government or external providers.
  2. It needs to be designed and administered at the level of those delivering services on the ground.
  3. However, the ultimate responsibility for the monitoring and evaluation system and the data within it should lie, not with the portfolio minister to whom the delivery agency reports, but with an independent agency that reports directly to Parliament like the Auditor General or the Ombudsman.

Monitoring and evaluation – the ‘nervous system’ of the production chain – would thus be:

  • Co-designed by those at the coalface of delivery and by experts in monitoring and evaluation;
  • Independent of the institutional imperatives of the delivering agency;
  • Transparent to all to maximise the scope for the wider system to learn, and to deliver improved bureaucratic and political transparency of performance.

Framing choice

Finally, it is worth noting the way choice is typically framed. It’s certainly worth asking whether allowing firms or NGOs to compete to supply human services might work better than delivery by government hierarchies. But questions about choice should also provoke deeper reflection. Consider how case-managers are chosen for clients in child protection agencies, or teachers for students or nurses for hospital patients. Allowing patients to choose their nurses could substantially disrupt efficient routines, but giving students greater choice in their teachers or clients greater choice of case-workers could prompt all manner of system improvements. This isn’t to say that choice is self-evidently the best criterion by which such matches should be made. These are complex cases and the clients of these services may not be particularly wise in pursuing their enlightened self-interest. However, exploring ways to reframe choice and specifically who chooses what (or whom) within human service agencies seems like a promising and overlooked consideration, whether or not we arrange the system to look like a market by having multiple suppliers compete for business.

In fact, in a TACSI-built peer-to-peer mentoring program for families going through tough times – Family by Family – the family that identifies itself as needing help chooses its mentor family from a selection of profiles, much as someone might browse a dating website. But crucially, we didn’t design this part of the program from some ideological judgement that choice is better, or even that choice would ’empower’ families seeking help – though we did want to empower them. It just seemed natural to do this as we co-designed the program with the families around their perspectives and needs, not the needs of all the higher status participants in the process – the mentor family, the family coach and other professionals running the program.

As long ago as 1907, John Dewey was ruminating on what might today go under the slogan of ‘citizen centric’ services. Thinking of schools, he wrote:

The change which is coming into our education is the shifting of the center of gravity. It is a change, a revolution, not unlike that introduced by Copernicus when the astronomical center shifted from the earth to the sun. In this case the child becomes the sun about which the appliances of education revolve; he is the center about which they are organized.

In many ways, schools have been transformed just as Dewey hoped – using his own words to drive the change. And yet, compared with an imaginable alternative embodying the spirit and not just the letter of his words, he’d be surprised at how little progress we’ve made at building services around those they’re intended to benefit – in schools, in hospitals, in human services more generally.

We should be wary of a repeat in which, to use Gary Sturgess’s words, in the pursuit of reform we “simply replace one kind of institutional monoculture with another”.

3 Comments
Geoff Edwards
4 years ago

Thanks, Nicholas Gruen for this Part 2 of your essay (and for your feedback attached to Part 1).

It is not difficult to find much within your two posts to endorse, but I can’t quite see a coherent conceptual framework. For example, I agree that we need to loosen the grip of self-interested professionals on system design and heed more the experience of clients at the coalface; but on the other hand, the essay argues that we need to engage professional experts (a different class, monitors and evaluators, presumably including economists) to build feedback into the system.

To take another example: we need an intelligent brain at the centre in order to coordinate the complex supply chains; but the essay argues that we should remove responsibility for performance evaluation from the public service centre and assign it to the Auditor-General or similar. In other words, we want to impose a cadre of checkers with a rationalist approach (and most likely a content-free generalist approach) to look over the shoulders of the portfolio staff who are supposed to hold deep technical expertise. I can’t reconcile the two exhortations. This mechanistic process by itself wouldn’t add intelligence to the centre as the checkers would lie outside the team. The caseworkers then adjust their practice to satisfy the checkers rather than the clients.

Let me give a case study to illustrate my point. In 2008, the federal Auditor-General published a report evaluating the federal natural resource management (NRM) program. The report concluded that the program had not been able to demonstrate its success. True. Not surprising, since it was endeavouring to improve management practices across cycles of drought, fire, flood and irruptions of kangaroos that take decades to play themselves out. Also, it was endeavouring to build the capacity of the rural community to manage these challenges – a human services task that amounts to building infrastructure, requiring years and years to build trust. The end result was catastrophic for the sector: the Commonwealth cut funding, increased the uncertainty of funding, moved further towards short-term project funding and piled on monitoring obligations.

The fundamental message that I can extract from your essay is one I fully endorse. It is that any solution to a complex task like human services or NRM requires multidisciplinary insight. The understandings of a field held collectively by clients, street level bureaucrats, middle managers, theoreticians from numerous disciplines and yes, rationalist evaluators – etc – need to be integrated through a transparent process of concurrent facilitation and empowerment. This is why bodies like the Productivity Commission are leading government into the wilderness. Although it may be consulting widely with experts and practitioners, the insights are assessed sequentially not concurrently. At its core it still seems to stick to a one-dimensional rationalist model of society and seeks to impose that model on every wicked problem in sight. For example, its 2014 report on public infrastructure scarcely even mentioned climate change and peak oil, yet it purports to present a model for transport planning. Some subjects are simply beyond its worldview.

Regardless of its worldview, and paying respect to your own distinguished history with the PC, I think the Commission’s enquiry procedure is not conducive to the empowerment that you are seeking. It follows a sequential process: Terms of reference / internal research / consultation / draft report / final report, with various forms of limitation built-in at each stage. To build the insights of all participants in these complex multilateral problems, a concurrent process is required where learning and insight sharing can be iterative and mutually encouraging.

Geoff Edwards
4 years ago

Thanks, Nicholas. Mostly, fine. I do think, however, that the modern enthusiasm for monitoring and evaluation is overdone. It is a feature of administration by contract rather than by trust. Given that your model of delivery of community services is built on intensive dialogue with frontline clients and street level bureaucrats, the model should be fairly self-correcting without a cohort of clipboards armed with “blue forms” checking up to discipline the operatives if they fall out of line. (The “Yes Minister” episode on this comes to mind.)

Anyway, putting that aside, I was really intrigued by your observations on the Productivity Commission. Given your familiarity with this advocate for competition, discoordination and fragmentation, there would hardly be anyone around in a better position to write a scholarly, sober and fair-minded critique of the organisation and whether it has any useful future role. If you have already written such an essay, I would be pleased to know the citation.

Nicholas Gruen
4 years ago
Reply to  Geoff Edwards

Thanks Geoff,

Your first paragraph indicates that you won’t entertain the possibility of evaluation working the way I’ve suggested it can. Of course, if you’re right that institutional imperatives will drive the thing toward ‘blue forms’, then the point I’m making is moot – academic. Because I’m less pessimistic than you on that score, I’m happy to debate it with you, but only in terms of what I’ve suggested, not some caricature of it.

Further, you appear to have a fairly binary idea in mind, in which “intensive dialogue with frontline clients and street level bureaucrats” is one pole – which is good – while those at the top of the hierarchy are not like that and are bad. The idea in my head is that each needs the other, that they’re not connecting properly at the moment, and that this is a huge structural and cultural issue. I tried to sum some of this up in this post. There are plenty of operations in which there’s “intensive dialogue with frontline clients and street level bureaucrats” that nevertheless don’t shackle themselves to the discipline of trying to figure out what works and why. They’re just delivering the service as best they can.
On the PC, you’ll find plenty of my critiques of the PC if you search the group blog I contribute to – ClubTroppo. In case you’re interested, here’s the draft of a history of automotive industry policy forthcoming in an academic journal. However, a critique of an organisation isn’t necessarily an endorsement of others. I despair at the Commission’s inability to really apply its mind to understanding and integrating perspectives on the world that are not its own, but then I don’t see much of that about anywhere. As for the PC’s future role, the bad news is that, for all its flaws, it’s probably one of our better public sector institutions. And it has been known to produce some great reports.
