Making better decisions: tackling the biases we all hold


Should government projects conduct pre-mortems to avoid optimism bias? Are you ignoring evidence that conflicts with your own beliefs? A new guide aims to help governments tackle their own biases.

There’s an American saying — “Congress does two things well: nothing and overreacting”.

Issues that make for good television or play on dominant political narratives frequently receive attention ahead of slow-moving and complex problems, regardless of which are more important.

And it’s not just a problem in the United States. Government decision makers around the world — indeed all people — have all sorts of biases in how they process information and act on it, introducing plenty of problems into policy making.

Of course, governments should be better at making rational decisions than Joe Bloggs, as they have armies of experts and other smart people to call on. But research shows public officials and experts are indeed vulnerable to cognitive biases. There’s even evidence that their increased confidence might make them worse at some things.

In an effort to help public servants and politicians focus on what really matters, UK think tank the Institute for Government and the Behavioural Insights Team have released a report applying behavioural insights research to government’s own processes.

“Anyone who has worked in government for even a short time will be aware that its structures and processes powerfully shape behaviour — perhaps without us realising,” says Head of the UK Civil Service Sir Jeremy Heywood in the foreword.

“Over the past decade we have seen how behavioural insights can offer governments new options for addressing policy problems, often at low cost. This report now sets out a series of practical ways that can improve the way governments themselves work.”

Our most common biases

The institute highlights a few key biases that can derail decision making:

Allocation of attention — certain issues are more likely to gain attention from policy makers, regardless of their level of importance.

Framing — the way an issue is presented can affect how we understand it. For example, a study with politicians and public servants showed they were more likely to opt for a risky solution when it was presented in terms of how many deaths it might prevent rather than how many lives it would save.

Confirmation bias — we look for information that confirms our existing views. A study of Danish politicians found they were good at interpreting data about ‘school A’ and ‘school B’ but bad when the schools were labelled ‘public’ and ‘private’. Their own views shaped what they believed the data showed.

Group reinforcement — some members of a group will self-censor to avoid disagreeing with the majority. For example, the Chilcot Inquiry found that the UK government did not subject its deliberations around going to war in Iraq to enough challenge, leading to bad decision making. Group dynamics can also lead to more extreme positions being taken as interlocutors reinforce each other’s opinions. In addition, the more group members that possess a piece of information, the more influence it has on the group decision, regardless of its actual quality or importance.

Inter-group opposition — when the pull towards group identity (and conformity) makes members reject arguments made by another group, regardless of their merit. Often this stems from an assumption that the other group is acting in bad faith, or from incorrect beliefs about it. This dynamic can even occur between different government agencies.

Illusion of similarity — people tend to assume more people share their own opinion than is actually the case, and so overestimate the support for certain policy ideas. Policy makers, who may have spent a long time immersed in an issue, also commonly overestimate how well the community understands a policy and why it is important.

Optimism bias — the tendency to overestimate our own abilities, planning and chances of success. This can lead to making risky decisions without fully assessing what might go wrong.

Illusion of control — we often overestimate how much control we have over events. In complex policy areas, circumstances can change unpredictably due to outside influence, or even due to government action in other policy areas.

Pre-mortems and red-teaming

Knowing these biases exist isn’t always enough to address them effectively — but there is a range of techniques individuals and teams can use to improve how they make decisions.

There are a few principles that can help when building institutional infrastructure, the institute believes:

  1. Transparency — it’s easy for biases and poor decisions to go unchallenged if they’re made behind closed doors.
  2. Accountability — if it’s unclear which senior official in the department is responsible for the quality of evidence used to inform decisions, it’s easier for bias to slip through.
  3. Challenge in the system — it’s often hard to challenge those you work with every day, and challenging those above you — political or official — can hurt your chances of promotion. So every agency should ensure it has independent voices in place who can challenge its decision-making and ensure it is rigorous. This challenge can also come from other government bodies or panels of experts.

The report also includes a range of specific techniques to combat particular types of biases.

An intriguing practice is the pre-mortem, where decision makers imagine that their project has failed, and then work back to identify why things went wrong. This process encourages people to explore doubts, highlighting weaknesses that can then be addressed. There is emerging evidence that pre-mortems can be successful in real-world settings, but they are still not widely used in policymaking.

Red-teaming is a similar approach used in the UK military, but it involves outsiders trying to find the weaknesses in a system. The Institute for Government thinks using outsiders might make it easy for staff to dismiss their concerns, and instead suggests dividing the project or policy team into supporting and opposing sides to debate an idea. While people who raise concerns in a normal setting are often perceived as wreckers, structured critique approaches such as pre-mortems and red-teaming create an incentive to show how clever you are by foreseeing the ways a project could fail.

Bringing in wider perspectives through processes such as citizens’ juries can be useful, but these should take place as early as possible to be worthwhile. There isn’t much use in holding a citizens’ jury once the substantive policy decisions have already been made — after all, citizens’ juries do sometimes decide they don’t like the government’s preferred policy.

Assembling cognitively diverse teams can strengthen decisions, especially where creativity is required. Social markers of diversity — race, gender, socio-economic background — are one aspect of this. There’s also the tendency for people who think similarly to cluster together, so including people from different professional backgrounds can be useful.

Explicitly separating analysis of the problem from proposals for a solution can help prevent policy makers immediately jumping to a preferred solution and finding evidence to justify it.

When assessing evidence, one effective strategy is to consider the opposite. This involves asking ‘would you have made the same judgement if exactly the same study had produced results on the other side of the issue?’ There is quite consistent evidence that this strategy leads to a more objective assessment of the quality of evidence.

Keeping two estimates — a best-case scenario where your idea works well, and a high-cost, low-impact scenario — can improve predictions, the report argues. It also suggests considering a ‘zero interest’ scenario: you may expect the public to be enthusiastic about your policy proposal, but what if there’s no reaction at all? Asking this question can help you come up with better contingency plans.

Building in opportunities to change course and revisit assumptions can also be useful for avoiding confirmation bias and path dependence. Conducting trials and building variations into policy execution where possible can be a useful way of tackling optimism bias.

Planning ahead can make it easier for government to respond when an issue suddenly gains prominence in the media. Thanks to consultation with experts and a subsequent in-depth research project on “the future of the sea” in 2016, the UK government was able to launch policy initiatives within several weeks of marine plastic becoming an issue with the release of David Attenborough’s Blue Planet II in late 2017.

“Public servants should hold themselves to the highest standards,” says Heywood.

“Projects like this one create new ways of making that happen. I welcome its findings and strongly support their translation into practice. … I am sure that these findings hold true for governments all around the world — and I look forward to exchanging examples of how they have been applied in diverse contexts.”
