Policy models are powerful decision-making tools, but backward-looking bias and lax application mean they can also be seductive and misleading

By Claire Craig

Thursday August 1, 2019

How to think about the future. Photo: Getty Images

When used well, models can reveal things about society nobody realised — but used poorly, they can lead us astray, writes Dr Claire Craig.

Observational evidence is always about the past. Policymakers want to know the consequences of their decisions for the future. Some of the biggest misunderstandings between scientists, scholars and policymakers, in both private and public debate, come about because of confusion about the basis of the relationship between the two.

One link is through models. “Humans are natural modellers — we carry models of our world in our minds. Our memories are significantly comprised of a mental model of the world in which we live, and our personal history of our experiences within that world. We navigate by means of maps: mental maps and the physical maps that we create”.

These might be intuitive and personal mental models about how some aspects of the world work, or they might be explicit and shared constructs. The constructs might in turn be expressed in a variety of forms. They might be architectural models such as the physical model of St Paul’s Cathedral constructed for Sir Christopher Wren, or today’s computer-designed and sometimes 3D-printed models. They include wonderful examples such as the physical hydraulic model of the macro-economic relationships between stocks and flows in an open economy created by William Phillips in 1949. They may be conceptual models, or they may be intuitive models of how families and social groups work, underpinning narratives such as, say, Persuasion by Jane Austen.

In broad terms, models are used for at least five purposes: prediction or forecasting, the explanation or exploration of future scenarios, understanding theory, illustrating or visualising a system, and analogy. They are always in some sense fictions, but they are fictions that simplify and abstract important properties of the actual object or system being modelled, to create insights or outputs useful for the purpose at hand.

In science, computational models form the basis of understanding of the largest systems, such as galaxies, and the smallest, such as cells. In society, to pick a few examples, models help design and run cities, manufacture cars, streamline business systems and the operation of hospitals, and generate cleaner production processes.


In public policy, the past and future are often bridged by the use of computational models, such as those of the climate, of the economy, or of the spread of an infectious disease. Here, models play a particularly important role because they not only create robust evidence but also do so in a way that improves the quality of debate around that evidence. This is partly because they act as vehicles to convene groups: those who supply the data and the expertise, those who must inform and make decisions about the questions for which the model should be developed and to which it should be applied, those who make judgements about the assumptions that underpin the model, and those affected by the model’s outcomes.

This convening function means a wide group of stakeholders, with different forms of expertise, can develop, challenge or use the model. However, these are often models of systems or, more frequently, of parts of systems, which are different from the systems the policymaker would actually like to be able to model.

In 2013 the Macpherson Review of the quality assurance of models used across UK national government found over 500 models with a significant influence on policy. They ranged from basic Excel spreadsheets constructed by in-house analysts to the models of the global climate created by international teams and subject to extensive peer review through the Intergovernmental Panel on Climate Change. The review, set up after the errors that led to a major economic and political failure in the Department for Transport’s handling of the West Coast rail franchise, highlighted several risks in the use of models. In particular, it can be very tempting and easy for a model created for one purpose to be stretched to another to which it appears to fit but for which, in practice, it is less well suited and may even be misleading. Following the Macpherson Review, the government introduced guidance aimed primarily at ensuring proportionate external review and challenge of business-critical models.

The scale and scope of the class of models that rely on computation are rapidly increasing and will continue to do so. Increasing volumes of data, smarter algorithms and cheaper computing power combine to expand the range of situations to which computational models can be applied and the reliability with which they can be used. They enable better understanding of complex systems, and create insight into the behaviours and possible futures of those systems.

Models are powerful, but they can also be seductive and misleading. The basic lessons for using them wisely are to ensure that the data is good, that the policy client is engaged with the modelling experts throughout, and that the assumptions behind the model and its limitations are well understood both by its designers and by those who may act on its findings.


Machine learning techniques and other forms of data science are further extending the range of circumstances in which scientists and policymakers could and should use modelling, and they have the potential to open up many new areas of policy insight, evidence and creativity. They also create some new challenges. One is that they enhance the range of circumstances under which seductively powerful findings, particularly when presented visually, can persuade decision-makers or the public to believe in or act on relationships that are not sufficiently well understood. Poorly applied, they can increase the risk of confusion between correlation and causation, or reinforce the effects of unexamined bias in the data through statistical stereotyping.
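To see how a model can turn historic bias into apparently objective predictions, consider a minimal sketch in Python. This is not from the book: scikit-learn is an assumed toolkit, and all data and variable names (group, skill) are invented for illustration.

```python
# Hypothetical sketch: a classifier fitted to skewed historic decisions
# learns the skew as if it were a rule, and projects it forward.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Invented historic data: at equal skill, group 1 was approved far less
# often. That gap is the (statistical) bias baked into the records.
group = rng.integers(0, 2, n)
skill = rng.normal(0.0, 1.0, n)
historic_approval = (
    skill + np.where(group == 1, -1.0, 0.0) + rng.normal(0.0, 0.5, n)
) > 0

model = LogisticRegression().fit(
    np.column_stack([group, skill]), historic_approval
)

# Two future applicants with identical skill but different group labels:
probs = model.predict_proba([[0, 0.5], [1, 0.5]])[:, 1]
print(f"group 0: {probs[0]:.2f}  group 1: {probs[1]:.2f}")
# The historic gap reappears in the predictions: a past correlation is
# silently turned into a future policy unless the data is examined.
```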

The resolutions will be different in different circumstances. Where the technology is perceived as more mundane and the systems around it are trusted, societies and individuals are entirely comfortable relying on things that have been proven to work well but which they don’t understand. Most individuals don’t understand why an aeroplane flies safely, and aspirin was used for pain relief for many years before there was any good description of how it worked.

Meanwhile, the options for policymakers to incorporate machine learning systems into accountable decision-making are likely to increase. There is much research into developing machine learning systems that also provide explanations, interpretations or accounts of how they derived their findings. However, in some circumstances policymakers may have to decide to trade off precision for accountability, as a system that can be interpreted may be less accurate than one that cannot.
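As a concrete, hypothetical illustration of that trade-off (scikit-learn and its bundled dataset are my assumptions, not the book’s example), compare a shallow decision tree, whose entire logic can be printed and read, with a boosted ensemble that is usually more accurate but effectively opaque:

```python
# Hypothetical sketch of the accuracy/interpretability trade-off:
# a readable model versus a more accurate but opaque one.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)

# Interpretable: a depth-2 tree a decision-maker can inspect in full.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
# More accurate but opaque: an ensemble of a hundred small trees.
ensemble = GradientBoostingClassifier(random_state=0)

for name, clf in [("shallow tree", tree), ("boosted ensemble", ensemble)]:
    print(f"{name}: {cross_val_score(clf, X, y, cv=5).mean():.3f} accuracy")

# The shallow tree's full decision logic fits in a few printed lines;
# no comparable account exists for the ensemble.
print(export_text(tree.fit(X, y)))
```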


Less often discussed is the extent to which qualitative mental models of how the world works form the basis of policy decisions and public debate: models of families, interpersonal relations, communities and nations. In addition, for decision-makers, the question may not be about how they believe the world works, but about what they believe about how the world should work, and how far to impose that belief on what the model tells them.

This conundrum is thrown into sharp relief by machine learning, with its widely discussed problems of statistical stereotyping. If a system makes inferences about the future only by learning from historic data, it will project historic patterns forward. If the data is biased (in the scientific sense of being skewed, rather than prejudiced), then the projections will be biased too. In these instances, the new uses of historic data are forcing, or helping, stakeholders to consider how their desired models of the future differ from the realities of the past, what the bases for those differences are, and how they can more explicitly design policies and practices that embed their chosen values and aspirations while drawing on the best data available.
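What designing in chosen values can mean in practice is easiest to see with a small, hypothetical sketch (all numbers invented): rather than applying one global threshold to scores learned from a skewed history, stakeholders might deliberately set per-group thresholds that realise a target approval rate. That is an explicit policy choice with trade-offs of its own, not a neutral statistical fix.

```python
# Hypothetical sketch: embedding a chosen value (equal approval rates)
# instead of projecting a historic gap forward. Data is invented.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
group = rng.integers(0, 2, n)
scores = rng.normal(0.0, 1.0, n)
scores[group == 1] -= 0.8  # skew inherited from historic training data

def approval_rate(g: int, threshold: float) -> float:
    return float((scores[group == g] > threshold).mean())

# Projecting the past: one global threshold reproduces the historic gap.
print("global threshold:", approval_rate(0, 0.0), approval_rate(1, 0.0))

# Embedding an aspiration: per-group thresholds chosen so the same share
# of each group is approved, making the value judgement explicit.
target = 0.3
for g in (0, 1):
    thr = float(np.quantile(scores[group == g], 1 - target))
    print(f"group {g}: threshold {thr:.2f}, rate {approval_rate(g, thr):.2f}")
```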

This extract is from How Does Government Listen to Scientists? (2019), published by Palgrave Macmillan, an imprint of Springer Nature. It is reproduced here with their permission.
