This year will see the Australian government pilot new ways to measure the impact of university research.
As recommended by the Watt Review, the Engagement and Impact Assessment will encourage universities to ensure academic research produces wider economic and social benefits.
This fits into the National Innovation and Science Agenda, in which taxpayer funds are targeted at research that will have a beneficial future impact on society.
Education Minister Simon Birmingham said the pilots will test:
“… how to measure the value of research against things that mean something, rather than only allocating funding to researchers who spend their time trying to get published in journals.”
This move to measure the non-academic impact of research introduces many new challenges that were not previously relevant when evaluation focused solely on academic merit. New research highlights some of the key issues that need to be addressed when deciding how to measure impact.
1. What should be the object of measurement?
Research impact evaluations need to trace a connection between academic research and “real-world” impact beyond the university campus. These connections are enormously diverse and specific to a given context. They are therefore best captured through case studies.
When analysing a case study the main issues are: what counts as impact, and what evidence is needed to prove it? When considering this, Australian policymakers can use recent European examples as a benchmark.
For instance, in the UK’s Research Excellence Framework (REF), which assesses the quality of academic research, the only impacts that can be counted are those directly flowing from academic research submitted to the same REF exercise.
To confirm the impact, the beneficiaries of research (such as policymakers and practitioners) are required to provide written evidence. This creates a narrow definition of impact: impacts that cannot be verified, or that are not based on submitted research outputs, do not count.
This has been a cause of frustration for some UK researchers, but the high threshold does ensure the impacts are genuine and flow from high-quality research.
2. What should be the timeframe?
There are unpredictable time lags between academic work being undertaken and its effects being felt. Some research may be quickly absorbed and applied, whereas other impacts, particularly those from basic research, can take decades to emerge.
For example, a study of time lags in health research found an average lag of 17 years between research and practice. It should be noted, though, that these lags vary considerably by discipline.
Only in hindsight can the value of some research be fully appreciated. Research impact assessment exercises therefore need to be set to a particular timeframe.
Here, policymakers can learn from previous trials, such as one conducted by the Australian Technology Network and the Group of Eight in 2012. This exercise counted impacts arising from research conducted during the previous 15 years.
3. Who should be the assessors?
It is a long-established convention that academic excellence is judged by academic peers, and evaluations of research are typically undertaken by panels of academics.
However, if these evaluations are extended to include non-academic impact, should the views of end-users of research also be included? This would mean involving voices from outside academia in the evaluation of academic research.
In the 2014 UK REF, over 250 “research users” (individuals from the private, public or charitable sectors) were recruited to take part in the evaluation process. However, their involvement was restricted to assessing the impact component of the exercise.
This option is an effective compromise: it maintains the principle of academic peer review of research quality while also including end-users in the assessment of impact.
4. What about controversial impacts?
In many instances the impact of academic research on the wider world is a positive one. But there are some impacts that are controversial — such as fracking, genetically modified crops, nanotechnologies in food, and stem cell research — and need to be carefully considered.
Such research may have considerable impact, but in ways that make it difficult to establish a consensus on how scientific progress impacts “the public good”. Research such as this can trigger societal tensions and ethical questions.
This means that impact evaluation also needs to consider non-economic factors, such as quality of life, environmental change, and public health, even though it is difficult to place dollar values on these things.
5. When should impact evaluation occur?
Impact evaluation can occur at various stages in the research process. For example, a funder may invite research proposals where the submissions are assessed based on their potential to produce an impact in the future.
An example of this is the European Research Council’s Proof of Concept Grants, where researchers who have already completed an ERC grant can bid for follow-on funding to turn their new knowledge into impacts.
Alternatively, impacts flowing from research can be assessed in a retrospective evaluation. This approach identifies impacts where they already exist and rewards the universities that have achieved them.
An example of this is the Standard Evaluation Protocol (SEP) used in the Netherlands, which assesses both the quality of research and its societal relevance.
A novel feature of the proposed Australian system is that it assesses engagement and impact as two distinct things. This means there isn’t one international example to simply replicate.
Although Australia can learn from some aspects of evaluation in other countries, the Engagement and Impact Assessment pilot is a necessary stage to trial the proposed model as a whole.
The pilot – which will test the suitability of a wide range of indicators and methods of assessment for both research engagement and impact – means the assessment can be refined before a planned national rollout in 2018.
Andrew Gunn is a researcher in higher education policy at University of Leeds.
Michael Mintrom is a professor of public sector management at Monash University.
This article was first published by The Conversation.