High-quality evidence is crucial for all human services evaluations. Rigorous research, with considered implications for the mainstream service system, can inform policy formation and help develop innovative services. In the youth homelessness field, a promising evidence base is forming around the ‘youth foyer’ model, an integrated approach to tackling youth homelessness that links affordable accommodation with training and employment. While government support for developing and funding foyer programs is growing, investment in high-quality research evaluating the model’s effectiveness continues to lag behind. This has significant implications for establishing a strong evidence base.
Constructing an evidence base
We assessed the quality of 15 primary Australian and international studies that examined the effectiveness of youth foyer or foyer-like programs on the lives of young homeless people. We initially set out to determine whether such models were effective. However, the uneven quality of the evidence base (that is, the robustness of the evaluation studies we found) led us to rethink our assessment of the research. Our review instead explores two main issues with the evidence base: first, the difficulty studies had in validating claims of foyer effectiveness; and second, the limitations of research design and methodology.
Evidence of effectiveness is, of course, a slippery concept. In a recent post at Power to Persuade, Paul Cairney cautions that policymakers use a range of information sources to inform their decision making, including sources that sit outside the hierarchy of scientific methods. Latour and Woolgar’s famous anthropological study of the scientific laboratory also demonstrates that facts aren’t simply uncovered, but are constructed through subjective decision-making processes. In the murkier world of social policy, decision makers must make sense of conflicting evidence of varying quality.
While we acknowledge that there are different sorts of evidence, we focus on distinguishing between two particular uses of the term. The first concerns the strength of the evidence, whether strong or weak: that is, whether findings show a program or policy to be effective or not. The second concerns the quality of the evidence, whether high or low: that is, how the evidence is gathered and reported, and whether it can be trusted by an external audience. If the research is robust, the quality of the evidence is high. Here we focus on the latter: the quality of evidence.
The quality of evidence
We found that the quality of the existing evidence base needs to be lifted before foyer programs can properly be assessed for effectiveness. In selecting and reviewing studies that evaluated this type of program, the lack of high-quality evidence prevented us from assessing the strength of the evidence. Our attention therefore turned to understanding why and how the evidence lacks rigour, and the implications of this for further research.
While most of the evaluation reports indicate that foyer programs produce positive outcomes for young homeless people, these promising claims must be validated against more rigorous criteria. Claims made by the studies reviewed were often difficult to validate because they did not differentiate between outputs and outcomes, which made it hard to identify long-term program effects. Compounding this problem, many studies did not properly document the programs they evaluated, and some failed to outline their own evaluation methods. This made it difficult for us to verify the claims these studies made about program effectiveness. While the programs evaluated in the studies we reviewed may well have been effective, this could not be verified from the research literature as presented.
The second issue that prevented the validation of research in this area related to research design limitations born of external constraints (e.g. funding). None of the studies reviewed included a comparison group, and the majority had no post-intervention follow-up. Findings were typically drawn from a single time point of data collection, alongside other methodological limitations that undermined research validity. While this is not unusual in human services evaluation, these limitations prevented the researchers from drawing any causal inference about the intervention itself.
How can we improve the quality of evidence?
The lack of a rigorous evidence base is not unique to research on youth homelessness interventions; it is prevalent across most fields of human services evaluative research. There are three key implications for future human services evaluation research in Australia. First, it is imperative to improve the rigour of studies and to lift the standard of evaluations. The lack of high-quality evaluations reflects inadequate funding and resources, which might otherwise have enabled more rigorous research.
Second, to lift the quality of evidence, agencies and research bodies need to embrace and implement ground rules. These should include a system for ensuring high-quality research design, appropriate documentation of programs and methods, use of a theoretical framework for interpreting findings, and a peer-review process.
Third, the lack of quality evidence also points to service development gaps that are mirrored in the corresponding research. Some of the issues identified may have resulted from the absence of service delivery tools such as a program logic and theory of change, or from researchers lacking access to these documents. As a result, the links between program outcomes and mechanisms could not be identified in the existing evidence. This has important implications not only for the quality of evidence in human services research, but also for potential service development improvements based on evidence-informed research.
As our review shows, there is a clear need for greater investment in research and evaluation of the foyer model, not only to enhance the rigour of research but also as an integral component of program development.
This article was first published at the Power to Persuade blog.