Improving Improvement Reviews of Evidence

Written by Steve MacGillivray

In the early stages of the development of the Scottish Improvement Science Collaborating Centre (SISCC), I had a conversation with Prof Huw Davies about any previous review work he had been involved in, or was aware of, that could inform our activities. I was surprised when Huw said: “Steve, if you apply the usual approach you take to conducting systematic reviews, any review you go on to conduct in this area will be far better than most that exist”. At the time I felt that this was a very kind statement and one which was unlikely to be true. Huw is a giant in the field and has contributed substantially to the knowledge base in the area. Essentially, Huw was indicating that he believed the field to be generally lacking in rigorous approaches to the assembly and critique of published literature in Improvement Science. Other authors too have noted that “a significant body of work in the area of improvement has taken the form of editorial commentary, narrative review, or philosophical analysis rather than empirical studies”.

Perhaps it is no surprise that this is the case, since improvement initiatives are not necessarily concerned with generating “new knowledge” but are rather more focussed on the business of “producing change”. However, even if an initiative only seeks to effect change and does not explicitly aim to generate new knowledge, we would still argue that lessons can be learned from such activity that are of value beyond the immediate sphere of any particular improvement programme or initiative. If we are to value the lessons learned from such individual initiatives and move beyond commentary or opinion, then there is a need to focus on developing methods which allow unbiased assembly of multiple improvement studies. As many systematic reviewers within traditional health services research can testify, the assembly of multiple studies on any particular topic rarely results in the inclusion of studies which provide neat findings arising from homogeneous populations, contexts, interventions, or study methods. Complexity is the norm within academic health research and is increasingly something that systematic reviewers have to grapple with.

The same is true for Improvement Science. In recent years, the Effective Practice and Organisation of Care (EPOC) Group have been preparing and reporting systematic reviews “of educational, behavioural, financial, regulatory and organisational interventions designed to improve health professional practice and the organisation of health care services”. They have thus far published 110 full systematic reviews and 56 protocols. EPOC have also been developing new methods for the assembly and synthesis of study data in this area. If we are to learn lessons from the very large number of improvement studies that have been conducted, and that are currently underway or planned, then we need to continue to develop and improve systematic review methods in this area.

Steve MacGillivray PhD

Senior Lecturer in Evidence Synthesis
Group Lead Evidence Synthesis Training and Research Group (STAR Group)
School of Nursing and Health Sciences
Centre for Health and Related Research
University of Dundee
Affiliate member of Social Dimensions of Health Institute
Affiliate member of the Long Term Conditions Group
Affiliate member of the Mother and Infant Research Unit
Systematic Reviewer: Scottish Improvement Science Collaborating Centre
