On the first day of the Global Evidence Summit (GES) in Cape Town, South Africa, in spite of jet lag and conference exhaustion, I eagerly attended the late afternoon session titled, “A panoramic view of rapid reviews: Uses and perspectives from global collaborations and networks.” During my time working at the International Initiative for Impact Evaluation (3ie), I was converted into a true believer in systematic reviews. Before my conversion I knew that literature reviews suffered from bias due to a researcher’s selection of studies to include, but I was less aware of the established methods for conducting unbiased (well, more accurately, less biased) reviews. Systematic reviews were the answer! As I worked with and on systematic reviews, however, I became frustrated to see how much time and how many resources they can take. I was keen to learn more about rapid reviews. This session provided a great overview of rapid review approaches and some of the recent advances.
The first presenter was Karla Soares-Weiser, Deputy Editor-in-Chief of Cochrane, representing Cochrane Response. Cochrane Response is the Cochrane Collaboration’s “evidence consultancy unit.” The mission of the unit is to provide high-quality reports, based on systematic review research, in response to clients’ needs. The focus of Soares-Weiser’s presentation was Cochrane Response’s Targeted Update product. My description here is drawn from her presentation and from the information on the Cochrane Response website. A targeted update starts with an existing Cochrane systematic review that is relevant to the evidence needs of the client. The product is an update to that review (i.e., a search and screening of literature published since the original review’s cut-off date) that targets just a subset of the original review, typically only one or two comparisons and no more than seven outcomes. A targeted update can be conducted in a four- to nine-week period and is presented in a four-page template that includes a summary of findings table. The figure below from the Cochrane Response website lays out the steps in the process. What wasn’t clear to me from the presentation was what happens when there isn’t a highly applicable existing Cochrane review from which to start.
The second presenter was Daniel Phillips, Research Director at NatCen Social Research, Associate Editor for the International Development Coordinating Group of the Campbell Collaboration, and a former colleague of mine from 3ie. Daniel started by explaining that Campbell is the social sciences equivalent of, and sister organization to, the Cochrane Collaboration. (For those who don’t know either of them, they are non-profit organizations that organize and coordinate the work of large networks of researchers conducting systematic reviews. They set standards, publish reviews, and promote research uptake to policy.) Daniel pointed the audience to two useful resources: “A scoping review of rapid review methods,” by Tricco et al., published in BMC Medicine (2015), and the new WHO practical guidelines on rapid reviews being launched at the same conference. He then outlined some of the shortcuts that rapid reviews take at each stage of the review process:
- At the search stage, restrict the number of sources or types of sources searched
- At the screening stage, use fewer screeners or use technology to conduct some of the screening (a sketch of what this can look like follows this list)
- At the data extraction stage, use fewer data extractors or only double check a sample of the data extracted
- At the synthesis stage, employ narrative synthesis instead of meta-analysis
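To make the “use technology” screening shortcut concrete, here is a minimal sketch of machine-assisted screening prioritization. Everything in it is invented for illustration: the handful of labeled titles stands in for the thousands of records a real review screens, and a production team would more likely use a dedicated tool (Abstrackr and Rayyan are examples) than roll their own. The idea is simply to train a text classifier on records that human reviewers have already labeled, then use it to rank the unscreened records.

```python
# A minimal sketch of technology-assisted screening prioritization.
# All records below are hypothetical; a real project would train on
# thousands of human-labeled title/abstract records.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Title/abstract text for records a human has already screened.
labeled_texts = [
    "Randomized trial of cash transfers on school enrollment",
    "Cost analysis of hospital procurement software",
    "Impact evaluation of school feeding on attendance",
    "Editorial on hospital management trends",
]
labels = [1, 0, 1, 0]  # 1 = include, 0 = exclude

# Records not yet screened by a human.
unscreened_texts = [
    "Cluster-randomized study of scholarships and enrollment",
    "Opinion piece on software vendors",
]

# "Teach the machine": fit a simple text classifier on the human labels.
vectorizer = TfidfVectorizer(stop_words="english")
X_labeled = vectorizer.fit_transform(labeled_texts)
clf = LogisticRegression().fit(X_labeled, labels)

# Score the unscreened records so reviewers read likely-relevant ones first.
scores = clf.predict_proba(vectorizer.transform(unscreened_texts))[:, 1]
for text, score in sorted(zip(unscreened_texts, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {text}")
```

Note that in this sketch the machine does not replace human screeners; it reorders the queue so that likely-relevant records are read first. That is one way “some of the screening” can be delegated to technology without abandoning human judgment.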
The next presenter, Susan Norris from the Guidelines Review Committee of the World Health Organization, told the story of WHO’s need for a rapid advice guideline on personal protective equipment during the Ebola epidemic. To inform the guideline, a team of researchers produced a rapid review of the evidence on personal protective equipment used in situations of viral hemorrhagic fevers. They conducted the review in seven weeks! You can read the review here. Even though they expanded the search criteria to include non-comparative studies and a broader set of disease outbreaks, they found very little evidence on which to base the guidelines and relied in part on a survey of returning health workers.
A subset of those researchers wrote a separate article outlining their recommendations for developing WHO rapid advice guidelines in general, including for the conduct of rapid reviews. In the GES presentation, Norris focused on the criteria for assessing the need for a rapid advice guideline, which I reproduce here from the published article. The criteria are:
- What is the type of emergency and risk to public health?
- Is the event novel?
- Does uncertainty need to be urgently addressed?
- What is the anticipated time frame for the event?
- Will the rapid advice guideline be rapidly implemented?
The next speaker was Michel Laurence, the Co-Chair of the Guidelines International Network (G-I-N) Accelerated Guideline Development Working Group. The aim of the working group is to develop a manual for producing clinical practice guidelines or recommendations in six months or less. Laurence explained that G-I-N does not see accelerated guidelines as an interim product, so they should only be produced for a limited number of questions and where there is no major controversy. They can also be used as a tool for updating existing guidelines. The working group was meeting in conjunction with the GES conference to discuss the draft manual. One thing that Laurence emphasized is that stakeholder consultation is a vital step (dare I say a tonic?) for any guideline development and should not be skipped, even for accelerated guidelines.
The final speaker was Elie Akl from the Knowledge to Policy (K2P) Center at the American University of Beirut. Akl presented the example of the K2P Rapid Response product, which he called a “fit-for-purpose” rapid review. Like the Cochrane Response targeted updates, the K2P rapid responses are produced for clients to address specific questions. K2P has produced three examples of rapid responses so far, which are available on their website. As explained by Akl and described in the rapid response documents, K2P draws on systematic reviews as well as single research studies for these reviews, which can be produced in three, ten, or thirty days to inform decision making on a timely basis. I admit that my conference exhaustion was really hitting me at this point in the session, but one thing he said stood out: he suggested that the authors of rapid responses could go back later and “face lift” the reviews for publication. The immediate question in my mind was, what if the more careful work of the face lift points to different recommendations than the original?
The general message I heard at this session is that the systematic review community understands that the perfect cannot be the enemy of the good, particularly when clinical practice and policymaking require evidence in a timely manner. But how fast is rapid? I was a bit dismayed that most of the models presented still take months rather than days to complete. On top of that, they can be very resource intensive. Setting up machine learning to speed up the screening process, for example, requires a heavy investment both in technology and in “teaching” the machine. The seven-week review of personal protective equipment for health care workers has 14 authors, all of whom contributed equally according to the journal publication. So it was fast but extremely resource intensive. It also seems that rapid reviews are better able to combine speed and quality in the health sector, where high-quality systematic reviews already exist. What about in other sectors?
I understand the reluctance to trade off quality for speed when developing recommendations that apply to general clinical practice or to high-level policymaking. But often we have urgent questions about program design for which we want to review high-quality evidence in an unbiased way. The challenge I would like to take up is to develop an approach to rapid reviews for addressing immediate program questions, an approach that requires fewer resources but still emphasizes high-quality evidence and avoiding bias. What I learned at GES will help me get started.
For more highlights from the Global Evidence Summit, check out #GESummit17 on Twitter.
Next year, the conference is called the Global Evidence and Implementation Summit, and it will be held in Melbourne, Australia, in October.