An approach to rapid evidence review and what we learned when we piloted it


Back in 2017, after attending a session at the Global Evidence Summit, I got very excited about rapid evidence reviews. I wrote a post about what I learned in that session here. At the end of that post, I challenged myself to develop an approach to rapid evidence reviews that can address immediate programming questions using fewer resources than other rapid review approaches while still minimizing bias. I had some ideas in my head even then, particularly to make better use of systematic reviews and to introduce evidence maps into the approach, but it wasn’t until 2019 that I was able to flesh out the approach and pilot it. In this post and companion posts, I present what I learned from this pilot about rapid evidence reviews and also what my colleague and I learned about the evidence base for addressing the dual burden of malnutrition, the topic of the rapid review.

Why we need rapid evidence reviews
A primary challenge for good evidence use… is the difficulty drawing conclusions from existing research in an unbiased way when there is a limited period of time…
My motivation for developing a new rapid evidence review methodology (or, more modestly, for adapting existing approaches) was to improve evidence use at my own organization, a large international NGO. A primary challenge for good evidence use in the work of all program-implementing organizations is the difficulty of drawing conclusions from existing research in an unbiased way when there is limited time and limited staff for a given evidence need. Too often when practitioners need evidence for proposals or projects, they present “best practice” knowledge or use evidence from experience without reviewing the academic literature. Sometimes proposal and project teams do conduct literature reviews, but they do so through Google searches, website searches or snowball searches from previously identified studies. While these ad hoc search methods can find many articles, the selection of articles is biased. Those that pop up on the first page of a Google search, for example, may be those mentioned most in social media, which does not necessarily reflect the quality of the studies or the representativeness of the findings.

It is not possible in these situations to conduct “gold standard” evidence reviews – systematic reviews and evidence maps using Cochrane-Campbell methods. (For an introduction to systematic reviews, see my colleagues’ post.) Systematic reviews are extremely time and resource intensive, often employing numerous reviewers and taking 18 months or more to complete. Evidence maps also follow Cochrane-Campbell standards for search and screening but typically stop short of synthesis. (Learn more about evidence maps here.) Evidence maps are also relatively time and resource intensive. Depending on how much time is spent developing the framework for the evidence map, the search, screening and coding usually require at least six months and several reviewers.

…the growing volume of these products in the literature provides the “shortcut” for rapid evidence reviews.
However, while it is rarely possible to produce our own systematic reviews or evidence maps of primary studies for internal evidence needs, the growing volume of these products in the literature provides the “shortcut” for rapid evidence reviews. According to the World Health Organization practical guide on rapid reviews (Tricco, Langlois and Straus, 2017), rapid reviews “represent a knowledge generation strategy. They synthesize findings and assess the validity of research evidence using ‘abbreviated’ systematic review methods, modifying these methods to generate evidence in a short time” (p. 6). Rapid reviews can abbreviate systematic review methods in different ways and to different degrees, but the guidelines suggest that all rapid reviews follow the key tenets of systematic reviews, including having a clearly stated review question or objectives, predefined eligibility criteria, critical appraisal or assessment of risk of bias, and a “systematic presentation and synthesis of the results”.

Our approach to rapid evidence review
The goal of this methodology is to produce reviews based on unbiased searches in a four-to-six week period using only a few researchers.
The goal of this methodology is to produce reviews based on unbiased searches in a four-to-six week period using only a few researchers. The methodology begins with a review of reviews, which is consistent with many rapid review approaches. It then incorporates primary studies drawn both from the included studies of those systematic reviews and from evidence maps. Certainly, the number of included studies from existing systematic reviews and evidence maps will be much lower than the number of primary studies generated with a full search strategy. Those strategies typically yield many thousands of records that cannot be screened by a small team in a short period of time. The idea here is that a group of studies resulting from other systematic searches will be less biased than a standard literature review in which someone identifies studies based on what she has already read or what others are citing.

This methodology employs the following abbreviation strategies – compared to systematic reviews and evidence maps – for search and screening.

  • The index search strategy (e.g., Medline, PubMed, Academic Search) includes search terms to focus on systematic reviews and evidence maps. This greatly reduces the number of hits that then need to be screened. (See the sketch after this list for an illustration.)
  • The selection of indexes to search is more judicious than for a full systematic review.
  • The title and abstract screening is conducted by a single reviewer with a second reviewer available when there are questions.
  • The search of websites is combined with title and abstract screening so that only those passing title and abstract screening are uploaded to the screening software.
  • There is no snowball searching.
  • There is no expert searching.
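To make the first abbreviation strategy concrete, here is a minimal sketch of a review-focused index search using PubMed’s E-utilities esearch endpoint. The topic terms below are illustrative placeholders, not the search strings we actually used in the pilot; `systematic[sb]` is PubMed’s built-in systematic review subset filter.

```python
# A review-focused PubMed query via the NCBI E-utilities esearch endpoint.
# "systematic[sb]" restricts hits to PubMed's systematic review subset,
# which keeps the number of records small enough to screen rapidly.
import json
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Illustrative topic terms only -- not the pilot's actual search strings.
topic = '("double burden of malnutrition" OR (undernutrition AND obesity))'
term = f"{topic} AND systematic[sb]"

params = urllib.parse.urlencode({
    "db": "pubmed",       # index to search
    "term": term,         # topic terms plus the review-subset filter
    "retmax": 200,        # cap on the number of PMIDs returned
    "retmode": "json",
})

with urllib.request.urlopen(f"{ESEARCH}?{params}") as resp:
    result = json.load(resp)["esearchresult"]

print("Total hits:", result["count"])
print("First PMIDs:", result["idlist"][:10])
```

The point of the filter is to push the narrowing into the search itself, so that title and abstract screening starts from hundreds of records rather than thousands.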

One aspect of systematic reviewing where I didn’t want to cut corners in developing this methodology is critical appraisal. To inform our projects, we do not just need an unbiased selection of evidence, we need those studies to be high quality. Systematic reviews and evidence maps vary widely in quality, and the studies they include vary widely in quality. In this methodology, after systematic reviews are selected based on the other screening criteria, the reviewers critically appraise the included reviews using the ROBIS tool (Whiting et al., 2015). I selected this tool because it allows for greater judgment on the part of the appraiser to determine the level of concern. Other tools use more restrictive questions that are better suited to reviews of medical studies.

To ensure that the review takes no longer than six weeks, the output of the review may differ depending on how large the current evidence base is.
To ensure that the review takes no longer than six weeks, the output of the review may differ depending on how large the current evidence base is. If there are many relevant systematic reviews, the rapid review will be mostly a review of reviews. If there are many reviews that are not directly relevant but include relevant primary studies, the rapid review will focus on an evidence map of the primary studies extracted from reviews. Similarly, if there are few reviews but with many relevant included studies, the rapid review will focus on an evidence map of primary studies. Finally, if there are few relevant reviews and a small number of primary studies from the reviews, the rapid review should be able to provide a narrative synthesis of the primary studies along with an evidence map for them.
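As a sketch of that decision logic, the rules can be written out as below. The numeric cutoffs are hypothetical placeholders for illustration; in practice the call is a qualitative judgment made by the review team.

```python
# A sketch of the output decision rules described above. The cutoffs
# MANY_REVIEWS and MANY_STUDIES are hypothetical, not part of the
# methodology itself.

MANY_REVIEWS = 20    # hypothetical cutoff for "many" relevant reviews
MANY_STUDIES = 30    # hypothetical cutoff for "many" relevant primary studies

def choose_output(relevant_reviews: int, relevant_primary_studies: int) -> str:
    """Map the size of the evidence base to the rapid review output."""
    if relevant_reviews >= MANY_REVIEWS:
        # Many directly relevant reviews: mostly a review of reviews.
        return "review of reviews"
    if relevant_primary_studies >= MANY_STUDIES:
        # Few (or only indirectly relevant) reviews but many included
        # primary studies: an evidence map of those primary studies.
        return "evidence map of primary studies extracted from reviews"
    # Few reviews and few primary studies: synthesis becomes feasible.
    return "narrative synthesis of primary studies, plus an evidence map"

print(choose_output(relevant_reviews=5, relevant_primary_studies=120))
# -> evidence map of primary studies extracted from reviews
```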

Overview of the pilot

My research intern, Jacqueline Shieh, and I piloted the methodology in the summer of 2019. We started by identifying an internal “client”, in this case two colleagues who were preparing for an anticipated solicitation for a program to address the dual burden of malnutrition in a Middle Eastern country. The review questions we agreed on with the client were: What evidence do we have about programs and policies that address the dual burden of malnutrition, especially double-duty interventions but also interventions that focus specifically on overweight and obesity, in low- and middle-income countries (LMICs)? What evidence do we have about programs and policies that address the nutrition transition (from undernutrition to overweight)?

This rapid evidence review was conducted by the two of us between May 28 and July 8, 2019 (six weeks). We held an initial meeting with the clients to discuss the evidence needs and a second meeting to confirm our understanding of the review question and desired scope. We developed a search strategy using PubMed to test different search string combinations. We worked with a search librarian to identify the most relevant indexes and then to run the searches. I should note here that I am extremely lucky to be working at a program implementer that is also a research organization, so we have an academic library and fabulous librarians. As noted above, the search focused on systematic reviews and evidence maps.

In all, 210 systematic reviews and umbrella reviews (i.e., systematic reviews of systematic reviews) passed the full-text screening stage. It was not possible to code that many studies in the limited period of time, so we screened again using criteria to identify those most relevant for our clients’ program design purposes. We ended up coding 12 umbrella reviews and 66 systematic reviews. We produced a report that provided multiple evidence maps and several tables based on the coding of the umbrella reviews and systematic reviews, and we provided summaries of findings for several subsets of reviews. We also coded all the primary studies conducted in Middle Eastern countries that appeared as included studies in the systematic reviews and conducted some analysis of those primary studies. We summarize the findings on the evidence base here and provide the summaries of selected reviews focusing on LMICs here.

What we learned about rapid reviewing
Sector makes a big difference.
Sector makes a big difference. We started the pilot process by letting potential clients determine the topic, and we ended up with a health topic. Health researchers have been conducting systematic reviews for many years, so a search just for systematic reviews still returns a large number of hits. That is both a bad thing and a good thing for a rapid review – a lot to screen and code, but a lot of possibly useful evidence. Also, because there are so many systematic reviews, there are few evidence maps in health. In fact, we didn’t find any for our topic. That is a disadvantage because evidence maps often capture primary studies from a much broader search and include more studies than a systematic review does.

Scope makes a difference too.
Scope makes a difference too. Our clients were primarily interested in evidence about programs designed to address the dual burden of malnutrition. Our initial “sloppy search” to better understand the scope suggested that we would not find a lot that combines malnutrition with overweight and obesity. Our clients already knew the evidence for malnutrition programming well, but did not know the evidence for overweight and obesity programs. So, they asked us to expand the scope of our review to include studies that looked only at obesity. It turns out there has been A LOT of research on obesity interventions, especially in high-income countries. In fact, there is a journal called Obesity Reviews filled with systematic reviews about obesity.

Due to the sector and the scope of our rapid review question, we ended up with a much higher number of systematic reviews, including several umbrella reviews, than the methodology anticipated. While we were able to squeeze a small subset of included primary studies into the six-week review, our rapid review was flooded with systematic reviews, especially systematic reviews of obesity interventions. Another disadvantage of the large number of included reviews was that we were not able to critically appraise all of them.

Systematic reviews rarely provide program implementation details.
Systematic reviews rarely provide program implementation details. While systematic reviews are great when it comes to learning what we can conclude based on existing evidence, they typically focus on describing the features of the research in the included studies rather than the features of the programs evaluated in those studies (especially in the journal article versions). For example, it was useful for our colleagues to see the evidence suggesting that family-based interventions are generally effective for reducing overweight and obesity, but the note one colleague wrote in red in the margin of the rapid review report is “What are the programs??” Not all primary studies have good program descriptions, but they are much more likely to than systematic reviews are. For this reason, rapid evidence reviews for program design should pay attention to defining sector and scope to allow for a review of primary studies during the limited review period.

Users want guidance, not menus.
Users want guidance, not menus. At the end of the six weeks, we provided our colleague clients with a 33-page report and links to extensive supplemental materials. We included tables that provided all the information we coded for all the included reviews. Our idea was that they could use these tables to identify which studies they wanted to read in detail. Our colleagues were overwhelmed. They greatly appreciated the summaries of review findings that we wrote, but they also wanted pointers to what they should go read themselves. They didn’t expect the report to tell them how to design their project, which I greatly appreciated, but they did want more guidance on which reviews and studies they should read.

Final comments

I have reported that we conducted this review with two people plus a search librarian from beginning (first meeting with the clients) to end (report and supplements delivered plus presentation prepared) in six weeks. Of course, resources are not just the number of people and the duration, but also hours, and we captured that information as well. Jacqueline, my intern, spent 132 hours over that period, and I spent 72, for a total of 204 hours. That includes hours we spent outside of the regular working day and on the weekends. (Covidence is great for screening titles and abstracts on your mobile phone when you can’t sleep!) I think it is fair to say that Jacqueline and I were pretty amazed at what we managed to produce during that period. I cannot reveal what those hours added up to in financial terms for FHI 360, but folks felt that it was a reasonable investment.

I came out of this pilot optimistic that a few people who understand systematic search and screening methods, combined with a good library, can produce rapid evidence reviews that are useful to practitioners and policy makers making programming decisions under time constraints.


Photo caption: Annette Brown presenting the results of the rapid evidence review to her colleagues
Photo credit: Corey White/FHI 360
