What I learned about evidence translation and policy influence: Reflections from the Global Evidence Summit 2024

Introduction

This September, I had the opportunity to attend the Global Evidence Summit (GES) in Prague, Czechia (10–13 September 2024). The summit is “a multi-disciplinary and cross-cultural platform, which provides the opportunity for delegates and speakers to exchange ideas about how best to produce, summarise and disseminate evidence to inform policy and practice, and using that evidence to improve people’s lives across the world.”

Using evidence to improve people’s lives is an area of particular interest for me, having dedicated much of my career to translating evidence into decision-making. Previously, colleagues and I have written about the conference, discussing the role of artificial intelligence in evidence synthesis, evidence in a post-truth world, and evidence networks.

In this post, I focus on evidence translation, highlighting key takeaways, principles, and tools from the GES 2024.

Take a hard look at what doesn’t work in evidence translation

Perhaps my favorite plenary session of the entire conference was “Advocating for greater evidence communication and use of evidence,” chaired by Dr. Nasreen Jessani and Verónica Osario Calderón, in which panelists held a “fireside chat”-style conversation about what works, and what doesn’t, in evidence translation. The panelists challenged the researchers in the room to take a more human approach to turning research into action.

“The traditional approach of ‘do my research; publish my paper; spend 1 month developing a policy brief; walk into someone’s office and share it’ just has never worked,” shared Dr. Devaki Nambiar. Rather, as researchers, we should recognize that we are not the only “star in the sky” for a policymaker; it is up to us to show how our research fits among the numerous other topics filling their time every day. Further, Dr. Nambiar challenged the audience to recognize evidence gaps and be honest about the questions we cannot answer. “We might be able to answer lots of questions, but we sometimes can’t answer the question that is asked of us – which is the most important question,” she said.

Dr. Carla Saenz of the Pan American Health Organization (PAHO) spoke to this theme of evidence gaps in a session on empirical versus ethical questions. Evidence can answer empirical questions, like “are people racist?”, while ethics answers ethical questions, like “should people be racist?” When evidence is lacking, we can rely on ethics to guide decision-making, but we must be transparent about the evidence gap. She pointed to evidence communication during the COVID-19 pandemic as a good example of this:

“To some extent COVID made us think more about these issues; we all struggled with the huge amount of uncertainty early in the pandemic. At first there was very little evidence, so the role of values in the decision-making process was much more important. We very often conflated value-judgement statements versus evidence, and we presented things as a package: ethics-based recommendations and evidence-based recommendations…” As evidence accumulated, public recommendations changed, and “this change of recommendations led the public to distrust the evidence.”

Speakers across multiple sessions agreed that translating evidence for decision-making cannot be a formulaic, end-of-the-process activity. They encouraged researchers to recognize the broader constellation of issues decision-makers face daily and to situate their work within that broader universe of evidence. Researchers should also communicate clearly about both the strength and the limitations of their evidence, since conflating ethics-based and evidence-based recommendations can erode trust.

Start with the person, then bring the evidence

If the traditional model doesn’t work, how then should we approach evidence translation?

Laura Boeira, executive director of Instituto Veredas in Brazil, suggested starting with the person. “My career in evidence translation started by asking a fellow civil servant over a beer ‘how can I help relieve your burden?’” Decision-makers, just like researchers, have high demands on their time and limited resources to complete their work. Too often, suggested Boeira, we arrive with evidence of an important new issue that rightly merits attention, but offer no time-saving ways to begin addressing it. Evidence translation should not just be about highlighting problems, she explained.

Rather, researchers should start with a relationship and view evidence translation as a collaborative effort.

In another session, “Practical reflections of embedded co-production in South Africa,” Promise Nduku of the Pan African Collective for Evidence cautioned that “sometimes researchers come… on their high horses. As if we know the people, represent better the voice of the people, and this makes co-production difficult… Researchers assume technical expertise and thus become ‘custodians of rigor’. This is a risk to policy colleagues.”

Rather than focusing on the “communication of evidence,” presenters encouraged the “exchange of knowledge”: recognize the capacity and knowledge that already exist in the institution you are speaking to, and have a conversation to understand the deeper meaning of research findings within that context. “Co-production is not romantic or idealized… It needs to be managed—even engineered—into existence,” explained Nduku.

Lastly, one practical tool Dr. Nambiar suggested was giving decision-makers a public stage to speak on an issue. “This becomes a learning moment for them; we as scientists have a convening power, so we can use this as a strategy for sharing knowledge: giving them the stage and then letting them ask us questions.”

Presenters across multiple sessions stressed that evidence translation is most successful when it takes a human-first approach. As researchers, we should start by understanding the person—their context, workload, and available resources—and engage them with honest, transparent, evidence-based recommendations.

What about misinformation?

During the audience Q&A of the plenary “Advocating for greater evidence communication and use of evidence,” and in multiple breakout sessions, participants raised the question of combating misinformation. “If we start with relationships and try to lighten decision-makers’ workloads, isn’t that the same tactic that disinformation and misinformation peddlers use?” was a common question.

Panelists agreed: malicious actors pushing misinformation often use personal channels to get high-level buy-in. This is why we need to be transparent enough to convey which recommendations are supported by evidence versus ethics (according to Dr. Saenz) and rigorous enough to be trustworthy (according to Dr. Holger Schünemann, Dr. Isabel Mercier, and Dr. Ines Rojas, in three separate sessions). Furthermore, collaboration and co-design in evidence creation inherently build trust in the results, as Nduku explained.

Evidence translation tools and when to use them

Evidence translation is about more than conversations, of course. Presentations also included demonstrations of tools, dashboards, and scorecards. These tools are useful, especially in conjunction with the lessons discussed above.

Ladislav Dusek, director of Czechia’s Institute of Health Information and Statistics, demonstrated an eHealth dashboard for policymaking that revealed significant subnational variation in key health trends across the country. The dashboard, combined with strong relationships with decision-makers in the Ministry of Health, helped the government prioritize different health services in different communities. The tool was effective because of the demand-driven, relational nature of its development, according to Dr. Dusek. Similarly, Professor John Lavis, director of the McMaster Health Forum, told researchers at GES that “now is the time for rapid, collaborative decision-making.” Demand-side engagement early in the research process ensures not only the relevance of the evidence but also its translation into a tool or format suitable for the end user.

Presenters also stressed the importance of evidence translation tools in the age of AI. The growth of AI is opening many doors in research synthesis, and the rapid translation of research into actionable evidence requires a final step: presenting that evidence through easy-to-navigate, user-friendly tools and resources. “We are at the cusp of a dramatic change in how we use evidence to address societal challenges,” explained Dr. Lavis, who stressed that building infrastructure to quickly translate the plethora of results coming from science is increasingly important.

As researchers, we should invest in evidence-translation tools that are demand-driven and adaptable enough to keep pace with the growing body of research synthesis.

Conclusion

The GES 2024 truly inspired me to rethink how I approach evidence translation as a researcher. Presenters challenged the traditional model of “research, write, publish, create a 2-pager.” Rather, treating decision-makers as humans first and recognizing the broader context of our research findings can lead to more successful evidence uptake. Tools for translating research are important, especially in the context of AI, the growing quantity of research synthesis, and the rise of misinformation and social distrust.

Perhaps the most engaging presentation on this topic came from Okwen Patrick of eBASE International, whose booming voice told a “tale of two rivers”:

“Once upon a time there was a river. It was a river of forest plots, regression coefficients, chi squares, and violin plots. And as the people in the community tried to draw from the river, they fell in and were lost. The people saw the river, saw it was full, and saw it was not for them.”

Research is meant to be rigorous, and it is critical that we retain this level of scientific integrity in the age of misinformation. However, the translation of evidence into decision-making is not a purely clinical, scientific process; it is interpersonal and human-centered. As Dr. Patrick concluded: “[Evidence translation] is a combination of science and art, and when done properly it brings people together.”
