Evidence in a post-truth world from the Global Evidence Summit

 

Recently I wrote a post about the Global Evidence Summit, which I attended in September. I commented that one thing I really liked about the conference was that it had some great plenary sessions. In this post, I highlight some of the key ideas that came from the fourth plenary of the conference, titled “Evidence in a Post-Truth World.”

The plenary started with Trish Greenhalgh, who is a Professor of Primary Care Health Sciences at the University of Oxford. Greenhalgh’s bio for the conference lists three areas her work covers, one of which is the complex links (philosophical and empirical) between research, policy and practice. Her speech focused on these philosophical links. She started with Aristotle’s On Rhetoric, in which he lays out three means of persuasion for an orator. The first is logos, which we can think of as evidence. The second is ethos, which we can think of as credibility, and the third is pathos, which we can think of as emotion. Greenhalgh suggested that the post-truth world is one where logos – evidence – is no longer a useful tool of persuasion, leaving us only with the other two.

What I learned about evidence networks at the Global Evidence Summit

 

A couple of weeks ago, I was fortunate to attend the Global Evidence Summit (GES) in Cape Town, South Africa. GES was billed as the first conference of its kind, jointly organized by the Cochrane Collaboration, the Campbell Collaboration, and several other groups to focus on evidence-based policy making across sectors. Those of us who attended the What Works Global Summit in London last September considered GES the second conference of its kind, and we were excited to reconnect with each other this year in Cape Town.

The conference brought together researchers and policy analysts in the fields of health, education, and international development to explore that long and often tortuous path between a single study and a policy or program that is evidence based. Sessions covered topics such as evidence mapping, systematic reviews, rapid reviews, standard and guideline setting, big data, and policy engagement. In this post, I report on what I learned about evidence networks in the first day’s plenary. For readers interested in learning more about the conference itself, I provide a quick review at the end of the post.

FHI 360’s R&E Search for Evidence quarterly highlights

 

The Research and Evaluation Strategic Initiative team published 14 posts in our blog, R&E Search for Evidence, during the last quarter. For those not familiar with our blog, it features FHI 360 thought leaders who write about research and evaluation methodology and practice, compelling evidence for development programming, and new findings from recent journal publications. We have published 31 original posts to date! In this post, I will summarize our most recent posts and highlight some of my favorites.

Big data and data analytics: I do not think it means what you think it means

 

With so many players latching on to the idea of big data these days, it is inconceivable that everyone has the same definition in mind. I’ve heard folks describe big data as just being the combination of existing data sets, while others don’t consider data to be big until there are hundreds of thousands of observations. I’ve even seen the idea that big data just means the increasing availability of open data. There is a similar challenge with data analytics. On one end of the spectrum, data analytics is just data analysis, but with a cooler name. On the other, data analytics involves big data (really big data) and machine learning. I needed to get a grasp on the various terms and concepts for my work, so I thought I’d share some of what I learned with you. Prepare to learn.
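
To make the two ends of that spectrum concrete, here is a minimal, purely illustrative Python sketch; the tiny data set, column names, and model choice are invented for this illustration and are not drawn from the post. One end is plain descriptive analysis, the other a toy machine-learning model.

```python
# Illustrative only: contrasting "data analysis with a cooler name" with a
# toy machine-learning workflow. The data and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A small, made-up data set of program participants
df = pd.DataFrame({
    "age":      [23, 35, 45, 52, 29, 61, 40, 33],
    "visits":   [1, 4, 2, 6, 3, 8, 5, 2],
    "enrolled": [0, 1, 0, 1, 0, 1, 1, 0],
})

# One end of the spectrum: simple descriptive statistics
print(df.groupby("enrolled")[["age", "visits"]].mean())

# The other end: fit a (toy) predictive model and check held-out accuracy
X_train, X_test, y_train, y_test = train_test_split(
    df[["age", "visits"]], df["enrolled"],
    test_size=0.25, random_state=0, stratify=df["enrolled"]
)
model = LogisticRegression().fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
```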

Why should practitioners publish their research in journals?

 

My team recently encouraged a colleague who is collecting and analyzing data in one of our projects to submit the analysis and results to a journal. His response was “why?” He argued that his audience is not reading journals, that there are other mechanisms for getting feedback, and that since he is a technical person and not a researcher, he doesn’t need citations. My colleague believes publishing in other ways is equally valued. This post is my attempt to convince him, and you, that practitioners who are conducting research should publish in journals.

How many scientific facts are there about science, technology, and innovation for development?

 

There is a lot of excitement these days about science, technology, and innovation and the potential for these activities to contribute to economic and social development globally. The flurry of activity raises the question: how much of this excitement is supported by scientific facts? To help answer this question, the US Global Development Lab at USAID commissioned a project to create and populate a map of the evidence base for science, technology, innovation, and partnerships (STIP). As part of the project, scoping research was conducted to identify not just where there are evidence clusters and gaps, but also where the demand for new evidence by stakeholders is the greatest. In the recently published scoping paper, my co-authors and I analyze the data in the map together with the information from the stakeholders to recommend priorities for investment in new research on STIP. While there is good evidence out there, new research is necessary for strategies and programming to fully benefit from scientific facts. In this post, I briefly describe the research we conducted, summarize a few of the many findings, and list some of our recommendations.

Learning about focus groups from an RCT

 

In my previous job at 3ie, I spent a lot of time telling researchers that a randomized controlled trial (RCT) with a few focus groups thrown in for good measure doesn’t count as a mixed methods impact evaluation. In the course of repeatedly saying that focus groups are not enough, I must have developed an unconscious bias against focus groups, because I was pleasantly surprised by what I learned from a recently published article written by some of my FHI 360 colleagues. In their study, Guest et al. use an RCT to compare the performance of individual interviews with that of focus groups for collecting certain data.

Null results should produce answers, not excuses

 

I recently served on a Center for Global Development (CGD) panel to discuss a new study of the effects of community-based education on learning outcomes in Afghanistan (Burde, Middleton, and Samii 2016). This exemplary randomized evaluation finds some important positive results. But the authors do one thing in the study that almost all impact evaluation researchers do – where they have null results, they make what I call, for the sake of argument, excuses.