Riddle me this: How many interviews (or focus groups) are enough?


The first two posts in this series describe commonly used research sampling strategies and provide some guidance on how to choose from this range of sampling methods. Here we delve further into the sampling world and address sample sizes for qualitative research and evaluation projects. Specifically, we address the often-asked question: How many in-depth interviews/focus groups do I need to conduct for my study?

Why should practitioners publish their research in journals?


My team recently encouraged a colleague who is collecting and analyzing data in one of our projects to submit the analysis and results to a journal. His response was “Why?” He argued that his audience does not read journals, that there are other mechanisms for getting feedback, and that, as a technical person rather than a researcher, he doesn’t need citations. My colleague believes that publishing in other ways is equally valued. This post is my attempt to convince him, and you, that practitioners who conduct research should publish in journals.

Academia takes on global health


The Consortium of Universities for Global Health (CUGH) held its 8th Annual Conference in Washington, DC this week. More than 1,700 people from every corner of the globe gathered for three days to explore how the world’s academic institutions can best contribute to improving global health. This year’s meeting was particularly interesting given the contrast between current prospects for financial support for global health and the trajectory of that support over the last 15 years. That contrast made several of the key topics discussed at CUGH even more salient to me.

7 takeaways from changes in US education grant programs


I recently had the opportunity to attend a workshop on the U.S. Department of Education’s (ED) new Education Innovation and Research (EIR) grant competition. EIR is the successor to the Investing in Innovation (i3) grant program, which invested approximately $1.4 billion through seven competitions from 2010 to 2016 to develop, validate, and scale up evidence-based programs in education. Like i3, EIR uses a tiered award structure to support programs at various levels of development. This post summarizes my seven takeaways from the workshop, which highlight the main changes in the transition from i3 to EIR.

Evaluation ethics: Getting it right


My personal interest in evaluation ethics goes back to my days at MDRC, where I was responsible for developing survey questions and accompanying protocols to capture domestic violence among mothers who participated in a DHHS-funded welfare-to-work program called JOBS (Job Opportunities and Basic Skills Training). MDRC was about three and a half years into a five-year evaluation of JOBS when our program officer asked us to include questions specifically about domestic violence in the next wave of our survey. As I recall, no one wanted to touch this – too sensitive, too volatile, too many ethical hoops to jump through – so, as the lowest rung on the ladder at the time, I was given the task. With that, I entered the world of evaluation ethics, where I quickly learned the challenges of getting it right and the consequences of getting it wrong.

A pathway for sampling success


The credibility and usefulness of our research and evaluation findings are inextricably connected to how we select participants. And, let’s admit it, for many of us, the process of choosing a sampling strategy can be drier than a vermouth-free martini, without any of the fun. So, rather than droning on about the merits of this or that sampling strategy, we present a relatively simple sampling decision tree.

How many scientific facts are there about science, technology, and innovation for development?


There is a lot of excitement these days about science, technology, and innovation and the potential for these activities to contribute to economic and social development globally. The flurry of activity raises the question: how much of this excitement is supported by scientific facts? To help answer this question, the US Global Development Lab at USAID commissioned a project to create and populate a map of the evidence base for science, technology, innovation, and partnerships (STIP). As part of the project, scoping research was conducted to identify not just where there are evidence clusters and gaps, but also where stakeholders’ demand for new evidence is greatest. In the recently published scoping paper, my co-authors and I analyze the data in the map together with the information from stakeholders to recommend priorities for investment in new research on STIP. While there is good evidence out there, new research is necessary for strategies and programming to fully benefit from scientific facts. In this post, I briefly describe the research we conducted, summarize a few of the many findings, and list some of our recommendations.
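To make the idea of crossing evidence supply against evidence demand concrete, here is a minimal, hypothetical Python sketch; the interventions, outcomes, study counts, and demand scores are invented for illustration and are not drawn from the actual STIP map.

```python
import pandas as pd

# Toy "evidence map" input: one row per study, tagged by intervention and outcome.
# All entries below are invented for illustration only.
studies = pd.DataFrame({
    "intervention": ["mobile technology", "mobile technology",
                     "innovation prize", "research partnership"],
    "outcome": ["health", "education", "economic growth", "health"],
})

# Count studies in each intervention-by-outcome cell (the evidence clusters).
evidence = pd.crosstab(studies["intervention"], studies["outcome"])

# Hypothetical stakeholder demand for new evidence, scored 1 (low) to 3 (high).
demand = pd.DataFrame(
    {"health": [3, 1, 2], "education": [2, 3, 1], "economic growth": [1, 2, 3]},
    index=["mobile technology", "innovation prize", "research partnership"],
)

# Priority gaps: cells where demand is high but existing evidence is thin.
gaps = (demand >= 3) & (evidence.reindex_like(demand).fillna(0) <= 1)
print(gaps)
```

The real scoping exercise is of course far richer than this toy version, but the basic logic of comparing where evidence exists against where stakeholders most want new evidence is the same.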

Gearing up to address attrition: Cohort designs with longitudinal data


As education researchers, we know that one of the greatest threats to our work is sample attrition – students dropping out of a study over time. Attrition plays havoc with our carefully designed studies by threatening internal validity and making our results uncertain. To gear up for our evaluation of the Pennsylvania State Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP), we designed a three-pronged approach to handling sample attrition. We describe it here in case it can be helpful to others.
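Whatever the specific design, the starting point is knowing how much attrition you actually have. Below is a minimal Python sketch, using invented data and hypothetical column names (it is not taken from the GEAR UP evaluation), of tracking retention by cohort and survey wave.

```python
import pandas as pd

# Toy longitudinal data: one row per student per completed survey wave.
responses = pd.DataFrame({
    "student_id": [1, 1, 1, 2, 2, 3, 3, 3, 4, 5, 5],
    "cohort":     ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
    "wave":       [1, 2, 3, 1, 2, 1, 2, 3, 1, 1, 2],
})

# Number of students responding in each cohort at each wave.
respondents = responses.groupby(["cohort", "wave"])["student_id"].nunique()

# Baseline (wave 1) sample size per cohort.
baseline = responses[responses["wave"] == 1].groupby("cohort")["student_id"].nunique()

# Retention rate per cohort and wave; attrition at each wave is 1 minus retention.
retention = respondents.div(baseline, level="cohort")
print(retention)
```

Tracking the same rates separately by study group shows whether attrition is differential, which is what threatens internal validity rather than just statistical power.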

Learning about focus groups from an RCT


In my previous job at 3ie, I spent a lot of time telling researchers that a randomized controlled trial (RCT) with a few focus groups thrown in for good measure doesn’t count as a mixed methods impact evaluation. In the course of repeatedly saying that focus groups are not enough, I must have developed an unconscious bias against them, because I was pleasantly surprised by what I learned from a recently published article written by some of my FHI 360 colleagues. In their study, Guest et al. use an RCT to compare the performance of individual interviews with that of focus groups for collecting certain kinds of data.

Mining for development gold: Using survey data for program design


As global health resources become scarcer and international crises more frequent, it is more important than ever that we design and target development programs to maximize our investments. The complexity of the relevant social, political, and physical environments must be taken into consideration. Formative research can help us understand these environments for program design, but it is often skipped because budgetary, time, or safety concerns constrain the collection of new data. What many overlook is the vast untapped potential of existing household survey data that are rigorously collected, clean, and freely available online for countries around the world. By mining existing survey data, we can conduct the formative research necessary to maximize development impact.
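As a rough illustration of what such mining might look like, the Python sketch below computes a weighted indicator by region from an already-collected household survey; the file name and variable names are hypothetical placeholders, not the actual variables of any particular survey.

```python
import pandas as pd

# Hypothetical, previously collected household survey (e.g., a public-use file).
survey = pd.read_csv("household_survey.csv")

def weighted_mean(group: pd.DataFrame, value_col: str, weight_col: str = "sample_weight") -> float:
    """Survey-weighted mean of an indicator within a group."""
    return (group[value_col] * group[weight_col]).sum() / group[weight_col].sum()

# Estimate coverage of an indicator (here, a placeholder 0/1 variable for access
# to an improved water source) by region, using the survey's sampling weights.
coverage_by_region = (
    survey.groupby("region")
          .apply(weighted_mean, value_col="improved_water")
          .sort_values()
)

# Regions with the lowest estimated coverage are candidates for program targeting.
print(coverage_by_region)
```

No new data collection is needed; the program design questions simply determine which existing variables are worth mining.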

Should PEPFAR be renamed the “President’s Epidemiologic Plan for AIDS Relief”?


The effective use of data within PEPFAR has played a central role in getting us to the point where we can finally talk about controlling the HIV epidemic and creating an AIDS-free generation. PEPFAR’s transition from an emergency approach to one driven by real-time use of granular, site-level data to guide programmatic investments has brought epidemic control within reach. In view of this improved use of data, perhaps the “Emergency” in PEPFAR should now be changed to “Epidemiologic.”

Emojis convey language, so why not a sampling lesson too?


To help folks build stronger sampling plans for their research and evaluation projects, we present a series of three sampling posts. This first blog post explains sampling terminology and describes the most common sampling approaches using emoji-themed graphics. Ready to get started? Sit down, hold on to your hats and glasses, and enjoy the sampling ride!

Beyond research: Using science to transform women’s lives


It was a warm spring day in 2011, and eight of my colleagues were helping me celebrate the realization of a long-awaited policy change in Uganda by sipping tepid champagne out of kid-sized paper cups. A colleague asked me, amazed, “How did you guys pull this off? What’s your secret to changing national policy?” I offered up some words about patience, doggedness, and committed teamwork. My somewhat glib response is still true, but since then I’ve thought a lot about what it takes to get a policy changed.

How to find the journal that is just right


Goldilocks had it easy. She only had three chairs, three bowls of porridge and three beds to choose from, and the relevant features were pretty straightforward. It is not so easy to pick the right journal for publishing your research. First, there are hundreds of journals to choose from. Second, there are various features that differentiate them. And finally, some journals, like the three bears, are predatory and should be avoided. So how do you find the journal that is just right for your research?

Null results should produce answers, not excuses


I recently served on a Center for Global Development (CGD) panel discussing a new study of the effects of community-based education on learning outcomes in Afghanistan (Burde, Middleton, and Samii 2016). This exemplary randomized evaluation finds some important positive results. But the authors do one thing that almost all impact evaluation researchers do: where they have null results, they make what I will call, for the sake of argument, excuses.