Gearing up to address attrition: Cohort designs with longitudinal data

As education researchers, we know that one of the greatest threats to our work is sample attrition – students dropping out of a study over time. Attrition plays havoc with our carefully designed studies, threatening internal validity and casting doubt on our results. To gear up for our evaluation of the Pennsylvania State Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP), we designed a three-pronged approach to handling sample attrition. We describe it here in case it is helpful to others.
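
The three prongs themselves are laid out in the full post. To make the stakes concrete, here is a minimal sketch of a standard differential-attrition check; the DataFrame and its columns ('treated', 'baseline_score', 'followed_up') are hypothetical placeholders, not variables from the GEAR UP evaluation.

    import pandas as pd
    from scipy import stats

    def attrition_check(df: pd.DataFrame) -> None:
        # Attrition rate overall and by study arm; differential attrition
        # across arms is what threatens internal validity.
        overall = 1 - df["followed_up"].mean()
        by_arm = 1 - df.groupby("treated")["followed_up"].mean()
        print(f"Overall attrition: {overall:.1%}")
        print("Attrition by arm:", by_arm.to_dict())

        # Do students who left differ at baseline from those who stayed?
        stayers = df.loc[df["followed_up"] == 1, "baseline_score"]
        leavers = df.loc[df["followed_up"] == 0, "baseline_score"]
        t, p = stats.ttest_ind(stayers, leavers, equal_var=False)
        print(f"Stayers vs. leavers at baseline: t = {t:.2f}, p = {p:.3f}")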

Learning about focus groups from an RCT

In my previous job at 3ie, I spent a lot of time telling researchers that a randomized controlled trial (RCT) with a few focus groups thrown in for good measure doesn’t count as a mixed-methods impact evaluation. In the course of repeatedly saying that focus groups are not enough, I must have developed an unconscious bias against them, because I was pleasantly surprised by what I learned from a recently published article by some of my FHI 360 colleagues. In their study, Guest et al. use an RCT to compare the performance of individual interviews with that of focus groups for collecting certain kinds of data.

Mining for development gold: Using survey data for program design

As global health resources grow scarcer and international crises more prevalent, it is more important than ever that we design and target development programs to maximize our investments. That means taking into account the complexity of the social, political and physical environments in which programs operate. Formative research can help us understand these environments for program design, but it is often skipped because budgetary, time or safety concerns constrain the collection of new data. What many overlook is the vast untapped potential of existing household survey data – rigorously collected, clean and freely available online for countries around the world. By mining existing survey data, we can conduct the formative research necessary to maximize development impact.
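
As a hedged sketch of what such mining can look like in practice, the snippet below computes a survey-weighted coverage estimate by region to suggest where a program might be targeted. The file name and column names ('region', 'has_bednet', 'svy_weight') are illustrative placeholders; real datasets such as the DHS come with their own codebooks and weights.

    import pandas as pd

    # Hypothetical household survey extract.
    df = pd.read_csv("household_survey.csv")

    # Survey-weighted coverage by region: weighted covered households
    # divided by the total survey weight in each region.
    covered = (df["has_bednet"] * df["svy_weight"]).groupby(df["region"]).sum()
    total = df.groupby("region")["svy_weight"].sum()
    coverage = (covered / total).sort_values()

    # Lowest-coverage regions first: candidate targets for the program.
    print(coverage.head(10))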

Should PEPFAR be renamed the “President’s Epidemiologic Plan for AIDS Relief”?

The effective use of data within PEPFAR has played a central role in getting us to the point where we can finally talk about controlling the HIV epidemic and creating an AIDS-free generation. PEPFAR’s transition from an emergency approach to one driven by real-time use of granular, site-level data to guide programmatic investments has contributed to achieving epidemic control. In view of this improved use of data, perhaps the “Emergency” in PEPFAR should now be changed to “Epidemiologic.”

Emojis convey language – why not a sampling lesson too?

To help folks build stronger sampling plans for their research and evaluation projects, we present a series of three sampling posts. This first blog post explains sampling terminology and describes the most common sampling approaches using emoji-themed graphics. Ready to get started? Sit down, hold on to your hats and glasses, and enjoy the sampling ride!
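
For readers who prefer code to emojis, here is a minimal sketch (mine, not the post’s) contrasting two of the common approaches the series covers – simple random sampling and proportionate stratified sampling – using a hypothetical sampling frame:

    import pandas as pd

    # Hypothetical sampling frame of 1,000 students across three schools.
    frame = pd.DataFrame({
        "student_id": range(1000),
        "school": ["A"] * 500 + ["B"] * 300 + ["C"] * 200,
    })

    # Simple random sample: every student has the same chance of selection.
    srs = frame.sample(n=100, random_state=42)

    # Proportionate stratified sample: draw 10% within each school, so each
    # stratum is represented exactly in proportion to its size.
    stratified = frame.groupby("school").sample(frac=0.10, random_state=42)

    print(srs["school"].value_counts())         # proportions vary by chance
    print(stratified["school"].value_counts())  # exactly 50 / 30 / 20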

Beyond research: Using science to transform women’s lives

It was a warm spring day in 2011, and eight of my colleagues were helping me celebrate the realization of a long-awaited policy change in Uganda by sipping tepid champagne out of kid-sized paper cups. A colleague asked me, amazed, “How did you guys pull this off? What’s your secret to changing national policy?” I offered up some words about patience, doggedness, and committed teamwork. My somewhat glib response is still true, but since then I’ve thought a lot about what it takes to get a policy changed.

How to find the journal that is just right

Goldilocks had it easy. She had only three chairs, three bowls of porridge and three beds to choose from, and the relevant features were pretty straightforward. It is not so easy to pick the right journal for publishing your research. First, there are hundreds of journals to choose from. Second, there are various features that differentiate them. And finally, some journals, like the three bears, are predatory and should be avoided. So how do you find the journal that is just right for your research?

Null results should produce answers, not excuses

I recently served on a Center for Global Development (CGD) panel to discuss a new study of the effects of community-based education on learning outcomes in Afghanistan (Burde, Middleton, and Samii 2016). This exemplary randomized evaluation finds some important positive results. But the authors do one thing in the study that almost all impact evaluation researchers do – where they have null results, they make what I call, for the sake of argument, excuses.
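
One way to turn a null result into an answer rather than an excuse – a hedged sketch of the general idea, not the method of Burde, Middleton, and Samii – is to compare the estimate’s confidence interval to a pre-specified smallest effect of interest. The numbers below are illustrative only.

    # Illustrative numbers: a 'null' estimate with its standard error, and a
    # pre-specified smallest effect size of substantive interest.
    estimate, se = 0.03, 0.05
    smallest_effect = 0.15

    lo, hi = estimate - 1.96 * se, estimate + 1.96 * se
    print(f"95% CI: [{lo:.2f}, {hi:.2f}]")

    if -smallest_effect < lo and hi < smallest_effect:
        # The data rule out every effect large enough to matter.
        print("Answer: evidence of no meaningful effect.")
    elif lo > smallest_effect or hi < -smallest_effect:
        print("Answer: evidence of a meaningful effect.")
    else:
        # The data are consistent with both zero and meaningful effects.
        print("No answer: the study cannot distinguish zero from a meaningful effect.")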