From the Red Book to the Blue Book: Advancing HIV surveillance among key populations

 

The World Health Organization (WHO) recently published new global guidelines on how to conduct biobehavioral surveys related to HIV infection in a document known as the Blue Book. The Blue Book replaces the previous Red Book of guidelines for such surveys. The updated guidance keeps the global health community abreast of the evolving HIV epidemic, in which an estimated 37 million people are currently living with HIV. Biobehavioral surveys provide population-level estimates of the burden of HIV disease and HIV-related risk factors, and they allow estimation of the coverage of prevention and treatment services for key populations at increased risk for HIV. Advances in available data and changes in the epidemic rendered the survey tools and guidelines in the Red Book out of date. In this blog post, we highlight how the new Blue Book addresses these critical gaps to deliver a manual better suited to the era of ending AIDS.

5 lessons for using youth video diaries for monitoring and evaluation

 

As part of a program’s monitoring and evaluation (M&E), how do you measure the process of change that young people undergo as they engage with that program? The Sharekna project in Tunisia uses youth video diaries to gain insight into the transformations young people experience as they develop resilience against external stresses like violent extremism. In this blog post, I provide five lessons from our Sharekna project to guide future M&E and research activities that use, or are considering using, youth video diaries.

Exploring the parameters of “it depends” for estimating the rate of data saturation in qualitative inquiry

 

In an earlier blog post on sample sizes for qualitative inquiry, we discussed the concept of data saturation – the point at which no new information or themes are observed in the data – and how researchers and evaluators often use it as a guideline when designing a study.

In the same post, we provided empirical data from several methodological studies as a starting point for sample size recommendations. We also qualified our recommendations with the important observation that each research and evaluation context is unique, and that the speed at which data saturation is reached depends on a number of factors. In this post, we explore this “it depends” qualification a little further by outlining five research parameters that most commonly affect how quickly (or slowly) data saturation is achieved in qualitative inquiry.
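To make the saturation concept concrete, here is a minimal Python sketch of one way a stopping rule could be operationalized, assuming each interview has already been coded into a set of theme labels. The function name and the window-based rule are our own illustrative choices, not a standard from the methodological literature.

```python
# Illustrative stopping rule for data saturation. Assumes each interview has
# already been coded into a set of theme labels; the three-interview window
# below is a hypothetical threshold, not an established standard.

def saturation_point(coded_interviews, window=3):
    """Return the 1-based index of the last interview that contributed a new
    theme, once `window` consecutive interviews add nothing new; return None
    if saturation is never reached in this sample."""
    seen = set()
    consecutive_without_new = 0
    for i, themes in enumerate(coded_interviews, start=1):
        new_themes = set(themes) - seen
        seen.update(new_themes)
        consecutive_without_new = 0 if new_themes else consecutive_without_new + 1
        if consecutive_without_new == window:
            return i - window  # last interview that added a new theme
    return None

# Hypothetical coded interviews: each set holds the themes observed in one interview.
coded = [
    {"access", "cost"},
    {"cost", "stigma"},
    {"stigma"},
    {"access"},
    {"cost", "access"},
]
print(saturation_point(coded))  # -> 2: no new themes appear after interview 2
```

In practice, of course, where such a rule fires depends on exactly the kinds of research parameters this post outlines.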

Don’t spin the bottle, please: Challenges in the implementation of probability sampling designs in the field, Part II

 

In Part I, I reviewed how the principles of probability sampling apply to household sampling for valid (inferential) statistical analysis and focused on the challenges faced in selecting enumeration areas (EAs). When we want a probability sample of households, multi-stage sampling approaches are usually used: EAs are selected in the first stage of sampling, and households are then selected from the sampled EAs in the second stage. (Additional stages may be added if deemed necessary.) In this post, I move on to the selection of households within the sampled EAs, focusing on sampling principles, challenges, approaches, and recommendations.
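As a rough illustration of the two-stage logic described above, here is a minimal Python sketch, assuming a sampling frame of EAs with known household counts. The frame itself, the probability-proportional-to-size (PPS) selection in stage one, and the take of 10 households per EA are illustrative assumptions, not recommendations from the post.

```python
import random

# Minimal sketch of a two-stage household sample. The EA frame, the number of
# EAs selected, and the 10-household take per EA are all hypothetical.

random.seed(42)

ea_frame = {"EA-01": 120, "EA-02": 340, "EA-03": 90, "EA-04": 210}  # EA -> household count

def pps_without_replacement(frame, k):
    """Sequentially draw k distinct EAs with probability proportional to
    their household counts (a simple approximation of PPS sampling)."""
    remaining = dict(frame)
    chosen = []
    for _ in range(k):
        units, sizes = zip(*remaining.items())
        pick = random.choices(units, weights=sizes, k=1)[0]
        chosen.append(pick)
        del remaining[pick]  # drop the selected EA so it cannot repeat
    return chosen

# Stage 1: select EAs with probability proportional to size.
sampled_eas = pps_without_replacement(ea_frame, k=2)

# Stage 2: simple random sample of household IDs within each sampled EA.
sample = {ea: sorted(random.sample(range(1, ea_frame[ea] + 1), 10))
          for ea in sampled_eas}

for ea, households in sample.items():
    print(ea, households)
```

A real survey would, of course, work from a full household listing within each EA rather than numeric IDs, and would apply the design weights implied by each stage’s selection probabilities.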

Sample size is not king: Challenges in the implementation of probability sampling designs in the field, Part I

 

So you want a probability sample of households to measure their level of economic vulnerability; or to evaluate the HIV knowledge, attitudes, and risk behaviors of teenagers; or to understand how people use health services such as antenatal care; or to estimate the prevalence of stunting or the prevalence and incidence of HIV. You know that probability samples are needed for valid (inferential) statistical analysis. But you may ask: What does it take to obtain a rigorous probability sample?

Turning lemons into lemonade, and then drinking it: Rigorous evaluation under challenging conditions

 

In early 2014, USAID came to the ASPIRES project with a challenge: design and implement a prospective quantitative evaluation of a USAID-funded intervention in Mozambique. The intervention combined social and economic programming for girls at high risk of HIV infection. As a research-heavy USAID project focused on the integration of household economic strengthening and HIV prevention/treatment, ASPIRES was prepared for the task.

The challenges, however, came in the particulars of the evaluation scenario. The research team set its mind to identifying the best possible design to fulfill the request. That is to say, we sought out a recipe for lemonade amidst these somewhat lemony conditions.

Riddle me this: How many interviews (or focus groups) are enough?

 

The first two posts in this series describe commonly used research sampling strategies and provide some guidance on how to choose from this range of sampling methods. Here we delve further into the sampling world and address sample sizes for qualitative research and evaluation projects. Specifically, we address the often-asked question: How many in-depth interviews/focus groups do I need to conduct for my study?

A pathway for sampling success

 

The credibility and usefulness of our research and evaluation findings are inextricably connected to how we select participants. And, let’s admit it, for many of us the process of choosing a sampling strategy can be drier than a vermouth-free martini, without any of the fun. So, rather than droning on about the merits of this or that sampling strategy, we present a relatively simple sampling decision tree.

Gearing up to address attrition: Cohort designs with longitudinal data

 

As education researchers, we know that one of the greatest threats to our work is sample attrition – students dropping out of a study over time. Attrition plays havoc with our carefully designed studies by threatening internal validity and making our results uncertain. To gear up for our evaluation of the Pennsylvania State Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP), we designed a three-pronged approach to handling sample attrition. We describe it here in case it can be helpful to others.

Learning about focus groups from an RCT

 

In my previous job at 3ie, I spent a lot of time telling researchers that a randomized controlled trial (RCT) with a few focus groups thrown in for good measure doesn’t count as a mixed methods impact evaluation. In the course of repeatedly saying that focus groups are not enough, I must have developed an unconscious bias against them, because I was pleasantly surprised by what I learned from a recently published article by some of my FHI 360 colleagues. In their study, Guest et al. use an RCT to compare the performance of individual interviews and focus groups for collecting certain data.

Emojis convey language, why not a sampling lesson too?

 

To help folks build stronger sampling plans for their research and evaluation projects, we present a series of three sampling posts. This first blog post explains sampling terminology and describes the most common sampling approaches using emoji-themed graphics. Ready to get started? Sit down, hold on to your hats and glasses, and enjoy the sampling ride!