Searching for social norms measures related to modern contraceptive use

Social norms are a hot topic in development. While it seems intuitive that social norms can play a key role in influencing people’s behavior (in both negative and positive ways), the jury is still out on the best way for programs to address norms. We do know that in order to provide evidence on the power of norms change to improve behavior, we need effective ways to measure and monitor norms.

In 2017, as part of the USAID-funded Passages Project, we set out to systematically assess the empirical evidence on the relationship between social norms and use of modern family planning methods. You can read the results of our literature review in a new article published in Studies in Family Planning. In this blog post, we detail the methodology of our review and then provide recommendations for bringing greater consistency and comparability to social norms measures.

5 features of a monitoring, evaluation and learning system geared towards equity in education

A great accomplishment arising from the era following the 1990 World Declaration on Education for All in Jomtien, Thailand, is recognition of the gender gap in education and the mandate for sex-disaggregated reporting from funders and multilateral agencies. Data on the dramatic access and outcome disparities between male and female learners created demand for programming focused on gender inequity. Twenty-seven years after Jomtien, there is a substantial amount of evidence on solutions that build gender equity in education, and on how education systems need to adapt to help girls and boys overcome gender-related institutional barriers.

The Education Equity Research Initiative, a collaborative partnership led by FHI 360 and Save the Children, seeks to create the same dynamic around other aspects of inequity in education – be it poverty, ethnic or racial disadvantage, migration status, or disability. As a community, we create frameworks, modules, and tools, so that little by little the reams of data that get produced include a consistent set of questions around equity of program participation and equity of outcomes.

My previous blog post speaks to the need to be deliberate in building a monitoring, evaluation and learning system that generates the data and analysis to help answer the question: are we improving education equity through our programming and policy? But how do we operationalize equity in education in the context of education development programming? In Mainstreaming Equity in Education, a paper commissioned by the International Education Funders Group, our collaborative begins by recognizing that the essential purpose of an equity-oriented monitoring, evaluation and learning (MEL) system around a program or set of interventions is not just to produce data on scope and coverage, but to build a deeper understanding of who benefits and who doesn’t, and to offer actionable information on what to do about it. Here I outline five features that describe such a learning system.

Beyond DALYs: New measures for a new age of development

Integrated development is an approach that employs the design and delivery of programs across sectors to produce an amplified, lasting impact on people’s lives. Integrated programs are based on the premise that the interaction between interventions from multiple sectors will generate benefits beyond those of a stand-alone intervention. As human development interventions take this more holistic approach, funders and program implementers alike recognize the importance of understanding the impact of multi-sector interventions. While we can continue to use sector-specific measures of impact – for instance, Disability-Adjusted Life Years (DALYs) – this creates an apples-and-oranges problem if one wishes to compare across interventions. This raises the question: can we move toward a single performance metric to assess the effectiveness and cost-effectiveness of integrated programs? Here at FHI 360, we are attempting to answer this question by developing a new measurement tool – MIDAS (Measuring the Impact of Development Across Sectors).
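As a reminder of what a sector-specific metric involves, here is a back-of-the-envelope sketch of the basic DALY arithmetic, simplified to omit discounting and age weighting; the numbers in the example are hypothetical:

```python
# Basic DALY arithmetic: DALYs = YLL + YLD
# YLL (years of life lost) = deaths x standard life expectancy at age of death
# YLD (years lived with disability) = incident cases x disability weight x duration
# Simplified sketch: full DALY calculations may add discounting and age weighting.

def dalys(deaths, years_lost_per_death, cases, disability_weight, years_with_condition):
    yll = deaths * years_lost_per_death
    yld = cases * disability_weight * years_with_condition
    return yll + yld

# Hypothetical example: 100 deaths losing 30 years each, plus 1,000 cases
# at disability weight 0.2 lasting 5 years on average
print(dalys(100, 30, 1000, 0.2, 5))  # -> 4000.0
```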

In this post, we discuss the need for better measures, describe our conceptual framework, and then present some of the key components of the tool. We conclude by demonstrating how the tool worked when piloted in a current FHI 360 project.

From the Red Book to the Blue Book: Advancing HIV surveillance among key populations

The World Health Organization (WHO) recently published new global guidelines on how to conduct biobehavioral surveys related to HIV infection in a document known as the Blue Book. The Blue Book replaces the previous Red Book of guidelines for such surveys. The updated guidance keeps the global health community abreast of the evolving HIV epidemic; an estimated 37 million people are currently living with HIV. Biobehavioral surveys provide population-level estimates of the burden of HIV disease and HIV-related risk factors, and they allow estimation of the coverage of prevention and treatment services for key populations at increased risk for HIV. Advances in available data and changes in the epidemic rendered the survey tools and guidelines in the Red Book out-of-date. In this blog post, we highlight how the new Blue Book addresses these critical gaps to deliver a manual better suited to the era of ending AIDS.

How fast is rapid? What I learned about rapid reviews at Global Evidence Summit

On the first day of the Global Evidence Summit (GES) in Cape Town, South Africa, in spite of jet lag and conference exhaustion, I eagerly attended the late afternoon session titled, “A panoramic view of rapid reviews: Uses and perspectives from global collaborations and networks.” During my time working at the International Initiative for Impact Evaluation (3ie), I was converted into a true believer in systematic reviews. Before my conversion I knew that literature reviews suffered from bias due to a researcher’s selection of studies to include, but I was less aware of the established methods for conducting unbiased (well, more accurately, less biased) reviews. Systematic reviews were the answer! As I worked with and on systematic reviews, however, I became frustrated to see how much time and how many resources they can take. I was keen to learn more about rapid reviews. This session provided a great overview of rapid review approaches and some of the recent advances.

5 lessons for using youth video diaries for monitoring and evaluation

How do you measure, as part of a program’s monitoring and evaluation (M&E), the process of change that young people undergo as they engage with that program? The Sharekna project in Tunisia uses youth video diaries to gain insight into the transformations that youth make as they develop resilience against external stresses like violent extremism. In this blog post, I provide five lessons from our Sharekna project to guide future M&E and research activities that use or are considering the use of youth video diaries.

Exploring the parameters of “it depends” for estimating the rate of data saturation in qualitative inquiry

In an earlier blog post on sample sizes for qualitative inquiry, we discussed the concept of data saturation – the point at which no new information or themes are observed in the data – and how researchers and evaluators often use it as a guideline when designing a study.

In the same post, we provided empirical data from several methodological studies as a starting point for sample size recommendations. We simultaneously qualified our recommendations with the important observation that each research and evaluation context is unique, and that the speed at which data saturation is reached depends on a number of factors. In this post, we explore this “it depends” qualification a little further by outlining five research parameters that most commonly affect how quickly (or slowly) data saturation is achieved in qualitative inquiry.
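To make the saturation concept concrete, here is a minimal sketch, not from the original post, of one way to operationalize it: track the thematic codes each successive interview contributes, and flag the point where a run of interviews adds nothing new. The function, the window size, and the example themes are all hypothetical.

```python
def saturation_point(coded_interviews, window=3):
    """coded_interviews: list of sets of theme codes, one set per interview,
    in the order the interviews were conducted.
    Returns the interview number after which `window` consecutive interviews
    added no new codes, or None if saturation was not reached."""
    seen = set()
    runs_without_new = 0
    for i, codes in enumerate(coded_interviews, start=1):
        new_codes = codes - seen
        seen |= codes
        if new_codes:
            runs_without_new = 0
        else:
            runs_without_new += 1
            if runs_without_new == window:
                return i - window  # last interview that added a new code
    return None

# Hypothetical example: themes identified in eight interviews
interviews = [{"cost", "access"}, {"access", "stigma"}, {"cost"},
              {"stigma"}, {"access"}, {"cost", "stigma"},
              {"access"}, {"stigma"}]
print(saturation_point(interviews))  # -> 2: no new themes after interview 2
```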

Don’t spin the bottle, please: Challenges in the implementation of probability sampling designs in the field, Part II

In part I, I reviewed how the principles of probability samples apply to household sampling for valid (inferential) statistical analysis and focused on the challenges faced in selecting enumeration areas (EAs). When we want a probability sample of households, we typically use a multi-stage design: EAs are selected in the first stage of sampling, and households are then selected from the sampled EAs in the second stage. (Additional stages may be added if deemed necessary.) In this post, I move on to the selection of households within the sampled EAs, focusing on the sampling principles, challenges, approaches and recommendations.
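For readers who like to see the mechanics, here is a minimal sketch of the two-stage logic in Python. This is an illustration under simplifying assumptions, not the field protocol discussed in the posts: the frame, EA names, and sample sizes are hypothetical; stage 1 uses PPS with replacement (real surveys typically use systematic PPS without replacement); and stage 2 assumes a complete household list exists for each sampled EA.

```python
import random

def two_stage_sample(ea_household_counts, n_eas, hh_per_ea, seed=2017):
    """ea_household_counts: dict mapping EA id -> number of households.
    Returns a list of (ea_id, household_index) pairs."""
    rng = random.Random(seed)
    ea_ids = list(ea_household_counts)
    sizes = [ea_household_counts[ea] for ea in ea_ids]
    # Stage 1: select EAs with probability proportional to size (PPS).
    # Sampling with replacement here is a simplification; a large EA can
    # be drawn more than once.
    sampled_eas = rng.choices(ea_ids, weights=sizes, k=n_eas)
    sample = []
    for ea in sampled_eas:
        # Stage 2: simple random sample of households within the EA.
        households = rng.sample(range(ea_household_counts[ea]),
                                min(hh_per_ea, ea_household_counts[ea]))
        sample.extend((ea, hh) for hh in households)
    return sample

# Hypothetical frame: four EAs; select 2 EAs, then 5 households in each
frame = {"EA-01": 120, "EA-02": 80, "EA-03": 200, "EA-04": 60}
print(two_stage_sample(frame, n_eas=2, hh_per_ea=5))
```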

Sample size is not king: Challenges in the implementation of probability sampling designs in the field, Part I

So you want a probability sample of households to measure their level of economic vulnerability; or to evaluate the HIV knowledge, attitudes, and risk behaviors of teenagers; or to understand how people use health services such as antenatal care; or to estimate the prevalence of stunting or the prevalence and incidence of HIV. You know that probability samples are needed for valid (inferential) statistical analysis. But you may ask, what does it take to obtain a rigorous probability sample?

Turning lemons into lemonade, and then drinking it: Rigorous evaluation under challenging conditions

In early 2014, USAID came to the ASPIRES project with a challenge: design and implement a prospective quantitative evaluation of a USAID-funded program in Mozambique, a combined social and economic intervention for girls at high risk of HIV infection. As a research-heavy USAID project focused on the integration of household economic strengthening and HIV prevention/treatment, ASPIRES was prepared for the task.

The challenges, however, came in the particulars of the evaluation scenario. The research team set its mind to identifying the best possible design to fulfill the request. That is to say, we sought out a recipe for lemonade amidst these somewhat lemony conditions.

Riddle me this: How many interviews (or focus groups) are enough?

The first two posts in this series describe commonly used research sampling strategies and provide some guidance on how to choose from this range of sampling methods. Here we delve further into the sampling world and address sample sizes for qualitative research and evaluation projects. Specifically, we address the often-asked question: How many in-depth interviews/focus groups do I need to conduct for my study?

A pathway for sampling success

The credibility and usefulness of our research and evaluation findings are inextricably connected to how we select participants. And, let’s admit it, for many of us, the process of choosing a sampling strategy can be drier than a vermouth-free martini, without any of the fun. So, rather than droning on about the merits of this or that sampling strategy, we present a relatively simple sampling decision tree.

Gearing up to address attrition: Cohort designs with longitudinal data

As education researchers we know that one of the greatest threats to our work is sample attrition – students dropping out of a study over time. Attrition plays havoc with our carefully designed studies by threatening internal validity and making our results uncertain. To gear up for our evaluation of the Pennsylvania State Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP), we designed a three-pronged approach to handling sample attrition. We describe it here in case it can be helpful to others.
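As general background on attrition adjustment, one standard tool is inverse probability weighting: model each student’s probability of remaining in the sample from baseline covariates, then up-weight the stayers who resemble the dropouts. Here is a minimal sketch, offered as an illustration rather than as one of our three prongs; it assumes scikit-learn is available, `df` is a pandas DataFrame, and the column names are hypothetical.

```python
from sklearn.linear_model import LogisticRegression

def attrition_weights(df, predictors, stayed_col="stayed"):
    """Fit P(stayed | baseline covariates) and return the retained sample
    with inverse-probability weights that up-weight stayers who look like
    the students who dropped out. `df` is a pandas DataFrame with one row
    per student and a 0/1 column indicating who stayed in the study."""
    model = LogisticRegression(max_iter=1000)
    model.fit(df[predictors], df[stayed_col])
    p_stay = model.predict_proba(df[predictors])[:, 1]
    out = df.assign(weight=1.0 / p_stay)
    return out.loc[out[stayed_col] == 1]

# Hypothetical usage:
# analysis_df = attrition_weights(df, ["baseline_gpa", "attendance_rate"])
# ...then pass analysis_df["weight"] to a weighted outcome model.
```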

Learning about focus groups from an RCT

In my previous job at 3ie, I spent a lot of time telling researchers that a randomized controlled trial (RCT) with a few focus groups thrown in for good measure doesn’t count as a mixed methods impact evaluation. In the course of repeatedly saying that focus groups are not enough, I must have developed an unconscious bias against focus groups, because I was pleasantly surprised by what I learned from a recently published article by some of my FHI 360 colleagues. In their study, Guest et al. use an RCT to compare the performance of individual interviews against focus groups for collecting certain data.

Emojis convey language, why not a sampling lesson too?

To help folks build stronger sampling plans for their research and evaluation projects, we present a series of three sampling posts. This first blog post explains sampling terminology and describes the most common sampling approaches using emoji-themed graphics. Ready to get started? Sit down, hold on to your hats and glasses, and enjoy the sampling ride!