Call for papers: Optimizing the impact of key population programming across the HIV cascade

Key populations – including men who have sex with men, sex workers, transgender people, and people who inject drugs – shoulder a disproportionate burden of HIV. UNAIDS estimates that between 40 and 50 percent of all new HIV infections among adults worldwide occur in these key populations and among their sex partners. Reaching members of these communities with evidence-based interventions that improve their access to and uptake of services across the HIV prevention, care, and treatment cascade is essential to achieving the UNAIDS 90-90-90 goals. In this post, I highlight a new call for papers that will focus on new evidence and data-driven strategies for improving key population programming across the HIV cascade.

Show me the evidence: Cultivating knowledge on governance and food security

I recently participated in a salon on integrating governance and food security work to enhance development outcomes. Convened by the LOCUS coalition and FHI 360, the salon gathered experts in evaluation, governance and food security to review challenges and best practices for generating evidence and knowledge. A post-salon discussion recorded with Annette Brown and Joseph Sany speaks to the gaps in evidence and the need to more accurately measure how governance principles influence food security outcomes.

I came away from the salon thinking that while there is a real hunger for evidence, large gaps remain, and the literature still diverges on things as basic as definitions. With that in mind, I wanted to dig deeper into what evidence is actually out there and consider what needs to be done to move this budding evidence base forward. In this post, I highlight three pieces of interesting research that contribute to the evidence base on governance and food security integration, and then propose a few suggestions for growing that knowledge base.

Seizing an opportunity to collect user experience data

Contraceptive clinical trials routinely collect vast amounts of data, but what new data can we collect about method acceptability during this research stage? If a method has reached the clinical trial phase, we’d hope formative acceptability research had already been conducted to inform its development and to determine whether a potential market exists. At this point in the game, few changes can be made to a method based on acceptability findings… so what’s left to learn?

Hypothetically speaking… If we build it, will they come?

Contraceptive product development efforts have, to date, largely been premised on the notion that if we build it, they will come. Primary attention has been paid to making products that work, on the assumption that if women want to avoid pregnancy, they will use them. While the desire to avoid pregnancy is an extremely powerful motivator, it is not enough. For many women, the fear of contraceptive side effects or the challenge of accessing and using contraceptives outweighs the burden of another pregnancy.

Some argue that to improve uptake and continuation rates, we need to improve provider counseling around contraceptive side effects and address socio-cultural barriers, such as inequitable gender norms, that prevent women from using contraceptives. These efforts – while essential – are still insufficient. Even the most informed and empowered women can have unintended pregnancies when they don’t have access to acceptable contraceptives – methods that meet their particular needs in their particular life stage and context.

As researchers, how do we shift the model of contraceptive development to focus first on what users want from an ideal contraceptive?

FHI 360’s R&E Search for Evidence quarterly highlights

The Research and Evaluation Strategic Initiative team published 14 posts on our blog, R&E Search for Evidence, during the last quarter. For those not familiar with our blog, it features FHI 360 thought leaders who write about research and evaluation methodology and practice, compelling evidence for development programming, and new findings from recent journal publications. We have published 31 original posts to date! In this post, I will summarize our most recent posts and highlight some of my favorites.

Exploring the parameters of “it depends” for estimating the rate of data saturation in qualitative inquiry

In an earlier blog post on sample sizes for qualitative inquiry, we discussed the concept of data saturation – the point at which no new information or themes are observed in the data – and how researchers and evaluators often use it as a guideline when designing a study.

In the same post, we provided empirical data from several methodological studies as a starting point for sample size recommendations. At the same time, we qualified our recommendations with the important observation that each research and evaluation context is unique, and that the speed at which data saturation is reached depends on a number of factors. In this post, we explore this “it depends” qualification a little further by outlining five research parameters that most commonly affect how quickly (or slowly) data saturation is achieved in qualitative inquiry.
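
As a toy illustration of the concept (not from the original post – the theme counts below are invented), one way to operationalize saturation is to track how many new themes each successive interview adds and flag the point at which several interviews in a row add none:

```python
# Hypothetical codebook growth: the number of *new* themes observed in
# each successive interview (invented data for illustration).
new_themes = [7, 5, 3, 2, 1, 1, 0, 0, 0, 0]

def saturation_point(counts, window=3):
    """Return the number of interviews after which `window` consecutive
    interviews contributed zero new themes, or None if never reached."""
    for i in range(len(counts) - window + 1):
        if all(c == 0 for c in counts[i:i + window]):
            return i  # saturation reached after interview i
    return None

print(saturation_point(new_themes))  # -> 6
```

Real studies, of course, differ on the window and on what counts as a new theme – which is precisely the “it depends” this post unpacks.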

Paper-based data collection: Moving backwards or expanding the arsenal?

Considerable effort has gone into perfecting the art of tablet data collection, which is the method typically used to collect data for evaluating education programs. The move away from paper has been a welcome shift, as for many research and evaluation professionals, paper conjures images of junior staff buried under boxes of returned questionnaires manually entering data into computers. Indeed, when our team recently began experimenting with paper-based data collection in our education projects, one colleague with decades of experience remarked warily, “It just seems like we’re moving backwards here!”

Improvements in software, however, now allow us to merge new technology with “old school” methods. Digital scanners can replace manual data entry, powered by software that reads completed questionnaires and quickly formats responses into a data set for subsequent analysis. Our team has been experimenting with new digital scanning software from Gravic to enter data from paper-based surveys easily and quickly. The Gravic digital scanning tool introduces flexibility and opens a new option for data collection across our projects, but not without some drawbacks. In this post, we make the case for paper surveys combined with the Gravic software and then review the drawbacks.

Four tips for turning big data into big practice

Thanks to Annette Brown’s brilliant post last month, we now know what big data and data analytics are. Fantastic! The next question is: so what? Does having more data, and information from that data, mean more impact?

I’m lucky enough to be part of the Research Utilization Team at FHI 360, where it’s my job to ask (and try to answer) these kinds of questions. The goal of research utilization (also known by many other names) is to use research – and by implication data – to make a real difference by getting the right information to the right people, at the right time, in ways they can understand and, with support over time, act on.

So, without further ado, I present to you four practical tips for turning BIG data into BIG practice.

The science of beating HIV and AIDS

The International AIDS Society (IAS) Conference on HIV Science (IAS 2017) in Paris last week brought together over 6,000 scientists, clinicians, public health practitioners and officials to review the state of the science intended to control and eventually end the HIV/AIDS epidemic. The central thrust of the global effort to control the epidemic is achieving the 90-90-90 targets set by the Joint United Nations Programme on HIV/AIDS (UNAIDS). These targets state that, by 2020, 90% of those living with HIV will know their status, 90% of known HIV-positive individuals will receive sustained antiretroviral therapy (ART), and 90% of individuals on ART will have durable viral suppression. We know that HIV-infected persons with viral suppression, while not cured, do not transmit HIV – hence the focus on treatment as prevention.
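
It is worth spelling out how the three targets compound: each 90% applies to the group reached by the previous target, not to all people living with HIV, so full achievement implies viral suppression for roughly 73% of all people living with HIV. A quick sketch of the arithmetic:

```python
# The 90-90-90 targets are conditional: each 90% applies to the group
# reached by the previous target, not to all people living with HIV.
know_status = 0.90                # share of PLHIV who know their status
on_art = 0.90 * know_status       # share of PLHIV on sustained ART
suppressed = 0.90 * on_art        # share of PLHIV virally suppressed

print(f"{suppressed:.1%}")        # 72.9% of all PLHIV virally suppressed
```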

While some countries have made encouraging progress, we are far short of the global 90-90-90 targets, and worse, there were 1.8 million new HIV infections in 2016. We need science to inform the way forward to reach the targets. In this post, I will report on some of the conference presentations around two major themes: 1) generating and applying evidence to optimize the use of current tools, and 2) developing new and improved methods for HIV prevention, care and treatment.

Book Review: Rigor Mortis – Harris says the rigor in science is dead

Richard Harris is a long-time, well-regarded science reporter for National Public Radio, so one has to wonder how he (or the publisher) came up with the title of his new book on the current state of biomedical science: “Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions.” Why is it that so many non-fiction books these days have a short, dramatic title intended to catch your eye on an airport bookrack, followed by a subtitle with an alarming description suited for a checkout-line tabloid? Perhaps I just answered my own question. Rigor Mortis is itself a play on words: the medical term refers to the stiffness of a body after death; here it indicates that rigor in science is dead. I agree with Harris that there are some fundamental issues in the practice of science that need correction, but it would be unfortunate if Harris’s criticisms were used to support a retreat from science.

Teasing apart stigma and knowledge as barriers to HIV testing: A study with young Black adults in Durham, North Carolina

What experiences do young Black adults in Durham, North Carolina, have with HIV testing? And what role does stigma play in those experiences? To answer these questions, my co-authors and I recently published the results of a community-based participatory research (CBPR) study: Relationship between HIV knowledge, HIV-related stigma, and HIV testing among young Black adults in a southeastern city. Our cross-sectional survey examined barriers, facilitators and contributors to HIV testing. This blog post summarizes our findings and provides guidance on HIV prevention strategies.
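
For readers curious about the mechanics, here is a minimal sketch of the kind of analysis such a cross-sectional survey supports – the file name and variable names are invented placeholders, not those of the published study:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical cross-sectional survey extract (invented columns).
df = pd.read_csv("hiv_testing_survey.csv")

# Model ever having tested for HIV as a function of HIV knowledge and
# HIV-related stigma scores, adjusting for age and gender.
model = smf.logit(
    "ever_tested ~ hiv_knowledge + stigma_score + age + C(gender)",
    data=df,
)
print(model.fit().summary())
```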

Mobile-based surveys: Can you hear me now?

The technologies and processes we now have at our disposal to locate individuals and populations, push information to them, and gather information from or about them are being developed and refined at breakneck speed. Tools utilizing mobile technologies alone – voice services, SMS, Interactive Voice Response (IVR), Unstructured Supplementary Service Data (USSD), location-based services, data-based survey apps, chatbots – have introduced new opportunities to reduce the time, cost, uncertainty and risk of gathering data and feedback. As mobile coverage and access have expanded globally, governments, marketing firms, research organizations and international development actors alike have been iterating on approaches for using mobile-based surveys in their initiatives and programs. This post presents key takeaway lessons regarding the methodology, feasibility and suitability of mobile surveys, based on experience from our Mobile Solutions, Technical Assistance and Research project (mSTAR) in Mozambique.

Don’t waste evidence on the youth! Recent data highlights education and employment trends

A recent New York Times article describes a major contemporary challenge facing governments: the world has too many young people. A quarter of the world’s population is young (ages 10-24), and the majority live in developing countries. Policy makers are struggling with high levels of youth unemployment in every country, but a key challenge in developing countries has been a lack of data on education and employment characteristics. To fill this evidence gap, FHI 360’s Education Policy and Data Center (EPDC) recently added country-level Youth Education and Employment profiles to the resources available on our website. In this post, I describe the data and how they were collected, and I give some examples of how these data can be used to inform policy making and program design.

LINKAGES research digest highlights ability of young key populations to access and remain in the HIV care cascade

On a quarterly basis, the LINKAGES project releases a research digest comprising the latest peer-reviewed article abstracts related to HIV and key populations (KPs) – sex workers, men who have sex with men (MSM), transgender people, and people who inject drugs. LINKAGES is PEPFAR and USAID’s largest global project dedicated to using evidence-based approaches to reduce HIV transmission among KPs and improve their enrollment and retention in care. KPs are at the highest risk of contracting HIV and often face formidable barriers to accessing prevention, care, and treatment services. The research digest keeps implementers and researchers up to date on the rapidly expanding evidence base pertaining to HIV services for KPs on a global scale. So, what did we learn about young KPs and HIV in the past quarter?

Don’t spin the bottle, please: Challenges in the implementation of probability sampling designs in the field, Part II

In part I, I reviewed how the principles of probability samples apply to household sampling for valid (inferential) statistical analysis and focused on the challenges faced in selecting enumeration areas (EAs). When we want a probability sample of households, multi-stage sampling approaches are usually used, where EAs are selected in the first stage of sampling and households then selected from the sampled EAs in the second stage. (Additional stages may be added if deemed necessary.) In this post, I move on to the selection of households within the sampled EAs. I’ll focus on the sampling principles, challenges, approaches and recommendations.
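
To make the second stage concrete, one widely used option is systematic random sampling from a household listing within each sampled EA. A minimal sketch under that assumption (the listing and sample size below are invented for illustration):

```python
import random

def systematic_sample(households, n):
    """Select n households via systematic random sampling:
    a random start followed by every k-th unit, where k = N / n."""
    N = len(households)
    k = N / n                      # sampling interval (may be fractional)
    start = random.uniform(0, k)   # random start in [0, k)
    return [households[int(start + i * k)] for i in range(n)]

# Example: 10 households from a listing of 120 in one sampled EA.
listing = [f"HH-{i:03d}" for i in range(1, 121)]
print(systematic_sample(listing, 10))
```

The fractional interval keeps the sample size exact even when the listing size is not a multiple of n.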

Sample size is not king: Challenges in the implementation of probability sampling designs in the field, Part I

So you want a probability sample of households to measure their level of economic vulnerability; or to evaluate the HIV knowledge, attitudes, and risk behaviors of teenagers; or to understand how people use health services such as antenatal care; or to estimate the prevalence of stunting or the prevalence and incidence of HIV. You know that probability samples are needed for valid (inferential) statistical analysis. But you may ask, what does it take to obtain a rigorous probability sample?

Big data and data analytics: I do not think it means what you think it means

With so many players latching on to the idea of big data these days, it is inconceivable that everyone has the same definition in mind. I’ve heard folks describe big data as just being the combination of existing data sets while others don’t consider data to be big until there are hundreds of thousands of observations. I’ve even seen the idea that big data just means the increasing availability of open data. There is a similar challenge with data analytics. On one end of the spectrum, data analytics is just data analysis, but with a cooler name. On the other, data analytics involves big data (really big data) and machine learning. I needed to get a grasp on the various terms and concepts for my work, so I thought I’d share some of what I learned with you. Prepare to learn.

Turning lemons into lemonade, and then drinking it: Rigorous evaluation under challenging conditions

In early 2014, USAID came to the ASPIRES project with a challenge: design and implement a prospective quantitative evaluation of a USAID-funded program in Mozambique that combined social and economic support for girls at high risk of HIV infection. As a research-heavy USAID project focused on integrating household economic strengthening with HIV prevention and treatment, ASPIRES was prepared for the task.

The challenges, however, came in the particulars of the evaluation scenario. The research team set its mind to identifying the best possible design to fulfill the request. That is to say, we sought out a recipe for lemonade amidst these somewhat lemony conditions.

Improving the evaluation of quality improvement

The use of quality improvement approaches (known as “QI”) to improve health care service outcomes has spread rapidly in recent years. Although QI has contributed to measurable results ranging from decreased maternal mortality from postpartum hemorrhage to increased compliance with HIV standards of care, its evidence base remains questioned by researchers. The scientific community understandably wants rigorously designed evaluations, consistency in results measurement, proof that results are attributable to specific interventions, and generalizability of findings, so that evaluation can help elevate QI to the status of a “science”. However, evaluating QI remains a challenge, and not everyone agrees on the appropriate methodology for doing so.

In this post, we begin by reviewing a generic model of quality improvement and explore relevant evaluation questions for QI efforts. We then look at the arguments made by improvers and researchers for evaluation methods. We conclude by presenting an initial evaluation framework for QI developed at a recent international QI conference.

Applying the power of partnership to evaluation of a long-acting contraceptive

A long-acting, highly effective contraceptive method called the levonorgestrel intrauterine system (LNG-IUS) was first approved for use almost thirty years ago. Since then, it has become popular and widely used in high-income countries. Until recently, however, the high cost of existing products has limited availability of the method in low-resource settings. Now, new and more affordable LNG-IUS products are becoming available. In 2015, USAID convened a diverse working group of donors, manufacturers, and research and service delivery partners to help accelerate introduction of the method. Through this platform, FHI 360 and other members contributed to the development of a global learning agenda – a series of research questions that donors and implementing agencies agreed are priorities for evaluating the potential impact of the LNG-IUS. Working group members then implemented a simple but innovative approach to making limited research dollars go farther in addressing the learning agenda questions.

Research on integrated development: These are a few of my favorite things

You may have recently noticed an uptick in conversations within development circles on this underlying theme: A full realization of the new Sustainable Development Goals (SDGs) requires critical changes in what we do based on understanding the significant linkages between social, economic and environmental sectors. Intuitively, that seems fairly sensible. These linkages suggest that we should be using integrated approaches. But what do we know about the effectiveness of intentionally integrated approaches to development? In this post, I share a few of my very favorite examples of research that provide evidence on the effectiveness of integrated approaches.

Riddle me this: How many interviews (or focus groups) are enough?

The first two posts in this series describe commonly used research sampling strategies and provide some guidance on how to choose from this range of sampling methods. Here we delve further into the sampling world and address sample sizes for qualitative research and evaluation projects. Specifically, we address the often-asked question: How many in-depth interviews/focus groups do I need to conduct for my study?

Why should practitioners publish their research in journals?

My team recently encouraged a colleague who is collecting and analyzing data in one of our projects to submit the analysis and results to a journal. His response was “why?” He argued that his audience is not reading journals, that there are other mechanisms for getting feedback, and that since he is a technical person and not a researcher, he doesn’t need citations. My colleague believes publishing in other ways is equally valued. This post is my attempt to convince him, and you, that practitioners who are conducting research should publish in journals.

Academia takes on global health

The Consortium of Universities for Global Health (CUGH) held its 8th Annual Conference in Washington, DC, this week. More than 1,700 people from every corner of the globe gathered for three days to explore how the world’s academic institutions can best contribute to improving global health. This year’s meeting was particularly interesting given the contrast between current prospects for financial support for global health and the trajectory of support over the last 15 years. That contrast made several of the key topics discussed at CUGH even more salient to me.

7 takeaways from changes in US education grant programs

I recently had the opportunity to attend a workshop on the U.S. Department of Education’s (ED) new Education Innovation and Research (EIR) grant competition. EIR is the successor to the Investing in Innovation (i3) grant program, which invested approximately $1.4 billion through seven competitions from 2010 to 2016 to develop, validate and scale up evidence-based programs in education. Like i3, EIR uses a tiered award structure to support programs at various levels of development. This blog post summarizes my seven takeaways from the workshop, which highlight the main changes in the transition from i3 to EIR.

Evaluation ethics: Getting it right

My personal interest in evaluation ethics goes back to my days at MDRC, where I was responsible for developing survey questions and accompanying protocols to capture domestic violence among mothers who participated in a DHHS-funded welfare-to-work program called JOBS (Job Opportunity and Basic Skills). MDRC was about three and a half years into a five-year evaluation of JOBS when our program officer asked us to include questions specifically about domestic violence in the next wave of our survey. As I recall, no one wanted to touch this – too sensitive, too volatile, too many ethical hoops to jump through – so, as the lowest rung on the ladder at the time, I was given the task. With that, I entered the world of evaluation ethics, where I quickly learned the challenges of getting it right and the consequences of getting it wrong.

A pathway for sampling success

The credibility and usefulness of our research and evaluation findings are inextricably connected to how we select participants. And, let’s admit it, for many of us, the process of choosing a sampling strategy can be drier than a vermouth-free martini, without any of the fun. So, rather than droning on comparing this or that sampling strategy, we present a relatively simple sampling decision tree.

How many scientific facts are there about science, technology, and innovation for development?

There is a lot of excitement these days about science, technology, and innovation and the potential for these activities to contribute to economic and social development globally. The flurry of activity raises the question: how much of this excitement is supported by scientific facts? To help answer this question, the US Global Development Lab at USAID commissioned a project to create and populate a map of the evidence base for science, technology, innovation, and partnerships (STIP). As part of the project, scoping research was conducted to identify not just where there are evidence clusters and gaps, but also where the demand for new evidence by stakeholders is greatest. In the recently published scoping paper, my co-authors and I analyze the data in the map together with the information from the stakeholders to recommend priorities for investment in new research on STIP. While there is good evidence out there, new research is necessary for strategies and programming to fully benefit from scientific facts. In this post, I briefly describe the research we conducted, summarize a few of the many findings, and list some of our recommendations.

Gearing up to address attrition: Cohort designs with longitudinal data

As education researchers, we know that one of the greatest threats to our work is sample attrition – students dropping out of a study over time. Attrition plays havoc with our carefully designed studies by threatening internal validity and making our results uncertain. To gear up for our evaluation of the Pennsylvania State Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP), we designed a three-pronged approach to handling sample attrition. We describe it here in case it can be helpful to others.
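
One standard ingredient of any such approach – shown here as a generic sketch, not necessarily one of our three prongs – is to inflate the baseline sample so that the analytic sample survives the expected attrition:

```python
import math

def inflate_for_attrition(n_required, annual_attrition, years):
    """Baseline enrollment needed for n_required students to remain
    after `years` of compounding annual attrition."""
    retention = (1.0 - annual_attrition) ** years
    return math.ceil(n_required / retention)

# Example: 800 analyzable students after 4 years at 10% loss per year.
print(inflate_for_attrition(800, 0.10, 4))  # -> 1220
```

Inflation protects statistical power but not internal validity – differential attrition still has to be handled by design and analysis.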

Learning about focus groups from an RCT

In my previous job at 3ie, I spent a lot of time telling researchers that a randomized controlled trial (RCT) with a few focus groups thrown in for good measure doesn’t count as a mixed methods impact evaluation. In the course of repeatedly saying that focus groups are not enough, I must have developed an unconscious bias against focus groups, because I was pleasantly surprised by what I learned from a recently published article written by some of my FHI 360 colleagues. In their study, Guest et al. use an RCT to compare the performance of individual interviews against focus groups for collecting certain types of data.

Mining for development gold: Using survey data for program design

As global health resources become scarcer and international crises more prevalent, it is more important than ever that we design and target development programs to maximize our investments. The complexity of the applicable social, political and physical environments must be taken into consideration. Formative research can help us understand these environments for program design, but it is often skipped due to budgetary, time or safety concerns that constrain the collection of new data. What many overlook is the vast untapped potential of existing household survey data that are rigorously collected, clean and freely available online for countries around the world. By mining existing survey data, we can conduct the formative research necessary to maximize development impact.
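
As a flavor of what that mining can look like – the file and variable names below are invented placeholders, not a real dataset – a few lines of analysis can already point program designers toward underserved areas:

```python
import pandas as pd

# Hypothetical extract from an existing, freely available household
# survey (column names are invented placeholders).
df = pd.read_csv("household_survey_extract.csv")

# Formative question: where is modern contraceptive use lowest among
# married women aged 15-49?
women = df[(df["sex"] == "female")
           & (df["marital_status"] == "married")
           & df["age"].between(15, 49)]
by_region = women.groupby("region")["uses_modern_method"].mean().sort_values()
print(by_region.head())  # candidate regions for targeted programming
```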

Should PEPFAR be renamed the “President’s Epidemiologic Plan for AIDS Relief”?

The effective use of data within PEPFAR has played a central role in getting us to the point where we can finally talk about controlling the HIV epidemic and creating an AIDS-free generation. PEPFAR’s transition from an emergency approach to one driven by real-time use of granular, site-level data to guide programmatic investments has contributed to achieving epidemic control. In view of this improved use of data, perhaps the “Emergency” in PEPFAR should now be changed to “Epidemiologic.”

Beyond research: Using science to transform women’s lives

It was a warm spring day in 2011, and eight of my colleagues were helping me celebrate the realization of a long-awaited policy change in Uganda by sipping tepid champagne out of kid-sized paper cups. A colleague asked me, amazed, “How did you guys pull this off? What’s your secret to changing national policy?” I offered up some words about patience, doggedness, and committed teamwork. My somewhat glib response is still true, but since then I’ve thought a lot about what it takes to get a policy changed.

How to find the journal that is just right

Goldilocks had it easy. She only had three chairs, three bowls of porridge and three beds to choose from, and the relevant features were pretty straightforward. It is not so easy to pick the right journal for publishing your research. First, there are hundreds of journals to choose from. Second, there are various features that differentiate them. And finally, some journals, like the three bears, are predatory and should be avoided. So how to find the journal that is just right for your research?

Null results should produce answers, not excuses

I recently served on a Center for Global Development (CGD) panel to discuss a new study of the effects of community-based education on learning outcomes in Afghanistan (Burde, Middleton, and Samii 2016). This exemplary randomized evaluation finds some important positive results. But the authors do one thing in the study that almost all impact evaluation researchers do – where they have null results, they make what I will call, for the sake of argument, excuses.