Seizing an opportunity to collect user experience data

Contraceptive clinical trials routinely collect vast amounts of data, but what new data can we collect about method acceptability during this research stage? If a method has reached the clinical trial phase, we’d hope formative acceptability research was already conducted to inform its development and to determine if a potential market exists. At this point in the game, few changes can be made to a method based on acceptability findings… so what’s left to learn?

Hypothetically speaking… If we build it, will they come?

Contraceptive product development efforts, to date, have largely been premised on the notion that if we build it, they will come. Primary attention has been paid to making products that work, with the assumption that if women want to avoid pregnancy, they will use them. While the desire to avoid pregnancy is an extremely powerful motivator, it is not enough. For many women, the fear of contraceptive side effects or the challenge associated with accessing and using contraceptives is greater than the burden of another pregnancy.

Some argue that to improve uptake and continuation rates, we need to improve provider counseling around contraceptive side effects and address socio-cultural barriers, such as inequitable gender norms, that prevent women from using contraceptives. These efforts – while essential – are still insufficient. Even the most informed and empowered women can have unintended pregnancies when they don’t have access to acceptable contraceptives – methods that meet their particular needs in their particular life stage and context.

As researchers, how do we shift the model of contraceptive development to focus first on what users want from an ideal contraceptive?

FHI 360’s R&E Search for Evidence quarterly highlights

The Research and Evaluation Strategic Initiative team published 14 posts in our blog, R&E Search for Evidence, during the last quarter. For those not familiar with the blog, it features FHI 360 thought leaders who write about research and evaluation methodology and practice, compelling evidence for development programming, and new findings from recent journal publications. We have published 31 original posts to date! In this post, I will summarize our most recent posts and highlight some of my favorites.

Paper-based data collection: Moving backwards or expanding the arsenal?

Considerable effort has gone into perfecting the art of tablet data collection, which is the method typically used to collect data for evaluating education programs. The move away from paper has been a welcome shift, as for many research and evaluation professionals, paper conjures images of junior staff buried under boxes of returned questionnaires manually entering data into computers. Indeed, when our team recently began experimenting with paper-based data collection in our education projects, one colleague with decades of experience remarked warily, “It just seems like we’re moving backwards here!”

Improvements in software, however, allow us to merge new technology with “old school” methods. Digital scanners can now replace manual data entry, powered by software that reads completed questionnaires and quickly formats responses into a data set for subsequent analysis. Our team has been experimenting with new digital scanning software called Gravic to enter data from paper-based surveys quickly and easily. The Gravic digital scanning tool introduces flexibility and opens a new option for data collection across our projects, but not without some drawbacks. In this post, we make the case for paper surveys combined with the Gravic software and then review the drawbacks.
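
What does that scan-to-dataset step look like in practice? The sketch below is an illustration only, not Gravic’s actual interface: it assumes the scanning software exports responses as a CSV, and the column names and the 1–5 valid response range are hypothetical. It shows the kind of automated range-checking that replaces a clerk’s visual review.

```python
# A minimal sketch of post-scan validation, assuming the scanning
# software exports survey responses as CSV. All column names and
# the 1-5 valid response range are hypothetical.
import io

import pandas as pd

# Stand-in for a real export file produced by the scanning software.
sample_export = io.StringIO(
    "respondent_id,q1,q2,q3\n"
    "001,2,5,1\n"
    "002,3,,4\n"   # blank cell: a mark the scanner could not read
    "003,9,2,2\n"  # 9 is outside the valid 1-5 range
)

df = pd.read_csv(sample_export, dtype={"respondent_id": str})

likert_items = ["q1", "q2", "q3"]  # items whose valid responses are 1-5
# Flag rows with out-of-range or unreadable values for manual review,
# mirroring the checks a data-entry clerk would otherwise perform.
bad_value = df[likert_items].apply(lambda s: ~s.between(1, 5) & s.notna())
unreadable = df[likert_items].isna()
df["needs_review"] = bad_value.any(axis=1) | unreadable.any(axis=1)

print(df)
```

Rows flagged as needing review would then be pulled and compared against the paper originals, so human checking is focused only where the scanner struggled.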

Four tips for turning big data into big practice

Thanks to Annette Brown’s brilliant post last month, we now know what big data and data analytics are. Fantastic! The next question is: so what? Does having more data, and information from that data, mean more impact?

I’m lucky enough to be part of the Research Utilization Team at FHI 360, where it’s my job to ask (and try to answer) these kinds of questions. The goal of research utilization (a practice that goes by many other names) is to use research – and by implication data – to make a real difference: getting the right information to the right people, at the right time, in ways they can understand and, with support over time, put to use.

So, without further ado, I present to you four practical tips for turning BIG data into BIG practice.

Book Review: Rigor Mortis – Harris says the rigor in science is dead

Richard Harris is a long-time, well-regarded science reporter for National Public Radio, so one has to wonder how he (or the publisher) came up with the title of his new book on the current state of biomedical science: “Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions.” Why is it that so many non-fiction books these days have a short, dramatic title intended to catch your eye on an airport bookrack, followed by a subtitle with an alarming description suited for a checkout-line tabloid? Perhaps I just answered my question. Rigor Mortis is itself a play on words: the medical term refers to the stiffness of a body observed in death; here it indicates that rigor in science is dead. I agree with Harris that there are some fundamental issues in the practice of science that need correction, but it would be unfortunate if Harris’s criticisms were used in support of a retreat from science.

Mobile-based surveys: Can you hear me now?

The technologies and processes we now have at our disposal to locate individuals and populations, push information to them, and gather information from or about them are being developed and refined at breakneck speed. Tools utilizing mobile technologies alone – voice services, SMS, Interactive Voice Response (IVR), Unstructured Supplementary Service Data (USSD), location-based services, data-based survey apps, chatbots – have introduced new opportunities to reduce the time, cost, uncertainty and risk in gathering data and feedback. As mobile coverage and access have expanded globally, governments, marketing firms, research organizations and international development actors alike have been iterating on approaches for using mobile-based surveys in their initiatives and programs. This post presents key takeaway lessons regarding the methodology, feasibility and suitability of using mobile surveys based on experience from our Mobile Solutions, Technical Assistance and Research project (mSTAR) in Mozambique.
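
To make the mechanics concrete, here is a minimal, self-contained sketch of one common pattern: an SMS question-and-answer flow. It is an illustration only – in a real deployment the send and receive steps would go through an SMS gateway provider’s API, which is simulated below, and the questions are hypothetical.

```python
# A minimal sketch of an SMS survey flow. No real gateway is used;
# the send/receive calls are simulated for illustration, and the
# questions are hypothetical.
QUESTIONS = [
    ("age", "Q1: How old are you? Reply with a number."),
    ("own_phone", "Q2: Do you own this phone? Reply YES or NO."),
]

def run_session(replies):
    """Walk one respondent through the survey, one SMS per question."""
    answers = {}
    for key, prompt in QUESTIONS:
        print(f"-> SEND: {prompt}")  # stand-in for a gateway send call
        reply = next(replies, None)  # stand-in for the inbound SMS
        if reply is None:            # respondent stopped replying
            break
        answers[key] = reply.strip()
    return answers

# Simulated inbound replies from one respondent.
print(run_session(iter(["27", "YES"])))
```

The same question-by-question structure underlies IVR and USSD flows as well; what changes is the channel and how long a session can stay open.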

Big data and data analytics: I do not think it means what you think it means

With so many players latching on to the idea of big data these days, it is inconceivable that everyone has the same definition in mind. I’ve heard folks describe big data as just being the combination of existing data sets while others don’t consider data to be big until there are hundreds of thousands of observations. I’ve even seen the idea that big data just means the increasing availability of open data. There is a similar challenge with data analytics. On one end of the spectrum, data analytics is just data analysis, but with a cooler name. On the other, data analytics involves big data (really big data) and machine learning. I needed to get a grasp on the various terms and concepts for my work, so I thought I’d share some of what I learned with you. Prepare to learn.

Improving the evaluation of quality improvement

The use of quality improvement approaches (known as “QI”) to improve health care service outcomes has spread rapidly in recent years. Although QI has contributed to significant, measurable results ranging from decreased maternal mortality from postpartum hemorrhage to increased compliance with HIV standards of care, its evidence base remains questioned by researchers. The scientific community understandably wants rigorously designed evaluations, consistency in results measurement, proof of attribution of results to specific interventions, and generalizability of findings so that evaluation can help to elevate QI to the status of a “science.” However, evaluation of QI remains a challenge, and not everyone agrees on the appropriate methodology for evaluating QI efforts.

In this post, we begin by reviewing a generic model of quality improvement and exploring relevant evaluation questions for QI efforts. We then look at the arguments that improvers and researchers make for different evaluation methods. We conclude by presenting an initial evaluation framework for QI developed at a recent international QI conference.

Applying the power of partnership to evaluation of a long-acting contraceptive

A long-acting, highly effective contraceptive method called the levonorgestrel intrauterine system (LNG-IUS) was first approved for use almost thirty years ago. Since then, it has become popular and widely used in high-income countries. Until recently, however, the high cost of existing products has limited availability of the method in low-resource settings. Now, new and more affordable LNG-IUS products are becoming available. In 2015, USAID convened a working group comprising a diverse set of donors, manufacturers, and research and service delivery partners to help accelerate introduction of the method. Through this platform, FHI 360 and other members contributed to the development of a global learning agenda – a series of research questions that donors and implementing agencies agreed were priorities for evaluating the potential impact of the LNG-IUS. Working group members then implemented a simple but innovative approach to making limited research dollars go farther in addressing the learning agenda questions.

Research on integrated development: These are a few of my favorite things

You may have recently noticed an uptick in conversations within development circles on this underlying theme: A full realization of the new Sustainable Development Goals (SDGs) requires critical changes in what we do based on understanding the significant linkages between social, economic and environmental sectors. Intuitively, that seems fairly sensible. These linkages suggest that we should be using integrated approaches. But what do we know about the effectiveness of intentionally integrated approaches to development? In this post, I share a few of my very favorite examples of research that provide evidence on the effectiveness of integrated approaches.

Why should practitioners publish their research in journals?

My team recently encouraged a colleague who is collecting and analyzing data in one of our projects to submit the analysis and results to a journal. His response was “why?” He argued that his audience is not reading journals, that there are other mechanisms for getting feedback, and that since he is a technical person and not a researcher, he doesn’t need citations. My colleague believes publishing in other ways is equally valued. This post is my attempt to convince him, and you, that practitioners who are conducting research should publish in journals.

7 takeaways from changes in US education grant programs

I recently had the opportunity to attend a workshop on the U.S. Department of Education’s (ED) new Education Innovation and Research (EIR) grant competition. EIR is the successor to the Investing in Innovation (i3) grant program, which invested approximately $1.4 billion through seven competitions from 2010 to 2016 to develop, validate and scale up evidence-based programs in education. Like i3, EIR uses a tiered award structure to support programs at various levels of development. This blog post summarizes my seven takeaways from the workshop, which highlight the main changes in the transition from i3 to EIR.

Evaluation ethics: Getting it right

My personal interest in evaluation ethics goes back to my days at MDRC, where I was responsible for developing survey questions and accompanying protocols to capture domestic violence among mothers who participated in a DHHS-funded welfare-to-work program called JOBS (Job Opportunity and Basic Skills). MDRC was about three and a half years into a five-year evaluation of JOBS when our program officer asked us to include questions specifically about domestic violence in the next wave of our survey. As I recall, no one wanted to touch this – too sensitive, too volatile, too many ethical hoops to jump through – so, as the lowest rung on the ladder at the time, I was given the task. With that, I entered the world of evaluation ethics, where I quickly learned the challenges of getting it right, and the consequences of getting it wrong.

Mining for development gold: Using survey data for program design

As global health resources become scarcer and international crises more frequent, it is more important than ever that we design and target development programs to maximize our investments. The complexity of the applicable social, political and physical environments must be taken into consideration. Formative research can help us understand these environments for program design, but it is often skipped because budgetary, time or safety concerns constrain the collection of new data. What many overlook is the vast untapped potential of existing household survey data from around the world that are rigorously collected, clean and freely available online. By mining existing survey data, we can conduct the formative research necessary to maximize development impact.
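
As a concrete, if simplified, illustration of what that mining can look like, the sketch below filters a hypothetical household survey extract to profile a program’s target group. The variable names, values and weights are invented for the example; real public datasets (such as the DHS) come with their own codebooks and survey weights.

```python
# A hedged sketch of "mining" an existing household survey for
# formative research. All variables below are hypothetical; real
# datasets have their own codebooks and weighting schemes.
import pandas as pd

# Stand-in for a survey file downloaded from a public repository.
survey = pd.DataFrame({
    "region":    ["north", "north", "south", "south", "south"],
    "age":       [19, 34, 22, 41, 17],
    "has_radio": [1, 0, 1, 1, 0],
    "weight":    [1.2, 0.8, 1.0, 1.1, 0.9],  # survey weight
})

# Profile the program's target group: youth in the southern region.
target = survey[(survey["region"] == "south") & (survey["age"] < 25)]

# Weighted share of the target group reachable by radio, a typical
# design question (which channels can deliver program messages?).
reach = (target["has_radio"] * target["weight"]).sum() / target["weight"].sum()
print(f"Radio reach among southern youth: {reach:.0%}")
```

Even this small example answers a real design question – which channels can reach the target group – without collecting any new data.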

Null results should produce answers, not excuses

I recently served on a Center for Global Development (CGD) panel to discuss a new study of the effects of community-based education on learning outcomes in Afghanistan (Burde, Middleton, and Samii 2016). This exemplary randomized evaluation finds some important positive results. But the authors do one thing in the study that almost all impact evaluation researchers do – where they have null results, they make what I call, for the sake of argument, excuses.