Midline research reveals promising results for mLabour health application

April 1st, 2018 marked seven months since the launch of mLabour — a comprehensive labour management tool built on CommCare — in three private health facilities in Tanzania. With this experience in hand, Dimagi and FHI 360 are conducting an ongoing evaluation of the mLabour mobile application to assess its impact on adherence to clinical protocols, as well as its usability and patient satisfaction.

The midline results are now in, and we’re excited to share that the data indicates an overall improvement in adherence to clinical protocols, as well as an exceptionally high uptake of the tool.

New directions in portfolio reviews

A funder portfolio review is an evaluation of a set of programs or activities that make up a portfolio, typically defined by a sector or a place. Portfolio reviews take many forms, but the purpose is generally the same: to take stock and reflect on activities or investments in a particular area of programming. They are often requested by funders to answer straightforward questions about what’s working and what isn’t working in order to figure out what to do next. Portfolio reviews typically include a desk review of program documents and often include other data collection such as interviews and focus groups. Because funders use portfolio reviews to make strategic decisions about programmatic directions and resource allocations, innovations in this type of evaluation can bring large benefits. In this post, we briefly introduce two new directions for portfolio reviews.

3 women leading the charge in ICT4D research

It’s no secret that the technology sector is riddled with major gender disparities. In the United States, discrepancies in employment and pay are so widespread that tech firms and the government alike regularly commission reports to evaluate why women comprise less than a quarter of the tech workforce and how this stifles growth. Couple the gender imbalances in the tech sphere with those in the research world and it’s not hard to conceive of the challenges faced by women conducting research in the information and communication technologies for development (ICT4D) field. As the 10th conference on ICT4D in Lusaka, Zambia, approaches in May, I’d like to take a moment to highlight the work of several incredibly talented women powering the evidence base for ICT4D.

Through an FHI 360-funded learning agenda project, Annette Brown and I recently created an evidence map that identifies and categorizes impact evaluations across the broad and multi-sectoral beast we term ICT4D. We used a systematic review approach to identify and code 254 impact evaluations across 11 ICT4D intervention types, such as digital identity and technology-assisted learning, that provide evidence in nine sectors. Researchers in the field have been busy – in the last five years the total number of publications providing rigorous evidence in ICT4D increased 311 percent. Below, I take a look at three pieces of evidence from the map and the women behind the work.
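
For readers who want to picture what an evidence map is, the sketch below shows the idea as a simple study-count matrix. It is a hypothetical illustration, not our actual coding pipeline: the file and column names are invented stand-ins.

```python
# Purely illustrative sketch of an evidence map as a study-count matrix.
# The CSV and column names ("intervention_type", "sector") are hypothetical
# stand-ins, not the actual coding scheme from the project described above.
import pandas as pd

studies = pd.read_csv("ict4d_impact_evaluations.csv")  # hypothetical coded dataset

# Rows: intervention types (11 in the map); columns: sectors (nine in the map);
# cells: counts of impact evaluations providing evidence for that pair.
evidence_map = pd.crosstab(studies["intervention_type"], studies["sector"])
print(evidence_map)
```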

Quarterly recap of FHI 360’s blog on research and evaluation, January–March 2018

We are officially knee-deep in 2018, although many of us are still waiting for spring! The R&E Search for Evidence blog already has 13 new posts written by FHI 360 thought leaders and focused on innovative tools, research and evaluation methodologies, and new evidence related to some of our most pressing human development needs. Here are some of the highlights.

The science of humanitarian response in crisis settings

Humanitarian needs around the globe have risen dramatically over the past decades, and today we are arguably witnessing the greatest level of human suffering that the world has experienced in the past 70 years. Although the humanitarian response system is saving more lives, preventing more illness, caring for more wounded, and feeding more people than ever, we are struggling to keep pace with the growing demands of more complex crises and the changing nature of conflict. The Centre for Research on the Epidemiology of Disasters (CRED) estimated that more than 172 million people were affected by armed conflict in 2012. UNHCR estimates that there are currently 65.6 million forcibly displaced people, of whom 22.5 million are refugees who have crossed an international border. Moreover, CRED estimated that from 1994 to 2013, an average of 218 million people each year were affected by destructive natural disasters around the world.

The medical journal The Lancet published a special series of articles in 2017 to draw attention to the gaps in knowledge for addressing health needs in humanitarian crises. We summarize a few of these articles here and conclude that more implementation science and improved data systems will both be important for filling the knowledge gaps.

Gearing up to enhance college readiness for underserved students: Insights from a capacity building workshop

I was fortunate to attend the National Council for Community and Education Partnerships (NCCEP)/ Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) 2018 Capacity Building Workshop in Las Vegas, Nevada. The three-day workshop brought together GEAR UP community partners, school and district administrators, and researchers from across the country for professional learning.

Each day of the workshop began with a keynote address. Linda Cliatt-Wayman, principal and renowned school-turnaround expert, shared her motivational story of transforming Philadelphia schools on the Persistently Dangerous List. Greg Simon, President of the Biden Cancer Initiative, stressed the importance of communication, shared data, and coming together to exchange practices and strategies. Natalie Spiro, Founder and President of Drum Café West Coast, led the audience through an interactive drumming performance illustrating GEAR UP’s collective voice and unity of purpose. Even an Elvis impersonator came to liven things up!

But what I found most informative and inspiring were the seminars after each keynote. It was during these sessions that peers from various disciplines across the country could discuss their successes and difficulties, share strategies and practices, and, most importantly, exchange alternative perspectives on common issues and topics. In this post, I share some of my personal insights from those discussions.

Using a structured approach to maximize return on clinical trial research investments

With scientists, funders and policy makers becoming increasingly concerned about the stability of government investment in health research, researchers must identify ways to maximize scientific progress in the face of potentially shrinking funding. Here, I will describe a structured approach to ensure research investments produce maximal results by harnessing the information already available in rich clinical trial data sets.

Strengthening capacity to track, store and use data effectively at Catholic and independent schools

I’ll be the first to confess that my research most often focuses on the public school system, with the implicit but weak assumption that strong practices and strategies in public schools can loosely apply to non-public schools (i.e., Catholic and independent schools). But as an education researcher, I’m intimately aware that setting and populations matter, and that just because an intervention works in a public school doesn’t mean it will work in a nearby independent school.

So, I was pleasantly surprised when a few colleagues and I recently had the opportunity to work with a network of Catholic and independent schools in Minnesota to 1) support the integration of assessment data into ongoing school improvement processes, and 2) promote best practices in data collection, assessment and use. I was excited to directly apply my experience and learn about their data issues and concerns. Below I highlight some of my lessons learned in working with the schools and across the network, in the hope that they will assist other Catholic and independent schools as they seek to use data to inform their practice.

Implementation research: The unambiguous cornerstone of implementation science

Three years ago, I wrote that there was no clear consensus on the definition of implementation science in global health. Today we are no closer to agreement. In 2015, Thomas Odeny and colleagues published a review that showed 73 unique definitions, and the Implementing Best Practices Initiative conducted a survey that showed no consensus on a definition across 27 international organizations. To confuse matters more, the term is used interchangeably with implementation research; operations research; monitoring, evaluation and learning; real-world research; and other non-research approaches focused on refining implementation strategies.

With implementation science defined so broadly and no agreement on a definition, what is the key to making sense of it all? In my work, I focus on the most well-defined, understandable sub-domain of implementation science: implementation research. I argue that it’s the heart and soul of implementation science. In this blog post, I’ll define implementation research and outline why it makes for such a useful concept.

5 features of a monitoring, evaluation and learning system geared towards equity in education

A great accomplishment of the era following the 1990 World Declaration on Education for All in Jomtien, Thailand, is recognition of the gender gap in education, and the mandate for sex-disaggregated reporting from funders and multilateral agencies. Data on the dramatic access and outcome disparities between male and female learners created demand for programming focused on gender inequity. Twenty-seven years after Jomtien, there is a substantial amount of evidence on solutions that build gender equity in education, and on how education systems need to adapt to help girls and boys overcome gender-related institutional barriers.

The Education Equity Research Initiative, a collaborative partnership led by FHI 360 and Save the Children, seeks to create the same dynamic around other aspects of inequity in education – be it poverty, ethnic or racial disadvantage, migration status, or disability. As a community, we create frameworks, modules, and tools, so that little by little the reams of data that get produced include a consistent set of questions around equity of program participation and equity of outcomes.

My previous blog post speaks to the need to be deliberate in building a monitoring, evaluation and learning system that generates the data and analysis to help answer the question: are we improving education equity through our programming and policy? But how do we operationalize equity in education in the context of education development programming? In Mainstreaming Equity in Education, a paper commissioned by the International Education Funders Group, our collaborative begins by recognizing that the essential purpose of an equity-oriented monitoring, evaluation and learning (MEL) system around a program or set of interventions is not just to produce data on scope and coverage, but to allow a depth of understanding around who benefits and who doesn’t, and to offer actionable information on what to do about it. Here I outline five features that describe such a learning system.

To achieve equity in education, we need the right data

As we work to realize the Sustainable Development Goals (SDGs) related to education, it is the responsibility of every funding, implementing and research organization internationally to ask questions about our own contributions to building equity in education. While a great amount of data gets produced in the course of education projects, only a fraction provides the detail needed to assess intervention impact across different equity dimensions. At the technical and implementation level, organizations need to capture and use the necessary evidence to understand and respond to inequity in education provision and outcomes.

To do that, we need to be deliberate in building monitoring, evaluation and learning systems that generate the data and analysis that help answer the question: are we improving education equity through our programming and policy? Disaggregated data are the first step to understanding who is left behind in obtaining a quality education for successful and productive adulthood. My recent paper, Mainstreaming Equity in Education, outlines key issues and challenges that need to be addressed around equity in education, and provides a way forward for mainstreaming equity-oriented programming and data analysis. In this blog post, I show how disaggregated data can make a difference to understanding impacts. I then provide evidence that, unfortunately, such disaggregated data are rarely collected.
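
To make the point concrete, here is a minimal sketch of the kind of disaggregated analysis described above. It is an illustration under stated assumptions, not an analysis from the paper: the file name, outcome and subgroup variables are all hypothetical.

```python
# A minimal sketch of why disaggregation matters. Everything here is
# hypothetical: the file, the outcome ("literacy_score") and the equity
# dimensions ("sex", "wealth_quintile") are illustrative stand-ins.
import pandas as pd

df = pd.read_csv("endline_survey.csv")  # hypothetical project endline data

# The headline number a typical report stops at:
print("overall mean:", df["literacy_score"].mean())

# Disaggregating the same outcome can reveal who is being left behind.
gaps = df.groupby(["sex", "wealth_quintile"])["literacy_score"].agg(["mean", "count"])
print(gaps)
```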

Faster, cheaper and safer: Do UAVs live up to the hype?

Unmanned aerial vehicles (UAVs) – commonly called drones – have captured the imagination of all who know the challenges of last-mile delivery. Proponents argue that they’ll make delivery faster, cheaper and safer. Being able to transport critical supplies to remote areas faster and at lower cost is the Holy Grail of many development programs. Yet, there is very little evidence demonstrating whether UAVs live up to the hype of faster, cheaper and safer. Moreover, can they do it without sacrificing quality?

Many UAV flights have been conducted, but very few projects have shared details about how they were implemented, what they cost, how they would integrate with the health system, what impact they are having on outcomes, or what lessons have been learned. Without this information, how will decision makers know whether UAVs are likely to be useful? Below I describe some of the delivery UAV research already available that I think is useful for decision makers right now.

Moving from indicators of facility coverage and use toward capability to reduce maternal mortality

A primary indicator used to track progress toward reducing maternal mortality prior to the Sustainable Development Goals (SDGs) was the percentage of women delivering with a skilled birth attendant. The assumption was that skilled attendants would ensure women receive quality, evidence-based services. It is true that more women are delivering with skilled attendants now than in 1990, and that more deliveries are taking place in health care facilities; it is also true that maternal mortality ratios have decreased. Yet, the relationship between increased facility deliveries and reduced mortality within countries is mixed. Why is that? One explanation could be that a quality gap remains.

To understand this better, we need to move away from relying on one-dimensional indicators of coverage and use toward indicators that more adequately capture the complexity of facility capability and quality. This will help the maternal health community better track changes at health facilities and support national and subnational entities in identifying and targeting needed interventions. Together with Oona Campbell of the London School of Hygiene and Tropical Medicine (LSHTM) and colleagues, our team analyzed data from 50 countries in an article in The Lancet Maternal Health Series to characterize the availability of critical infrastructure and services where women deliver. Here I present some of those findings, which are also included in my lecture for a free online Maternal Health Series course developed by LSHTM.

Quarterly recap of FHI 360’s blog on research and evaluation, October–December 2017

As we close out the year 2017, I want to take a few moments to highlight the 16 posts from our blog this quarter. We feature posts from FHI 360 thought leaders writing about new and innovative evidence, research and evaluation practice, and analysis of methodologies used to better address global development needs.

My tribute to Peter Lamptey’s lifelong contributions to global health

Known around the world, Prof. Peter Lamptey is a global health champion by any measure. Many of you may know him from his early involvement in the global HIV response or from his fight to raise public awareness of noncommunicable diseases (NCDs). I first heard Prof. Lamptey speak about the role of laboratory science in the NCD response at a conference plenary hosted by the African Society for Laboratory Medicine, my former employer. A compelling talk for sure, but notably, his plenary was also my first significant introduction to FHI 360’s research.

Fast forward several years. I’m now editor of the very FHI 360 research blog you are reading, and Prof. Lamptey retires from FHI 360 this month. In a selfishly full-circle moment for me, I want to add my tribute to Prof. Lamptey’s immeasurable contributions to global health. Not with a speech or a party (though there have been those too), but with a blog post highlighting a few of his evidence-centered publications. To help celebrate nearly four decades of Prof. Lamptey’s accomplishments, here are three of those publications that I find interesting.

5 lessons for using youth video diaries for monitoring and evaluation

How do you measure the process of change that young people undergo as they engage with a program as part of that program’s monitoring and evaluation (M&E)? The Sharekna project in Tunisia uses youth video diaries to gain insight into the transformations that youth make as they develop resilience against external stresses like violent extremism. In this blog post, I provide five lessons from our Sharekna project to guide future M&E and research activities using or considering the use of youth video diaries.

Five things you can do today to participate in open scholarship

October 23–29 was 2017’s Open Access Week, and I was fortunate to attend the Open Scholarship for the Social Sciences symposium (O3S), one of hundreds of events around the world promoting open access, with the goal to “increase the impact of scientific and scholarly research.” Open scholarship, as defined by the Association of Research Libraries, “encompasses open access, open data, open educational resources, and all other forms of openness in the scholarly and research environment while also changing how knowledge is created and shared.” The symposium brought together professors, graduate students, librarians, archivists, and researchers from the government and non-profit sectors. Jeff Spies, co-founder of the Center for Open Science, gave the keynote lecture on the second day, and one of his recommendations was that scholars new to open scholarship start incrementally. Building on his suggestions, I offer five things you can do today to participate in open scholarship.

Seizing an opportunity to collect user experience data

Contraceptive clinical trials routinely collect vast amounts of data, but what new data can we collect about method acceptability during this research stage? If a method has reached the clinical trial phase, we’d hope formative acceptability research had already been conducted to inform its development and to determine whether a potential market exists. At this point in the game, few changes can be made to a method based on acceptability findings… so what’s left to learn?

Hypothetically speaking… If we build it, will they come?

Contraceptive product development efforts, to date, have largely been premised on the notion that if we build it, they will come. Primary attention has been paid to making products that work, with the assumption that if women want to avoid pregnancy, they will use them. While the desire to avoid pregnancy is an extremely powerful motivator, it is not enough. For many women, the fear of contraceptive side effects or the challenge associated with accessing and using contraceptives is greater than the burden of another pregnancy.

Some argue that to improve uptake and continuation rates, we need to improve provider counseling around contraceptive side effects and address socio-cultural barriers, such as inequitable gender norms, that prevent women from using contraceptives. These efforts – while essential – are still insufficient. Even the most informed and empowered women can have unintended pregnancies when they don’t have access to acceptable contraceptives – methods that meet their particular needs in their particular life stage and context.

As researchers, how do we shift the model of contraceptive development to focus first on what users want from an ideal contraceptive?

FHI 360’s R&E Search for Evidence quarterly highlights

The Research and Evaluation Strategic Initiative team published 14 posts during the last quarter in our blog, R&E Search for Evidence. For those not familiar with our blog, it features FHI 360 thought leaders who write about research and evaluation methodology and practice, compelling evidence for development programming, and new findings from recent journal publications. We have published 31 original posts to date! In this post, I will summarize our most recent posts and highlight some of my favorites.

Paper-based data collection: Moving backwards or expanding the arsenal?

Considerable effort has gone into perfecting the art of tablet data collection, which is the method typically used to collect data for evaluating education programs. The move away from paper has been a welcome shift; for many research and evaluation professionals, paper conjures images of junior staff buried under boxes of returned questionnaires, manually entering data into computers. Indeed, when our team recently began experimenting with paper-based data collection in our education projects, one colleague with decades of experience remarked warily, “It just seems like we’re moving backwards here!”

Improvements in software, however, allow us to merge new technology with “old school” methods. Digital scanners can now replace manual data entry, powered by software that reads completed questionnaires and quickly formats responses into a data set for subsequent analysis. Our team has been experimenting with new digital scanning software called Gravic to enter data from paper-based surveys easily and quickly. The Gravic digital scanning tool introduces flexibility and opens a new option for data collection across our projects, but not without some drawbacks. In this post, we make the case for paper surveys combined with the Gravic software and then review the drawbacks.
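
As a rough illustration of where the scanned data land, the sketch below shows a post-scan quality check in Python. The export file and its columns are assumptions for illustration; Gravic’s actual output format may differ.

```python
# Hedged sketch of a quality-control step after scanning. The export file
# and its layout are hypothetical; this is not Gravic's actual output format,
# just the kind of check one might run before analysis.
import pandas as pd

scans = pd.read_csv("scanner_export.csv")  # hypothetical scanner export

# Forms the software could not read cleanly show up as missing values;
# flag them so a human can pull and review the original paper form.
needs_review = scans[scans.isna().any(axis=1)]
needs_review.to_csv("forms_to_review.csv", index=False)
print(f"{len(needs_review)} of {len(scans)} scanned forms need manual review")
```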

Four tips for turning big data into big practice

Thanks to Annette Brown’s brilliant post last month, we now know what big data and data analytics are. Fantastic! The next question is: so what? Does having more data, and information from that data, mean more impact?

I’m lucky enough to be part of the Research Utilization Team at FHI 360, where it’s my job to ask (and try to answer) these kinds of questions. The goal of research utilization (also known by many other names) is to use research – and by implication data – to make a real difference by providing the right information to the right people at the right time, in ways they can understand and, with support over time, use.

So, without further ado, I present to you four practical tips for turning BIG data into BIG practice.

Book Review: Rigor Mortis – Harris says the rigor in science is dead

Richard Harris is a long-time, well-regarded science reporter for National Public Radio, so one has to wonder how he (or the publisher) came up with the title of his new book on the current state of biomedical science: “Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions.” Why is it that so many non-fiction books these days have a short, dramatic title intended to catch your eye on an airport bookrack, followed by a subtitle with an alarming description suited for a checkout-line tabloid? Perhaps I just answered my question. Rigor Mortis is itself a play on words: the medical term refers to the stiffness of a body observed in death; here it indicates that rigor in science is dead. I agree with Harris that there are some fundamental issues in the practice of science that need correction, but it would be unfortunate if Harris’s criticisms are used in support of a retreat from science.

Mobile-based surveys: Can you hear me now?

The technologies and processes we now have at our disposal to locate individuals and populations, push information to them, and gather information from or about them are being developed and refined at breakneck speed. Tools utilizing mobile technologies alone – voice services, SMS, interactive voice response (IVR), Unstructured Supplementary Service Data (USSD), location-based services, data-based survey apps, chatbots – have introduced new opportunities to reduce the time, cost, uncertainty and risk of gathering data and feedback. As mobile coverage and access have expanded globally, governments, marketing firms, research organizations and international development actors alike have been iterating on approaches for using mobile-based surveys in their initiatives and programs. This post presents key takeaway lessons on the methodology, feasibility and suitability of mobile surveys, based on experience from our Mobile Solutions, Technical Assistance and Research project (mSTAR) in Mozambique.

Big data and data analytics: I do not think it means what you think it means

With so many players latching on to the idea of big data these days, it is inconceivable that everyone has the same definition in mind. I’ve heard folks describe big data as just being the combination of existing data sets while others don’t consider data to be big until there are hundreds of thousands of observations. I’ve even seen the idea that big data just means the increasing availability of open data. There is a similar challenge with data analytics. On one end of the spectrum, data analytics is just data analysis, but with a cooler name. On the other, data analytics involves big data (really big data) and machine learning. I needed to get a grasp on the various terms and concepts for my work, so I thought I’d share some of what I learned with you. Prepare to learn.
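
Since the post draws a line between plain data analysis and the machine-learning end of data analytics, here is a toy sketch of that contrast on simulated data. It is purely illustrative and assumes scikit-learn is available; nothing in it comes from the post itself.

```python
# Toy sketch of the spectrum described above, on simulated data: a
# descriptive statistic ("data analysis") versus a fitted model that
# predicts unseen cases (the machine-learning end of "data analytics").
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                         # three arbitrary features
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)  # synthetic outcome

# Data analysis: summarize what happened.
print("share of positives:", y.mean())

# Data analytics (ML flavor): learn a pattern and score held-out rows.
model = LogisticRegression().fit(X[:800], y[:800])
print("held-out accuracy:", model.score(X[800:], y[800:]))
```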

Improving the evaluation of quality improvement

The use of quality improvement approaches (known as “QI”) to improve health care service outcomes has spread rapidly in recent years. Although QI has contributed to measurable, significant results ranging from decreased maternal mortality from postpartum hemorrhage to increased compliance with HIV standards of care, its evidence base remains questioned by researchers. The scientific community understandably wants rigorously designed evaluations, consistency in results measurement, proof that results can be attributed to specific interventions, and generalizability of findings, so that evaluation can help elevate QI to the status of a “science”. However, evaluation of QI remains a challenge, and not everyone agrees on the appropriate methodology for evaluating QI efforts.

In this post, we begin by reviewing a generic model of quality improvement and explore relevant evaluation questions for QI efforts. We then look at the arguments made by improvers and researchers for evaluation methods. We conclude by presenting an initial evaluation framework for QI developed at a recent international QI conference.

Applying the power of partnership to evaluation of a long-acting contraceptive

A long-acting, highly effective contraceptive method called the levonorgestrel intrauterine system (LNG-IUS) was first approved for use almost thirty years ago. Since then, it has become popular and widely used in high-income countries. Until recently, however, the high cost of existing products limited the method’s availability in low-resource settings. Now, new and more affordable LNG-IUS products are becoming available. In 2015, USAID convened a new working group composed of a diverse set of donors, manufacturers, and research and service delivery partners to help accelerate introduction of the method. Through this platform, FHI 360 and other members contributed to the development of a global learning agenda – a series of research questions that donors and implementing agencies agreed are priorities for evaluating the potential impact of the LNG-IUS. Working group members then implemented a simple but innovative approach to making limited research dollars go farther in addressing the learning agenda questions.

Research on integrated development: These are a few of my favorite things

You may have recently noticed an uptick in conversations within development circles on this underlying theme: A full realization of the new Sustainable Development Goals (SDGs) requires critical changes in what we do based on understanding the significant linkages between social, economic and environmental sectors. Intuitively, that seems fairly sensible. These linkages suggest that we should be using integrated approaches. But what do we know about the effectiveness of intentionally integrated approaches to development? In this post, I share a few of my very favorite examples of research that provide evidence on the effectiveness of integrated approaches.

Why should practitioners publish their research in journals?

My team recently encouraged a colleague who is collecting and analyzing data in one of our projects to submit the analysis and results to a journal. His response was “why?” He argued that his audience is not reading journals, that there are other mechanisms for getting feedback, and that since he is a technical person and not a researcher, he doesn’t need citations. My colleague believes publishing in other ways is equally valued. This post is my attempt to convince him, and you, that practitioners who are conducting research should publish in journals.

7 takeaways from changes in US education grant programs

I recently had the opportunity to attend a workshop on the U.S. Department of Education’s (ED) new Education Innovation and Research (EIR) grant competition. EIR is the successor to the Investing in Innovation (i3) grant program, which invested approximately $1.4 billion through seven competitions from 2010 to 2016 to develop, validate and scale up evidence-based programs in education. Like i3, EIR implements a tiered award structure to support programs at various levels of development. This blog post summarizes my seven takeaways from the workshop, which highlight the main changes in the transition from i3 to EIR.

Evaluation ethics: Getting it right

My personal interest in evaluation ethics goes back to my days at MDRC, where I was responsible for developing survey questions and accompanying protocols to capture domestic violence among mothers who participated in a DHHS-funded welfare-to-work program called JOBS (Job Opportunity and Basic Skills). MDRC was about three and a half years into a five-year evaluation of JOBS when our program officer asked us to include questions specifically about domestic violence in the next wave of our survey. As I recall, no one wanted to touch this – too sensitive, too volatile, too many ethical hoops to jump through – so, as the lowest rung on the ladder at the time, I was given the task. With that, I entered the world of evaluation ethics, where I quickly learned the challenges of getting it right, and the consequences of getting it wrong.

Mining for development gold: Using survey data for program design

As global health resources become scarcer and international crises more prevalent, it is more important than ever that we design and target development programs to maximize our investments. The complexity of the applicable social, political and physical environments must be taken into consideration. Formative research can help us understand these environments for program design, but it is often skipped because budgetary, time or safety concerns constrain the collection of new data. What many overlook is the vast untapped potential of existing household survey data that are rigorously collected, clean and freely available online around the world. By mining existing survey data, we can conduct the formative research necessary to maximize development impact.
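
As one hedged illustration of what that mining can look like, the sketch below computes a survey-weighted coverage indicator by region from an existing dataset. All names are hypothetical stand-ins; real recode files (e.g., from the DHS) have their own variable naming.

```python
# Hedged sketch of mining an existing household survey for formative
# insight before designing a program. The file and variable names are
# hypothetical stand-ins for a freely available dataset.
import pandas as pd

hh = pd.read_stata("household_recode.dta")  # hypothetical survey file

# Survey-weighted regional estimates can show where need is concentrated
# before any new data collection is funded.
coverage = (
    hh.assign(weighted=hh["has_bednet"] * hh["survey_weight"])
      .groupby("region")
      .apply(lambda g: g["weighted"].sum() / g["survey_weight"].sum())
)
print(coverage.sort_values())
```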

Null results should produce answers, not excuses

I recently served on a Center for Global Development (CGD) panel to discuss a new study of the effects of community-based education on learning outcomes in Afghanistan (Burde, Middleton, and Samii 2016). This exemplary randomized evaluation finds some important positive results. But the authors do one thing in the study that almost all impact evaluation researchers do – where they have null results, they make what I call, for the sake of argument, excuses.