Midline research reveals promising results for mLabour health application


April 1st, 2018 marked seven months since the launch of mLabour — a comprehensive labour management tool built on CommCare — in three private health facilities in Tanzania. With this experience in hand, Dimagi and FHI 360 are conducting an ongoing evaluation of the mLabour mobile application to assess its impact on clinical adherence, usability and patient satisfaction.

The midline results are now in, and we’re excited to share that the data indicates an overall improvement in adherence to clinical protocols, as well as an exceptionally high uptake of the tool.

Unpacking PLCs: What evidence do we have about professional learning communities and how can we produce more?


Improving teaching quality to ensure that all children in school obtain the skills and knowledge they are meant to acquire has become a key objective internationally. Together with the growing recognition of the need to improve teaching, there is also a realization that the isolation teachers often face is not conducive to collaborative learning and improved teaching practices. In developing countries, many teachers, particularly those in rural areas teaching in multi-grade classrooms, often feel isolated and disconnected from their peers.

Developing countries have recently resorted to alternative models of teacher professional development, such as professional learning communities, or PLCs, to improve teaching quality and promote an approach to teacher development that is both social and contextual. PLCs have become so popular that many education systems in developing countries, as well as education development programs, include a PLC component as part of an overall professional development plan. PLCs enable teachers to cope with isolation, strengthening solidarity, camaraderie and teachers’ self-confidence as professionals. Although PLCs are in vogue and have been recently implemented in many Latin American and African countries, the concept of PLC originated in and has been applied and studied more widely in the context of developed countries, mainly the United States and United Kingdom.

In this post, we further define PLCs and review existing evidence on the effect of PLCs. We then outline FHI 360-funded research that we have initiated to study PLCs in three low- and middle-income countries: Equatorial Guinea, Ghana and Nigeria.

A mixed method research design for understanding how ICT tools assist climate change adaptation


Research conducted by Ericsson and the Earth Institute on the role of information and communication technology (ICT) in achieving the Sustainable Development Goals concludes that “every goal — from ending poverty and halting climate change to fighting injustice and inequality — can be positively impacted by ICT” (Ericsson, 2015). Projects utilizing ICT for climate change adaptation in developing countries indicate great potential for new technologies, such as mobile phones, and traditional technologies, such as radio broadcasts, to improve data gathering and dissemination of information on adaptation options (Ospina and Heeks, 2010). What is lacking, however, is evidence of the impact of combining multiple technologies with an institutional framework supporting the generation and dissemination of climate and agricultural information. An assessment of the use of multiple ICTs — such as mobile phones, FM radio and community loudspeakers — combined with institutional arrangements to support ICT deployment is needed to better tailor the design of ICT interventions for climate change adaptation in developing countries.

In this post, we present the research design for an ongoing study of the Climate Change Adaptation and ICT (CHAI) project in Uganda. Our study investigates how the current approach of CHAI with its multiple ICT tools, institutional arrangements and local-to-national actors contributes to program impact. We intend for the findings of our study to inform the design of information and communication technologies for development (ICT4D) programs in the future.

New directions in portfolio reviews


A funder portfolio review is an evaluation of a set of programs or activities that make up a portfolio, typically defined by a sector or a place. Portfolio reviews take many forms, but the purpose is generally the same: to take stock and reflect on activities or investments in a particular area of programming. They are often requested by funders to answer straightforward questions about what’s working and what isn’t working in order to figure out what to do next. Portfolio reviews typically include a desk review of program documents and often include other data collection such as interviews and focus groups. Because funders use portfolio reviews to make strategic decisions about programmatic directions and resource allocations, innovations in this type of evaluation can bring large benefits. In this post, we briefly introduce two new directions for portfolio reviews.

What can we learn from fidelity of implementation monitoring models within early grade reading programs?


Early grade reading programs have become a focus of significant investment in the international development community in recent years. These interventions often include similar components: the development of mother-tongue teaching and learning materials including structured teacher guides and pupil books; teacher professional development including in-service training, ongoing coaching, and professional learning communities; and community engagement around reading. The theory of change posits that, in combination, these components will lead to improved reading skills for pupils. However, this involves a certain leap of faith, because we don’t usually know what teachers do in their classrooms when the door is closed.

We believe the effectiveness of early grade reading programs requires a clear understanding of the extent to which these programs are implemented according to design at the classroom level. In other words, it requires a clear understanding of the fidelity of implementation (FOI) of the programs, to enable identification of gaps in programming and of steps to improve implementation. Currently, FOI monitoring is central to many early grade reading programs around the world, including smaller pilot programs, mid-sized interventions and programs at scale. The data is viewed as highly useful because it is so actionable – in fact, our experience has shown that governments are often very interested in integrating classroom-level FOI data into their own monitoring systems.

In designing our own FHI 360 FOI monitoring systems, we found that there are a number of different models with wide-ranging cost and sustainability implications. In this post, we provide an overview of FOI, describe the FOI monitoring models from two of our own early grade reading projects in Ghana and Nigeria, and outline a research study that aims to see what we could learn from them.

3 women leading the charge in ICT4D research


It’s no secret that the technology sector is riddled with major gender disparities. In the United States, discrepancies in employment and pay are so widespread that tech firms and the government alike regularly commission reports to evaluate why women comprise less than a quarter of the tech workforce and how this stifles growth. Couple the gender imbalances in the tech sphere with those in the research world and it’s not hard to conceive of the challenges faced by women conducting research in the information and communication technologies for development (ICT4D) field. As the 10th conference on ICT4D in Lusaka, Zambia, approaches in May, I’d like to take a moment to highlight the work of several incredibly talented women powering the evidence base for ICT4D.

Through an FHI 360-funded learning agenda project, Annette Brown and I recently created an evidence map that identifies and categorizes impact evaluations across the broad and multi-sectoral beast we term ICT4D. We used a systematic review approach to identify and code 254 impact evaluations across 11 ICT4D intervention types, such as digital identity and technology-assisted learning, that provide evidence in nine sectors. Researchers in the field have been busy – in the last five years the total number of publications providing rigorous evidence in ICT4D increased 311 percent. Below, I take a look at three pieces of evidence from the map and the women behind the work.

Quarterly recap of FHI 360’s blog on research and evaluation, January–March 2018


We are officially knee-deep in 2018, although many of us are still waiting for spring! The R&E Search for Evidence blog already has 13 new posts written by FHI 360 thought leaders and focused on innovative tools, research and evaluation methodologies, and new evidence related to some of our most pressing human development needs. Here are some of the highlights.

Addressing bias in our systematic review of STEM research


Research is a conversation. Researchers attempt to answer a study question, and then other groups of researchers support, contest or expand on those findings. Over the years, this process produces a body of evidence representing the scientific community’s conversation on a given topic. But what did those research teams have to say? What did they determine is the answer to the question? How did they arrive at that answer?

That is where a systematic review enters the conversation. We know, for example, that a significant amount of research exists exploring gender differences in mathematics achievement, but it is unclear how girls’ math identity contributes to or ameliorates this disparity. In response, we are conducting a systematic review to understand how improving girls’ math identity supports their participation, engagement and achievement in math. This review will assist us in moving from a more subjective understanding of the issue to a rigorous and unbiased assessment of the current evidence to date.

Developing a systematic review protocol requires thoughtful decision-making about how to reduce various forms of bias at each stage of the process. Below we discuss some of the decisions made to reduce bias in our systematic review exploring girls’ math identity, in the hopes that it will inform others undertaking similar efforts.

The science of humanitarian response in crisis settings


Humanitarian needs around the globe have risen dramatically over the past decades, and today we are arguably witnessing the greatest level of human suffering that the world has experienced in the past 70 years. Although the humanitarian response system is saving more lives, preventing more illness, caring for more wounded, and feeding more people than ever, we are struggling to keep pace with the growing demands of more complex crises and the changing nature of conflict. The Centre for Research on the Epidemiology of Disasters (CRED) estimated that more than 172 million people were affected by armed conflict in 2012. UNHCR estimates that there are currently 65.6 million forcibly displaced people, of which 22.5 million are refugees who have crossed an international border. Moreover, CRED estimated that from 1994 to 2013, 218 million people on average each year were affected by destructive natural disasters around the world.

The medical journal The Lancet published a special series of articles in 2017 to draw attention to the gaps in knowledge for addressing health needs in humanitarian crises. We summarize a few of these articles here and conclude that more implementation science and improved data systems will both be important for filling the knowledge gaps.

Recent research contributes to ending TB


Tuberculosis (TB) is a top 10 global killer. HIV/AIDS no longer tops the list thanks to a global response focused on diagnosis and treatment. TB, however, remains one of the world’s deadliest infectious diseases and its impact is disproportionately felt in low- and middle-income countries. According to the World Health Organization (WHO), more than 95% of TB deaths occur in these resource-limited settings.

Despite the scary statistics, progress has been made over the last few decades. Diagnosis and treatment saved more than 50 million lives between 2000 and 2016. The public health community built on that foundation and adopted the End TB strategy, which aims to end the epidemic by 2030 as part of the Sustainable Development Goals. The strategy uses a blueprint based on three pillars: 1) integrated, patient-centered care and prevention, 2) bold policies and supportive systems, and 3) intensified research and innovation.

Most readers might rightfully think that controlling TB is simply about making sure patients receive their medication, but there are other spokes in the wheel working to create a comprehensive and effective response to the epidemic. These equally important spokes can include things like transporting viable patient samples to the testing laboratory, using appropriate medications in proper doses, or developing shorter treatment regimens. In the spirit of rolling a complete wheel down the road, I want to highlight three innovative research papers that address these sometimes overlooked issues within resource-limited settings.

Gearing up to enhance college readiness for underserved students: Insights from a capacity building workshop


I was fortunate to attend the National Council for Community and Education Partnerships (NCCEP)/ Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) 2018 Capacity Building Workshop in Las Vegas, Nevada. The three-day workshop brought together GEAR UP community partners, school and district administrators, and researchers from across the country for professional learning.

Each day of the workshop began with a keynote address. Linda Cliatt-Wayman, principal and renowned school-turnaround expert, shared her motivational story of transforming Philadelphia schools on the Persistently Dangerous List. Greg Simon, President of the Biden Cancer Initiative, stressed the importance of communication, shared data, and coming together to share practices and strategies. Natalie Spiro, Founder and President of Drum Café West Coast, led the audience through an interactive drumming performance illustrating GEAR UP’s collective voice and unity of purpose. Even an Elvis impersonator came to liven things up!

But what I found most informative and inspiring were the seminars after each keynote. It was during these sessions that peers from various disciplines across the country could discuss their successes and difficulties, share strategies and practices, and, most importantly, exchange alternate perspectives on common issues and topics. In this post, I share some of my personal insights from those discussions.

Using a structured approach to maximize return on clinical trial research investments


With scientists, funders and policy makers becoming increasingly concerned about the stability of government investment in health research, scientists must identify ways to maximize scientific progress in the face of potentially narrowed funding. Here, I will describe a structured approach to ensure research investments produce maximal results by harnessing the information already available in rich clinical trial data sets.

Strengthening capacity to track, store and use data effectively at Catholic and independent schools


I’ll be the first to confess that my research most often focuses on the public school system, with the inherent but weak assumption that strong practices and strategies in public schools can loosely apply to non-public schools (i.e., Catholic and independent schools). But as an education researcher, I’m intimately aware that setting and populations matter and just because an intervention works in a public school doesn’t mean it will work in a nearby independent school.

So, I was pleasantly surprised when I and a few of my colleagues recently had the opportunity to work with a network of Catholic and independent schools in Minnesota to 1) support the integration of assessment data into ongoing school improvement processes, and 2) promote best practices in data collection, assessment and use. I was excited to be able to directly apply my experience and learn about their data issues and concerns. Below I highlight some of my lessons learned in working with the schools and across the network, in the hopes that it will assist other Catholic and independent schools as they seek to use data to inform their practice.

Searching for social norms measures related to modern contraceptive use


Social norms are a hot topic in development. While it seems intuitive that social norms can play a key role in influencing people’s behavior (in both negative and positive ways), the jury is still out on the best way for programs to address norms. We do know that in order to provide evidence on the power of norms change to improve behavior, we need effective ways to measure and monitor norms.

In 2017, as part of the USAID-funded Passages Project, we set out to systematically assess what empirical evidence exists around the relationship between social norms and use of modern family planning methods. You can read the results of our literature review in a new report published in Studies in Family Planning. In this blog post, we detail the methodology of our review and then provide recommendations for bringing greater consistency and comparability to social norm measures.

Generating evidence for going to scale in multisectoral nutrition programming


How many proven effective public health projects have you been involved in that have been scaled up to have national – or even global – impact? Those of us working in this field know it’s all too rare for an intervention to catch on and spread like wildfire. Rather, most successful interventions fizzle out when project funds dry up or donor interest is gone.

So, what can we do to increase the chances that an effective intervention is adopted? At FHI 360, we are trying to answer this question using the USAID-funded Strengthening Multisectoral Nutrition Programming through Implementation Science Activity (MSNP) as the foundation for two new multisectoral research studies.

Implementation research: The unambiguous cornerstone of implementation science


Three years ago, I wrote that there was no clear consensus on the definition of implementation science in global health. Today we are no closer to agreement. In 2015, Thomas Odeny and colleagues published a review that showed 73 unique definitions, and the Implementing Best Practices Initiative conducted a survey that showed no consensus on a definition across 27 international organizations. To confuse matters more, the term is used interchangeably with implementation research; operations research; monitoring, evaluation and learning; real-world research; and other non-research approaches focused on refining implementation strategies.

Since implementation science is defined so broadly and because no one can agree on a definition, what is the key to making sense of it all? In my work, I focus on the most well-defined, understandable sub-domain of implementation science: implementation research. I argue that it’s the heart and soul of implementation science. In this blog post, I’ll define implementation research and outline why it makes for such a useful concept.

Investigating STEM and the importance of girls’ math identity


Despite significant progress in closing the gender gap in science, technology, engineering and math (also known as STEM), inequities in girls’ and women’s participation and persistence in math and across STEM education and careers remain. According to the U.S. Census Bureau, women make up nearly half of the U.S. workforce but just 26 percent of STEM workers, as of 2011. Within STEM, the largest number of new jobs is in the computer science and math fields; however, the gender gap in these careers has widened rather than narrowed, with female representation declining since 2000.

While much of the current STEM research has focused heavily on the barriers and reasons why there aren’t more girls or women in STEM-related fields, here we argue that future research must focus on how to design and develop effective approaches, practices, situations, tools, and materials to foster girls’ interest and engagement.

5 features of a monitoring, evaluation and learning system geared towards equity in education


A great accomplishment arising from the era following the 1990 World Declaration on Education for All in Jomtien, Thailand, is recognition of the gender gap in education, and the mandate for sex-disaggregated reporting from funders and multilateral agencies. Data on the dramatic access and outcome disparities between male and female learners created demand for programming focused on gender inequity. Twenty-seven years after Jomtien, there is a substantial amount of evidence on solutions that build gender equity in education, and on how education systems need to adapt to help girls and boys overcome gender-related institutional barriers.

The Education Equity Research Initiative, a collaborative partnership led by FHI 360 and Save the Children, seeks to create the same dynamic around other aspects of inequity in education – be it poverty, ethnic or racial disadvantage, migration status, or disability. As a community, we create frameworks, modules, and tools, so that little by little the reams of data that get produced include a consistent set of questions around equity of program participation and equity of outcomes.

My previous blog post speaks to the need to be deliberate in building a monitoring, evaluation and learning system that generates the data and analysis that help answer the question: are we improving education equity through our programming and policy? But how do we operationalize equity in education, in the context of education development programming? In Mainstreaming Equity in Education, a paper commissioned by the International Education Funders Group, our collaborative begins by recognizing that an equity-oriented monitoring, evaluation and learning (MEL) system around a program or set of interventions has an essential purpose not just to produce data on scope and coverage, but to allow for depth of understanding around who benefits and doesn’t, and offer actionable information on what to do about it. Here I outline five features that describe such a learning system.

To achieve equity in education, we need the right data


As we work to realize the Sustainable Development Goals (SDGs) related to education, it is the responsibility of every funding, implementing and research organization internationally to be asking questions about our own contributions to building equity in education. While a great amount of data gets produced in the course of education projects, only a fraction provides the detail that is needed to assess intervention impact on different equity dimensions. At the technical and implementation level, organizations need to capture and use the necessary evidence to understand and respond to inequity in education provision and outcomes.

To do that, we need to be deliberate in building monitoring, evaluation and learning systems that generate the data and analysis that help answer the question: are we improving education equity through our programming and policy? Disaggregated data are the first step to understanding who is left behind in obtaining a quality education for successful and productive adulthood. My recent paper, Mainstreaming Equity in Education, outlines key issues and challenges that need to be addressed around equity in education, and provides a way forward for mainstreaming equity-oriented programming and data analysis. In this blog post, I show how disaggregated data can make a difference to understanding impacts. I then provide evidence that, unfortunately, such disaggregated data are rarely collected.
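To make the value of disaggregation concrete, here is a minimal sketch (with entirely hypothetical numbers, not data from the paper) of how an aggregate outcome can mask a subgroup gap:

```python
# Hypothetical assessment records: (sex, passed). The aggregate pass rate
# looks acceptable, but disaggregating by sex reveals a large gap.
records = [
    ("F", True), ("F", False), ("F", False), ("F", False),
    ("M", True), ("M", True), ("M", True), ("M", False),
]

def pass_rate(rows):
    """Share of records where the learner passed."""
    return sum(1 for _, passed in rows if passed) / len(rows)

overall = pass_rate(records)
by_sex = {
    sex: pass_rate([r for r in records if r[0] == sex])
    for sex in ("F", "M")
}
print(overall)  # 0.5 overall
print(by_sex)   # {'F': 0.25, 'M': 0.75}: girls pass at a third the rate of boys
```

The same pattern applies to any equity dimension for which data are collected, such as poverty, disability or migration status; the prerequisite is that the disaggregating variable is captured in the first place.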

Faster, cheaper and safer: Do UAVs live up to the hype?


Unmanned aerial vehicles (UAVs) – commonly called drones – have captured the imagination of all who know the challenges of last-mile delivery. Proponents argue that they’ll make delivery faster, cheaper and safer. Being able to transport critical supplies to remote areas faster and for less cost without sacrificing quality is the Holy Grail of many development programs. Yet, there is very little evidence demonstrating whether UAVs live up to the hype of faster, cheaper and safer. Moreover, can they do it without sacrificing quality?

Many UAV flights have been conducted, but very few have shared details about how these projects were implemented, what they cost, how they would integrate with the health system, what impact they are having on outcomes, or what lessons have been learned. Without this information, how will decision makers know if they’re likely to be useful or not? Below I describe some of the delivery UAV research already available that I think is useful for decision makers right now.

Moving from indicators of facility coverage and use toward capability to reduce maternal mortality


A primary indicator that tracked progress toward reducing maternal mortality prior to the Sustainable Development Goals (SDGs) was the percentage of women delivering with a skilled birth attendant. The assumption was that skilled attendants would ensure women receive quality, evidence-based services. It is true that more women are delivering with skilled attendants now than in 1990, and that more deliveries are taking place in health care facilities; it is also true that ratios of maternal mortality have decreased. Yet, the relationship between increased facility deliveries and reduced mortality within countries is mixed. Why is that? One explanation could be that a quality gap remains.

To understand this better, we need to move away from relying on one-dimensional indicators of coverage and use toward indicators that more adequately capture the complexity of facility capability and quality. This will help the maternal health community better track changes at health facilities and support national and subnational entities to identify and target needed interventions. Together with Oona Campbell of the London School of Hygiene and Tropical Medicine (LSHTM) and colleagues, our team analyzed data from 50 countries in an article in The Lancet Maternal Health Series to characterize the availability of critical infrastructure and services where women deliver. Here I present some of those findings that are also included in my lecture that is part of a free online Maternal Health Series course developed by LSHTM.

Quarterly recap of FHI 360’s blog on research and evaluation, October–December 2017


As we close out the year 2017, I want to take a few moments to highlight the 16 posts from our blog this quarter. We feature posts from FHI 360 thought leaders writing about new and innovative evidence, research and evaluation practice, and analysis of methodologies used to better address global development needs.

Beyond DALYs: New measures for a new age of development


Integrated development is an approach that employs the design and delivery of programs across sectors to produce an amplified, lasting impact on people’s lives. Integrated programs are based on the premise that the interaction between interventions from multiple sectors will generate benefits beyond a stand-alone intervention. As human development interventions take this more holistic approach, funders and program implementers alike recognize the importance of understanding the impact of multi-sector interventions. While we can continue to use sector-specific measures of impact – for instance, Disability-Adjusted Life Years (DALYs) – this creates an apples-and-oranges problem if one wishes to compare across interventions. It raises the question: can we move towards a single performance metric to assess the effectiveness and cost-effectiveness of integrated programs? Here at FHI 360, we are attempting to answer this question by developing a new measurement tool – MIDAS (Measuring the Impact of Development Across Sectors).

In this post, we discuss the need for better measures, describe our conceptual framework, and then present some of the key components of the tool. We conclude by demonstrating how the tool worked when piloted in a current FHI 360 project.

Outsmarting TB using research and collaboration


Tuberculosis (TB), which in 2016 killed an estimated 1.7 million people, is an ancient disease found in the bones of mummies dug up from Peru. It has evolved with humans, and like other successful organisms, finds ways to avoid death, so it can thrive and spread to the next person. Trying to get ahead of this successful adversary requires pursuing a consistent, aggressive research agenda aided by international collaboration.

Strong leadership advocating for TB research, like the leadership offered by our teams at FHI 360, is making it possible for the hardest hit regions of the world to do the kind of research that will have a major impact locally – and globally. In this blog post, I will describe two of our major TB research endeavors, recently highlighted during the 48th Union World Conference on Lung Health in Guadalajara, Mexico, that focus on the importance and promise of coordinated, collaborative research efforts in high-burden countries.

Increasing HIV detection with incentive-based, peer-referral approach known as Risk Tracing Snowball Approach


Cambodia has achieved measurable success in its fight against HIV. Prevalence has dropped more than 60% from 1998 to 2015, and the number of new HIV cases fell more than 80% over the same period. However, as in many countries, going the last mile to fully eliminate the AIDS epidemic requires innovative approaches to reach individuals unaware of their HIV status – especially among hard-to-reach key populations – and link them to treatment to achieve viral load suppression.

A consortium of three NGOs – KHANA, FHI 360, and Population Services International (PSI) – working with the HIV/AIDS Flagship Project funded by USAID came up with an intervention study idea. Would we detect more newly identified HIV cases if we asked people who walked in for HIV testing (presumably high-risk) to refer their peers whom they think are at risk for HIV infection for testing? We hypothesized yes. If we were right, it would be brilliant, and we could help Cambodia’s National Centre for HIV/AIDS, Dermatology and STD (NCHADS) scale up the intervention and avert hundreds of new infections every year. The same Risk Tracing Snowball Approach (RTSA) had proven successful among high-risk heterosexuals in the United States, among drug users in Greece, and among heterosexual couples in China. We set out to determine if it would work in Cambodia, and I describe our intervention design and evaluation outcomes in this blog post.
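Mechanically, this kind of peer-referral design amounts to a breadth-first traversal of referral chains: each walk-in "seed" refers peers, those peers are tested and asked to refer in turn, and newly identified cases are counted along the way. The sketch below is purely illustrative (all names, referral links and test results are hypothetical, not data from the Cambodia study):

```python
from collections import deque

# Hypothetical referral chains: who each tested person referred for testing.
referrals = {
    "seed1": ["p1", "p2"],
    "p1": ["p3"],
    "p2": [],
    "p3": ["p4", "p5"],
}
hiv_positive = {"p1", "p4"}  # hypothetical test results

def snowball(seeds, referrals):
    """Breadth-first traversal of referral chains; returns everyone tested."""
    tested, queue = set(), deque(seeds)
    while queue:
        person = queue.popleft()
        if person in tested:  # don't re-test someone referred twice
            continue
        tested.add(person)
        queue.extend(referrals.get(person, []))
    return tested

tested = snowball(["seed1"], referrals)
new_cases = tested & hiv_positive
print(len(tested), len(new_cases))  # 6 people tested, 2 new cases found
```

The evaluation question is then whether this traversal reaches more undiagnosed cases per test than routine walk-in testing alone.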

From the Red Book to the Blue Book: Advancing HIV surveillance among key populations


The World Health Organization (WHO) recently published new global guidelines on how to do biobehavioral surveys related to HIV infection in a document known as the Blue Book. The Blue Book replaces the previous Red Book of guidelines for such surveys. The updated guidance keeps the global health community abreast of the evolving HIV epidemic, which has led to 37 million people currently living with HIV infection. Biobehavioral surveys provide population-level estimates for the burden of HIV disease and HIV-related risk factors, and they allow estimation of the coverage of prevention and treatment services for key populations that are at increased risk for HIV. Advances in available data and changes in the epidemic rendered the survey tools and guidelines in the Red Book out-of-date. In this blog post, we’re going to highlight how the new Blue Book addresses these critical gaps to deliver a manual better suited to the era of ending AIDS.

My tribute to Peter Lamptey’s lifelong contributions to global health

Known around the world, Prof. Peter Lamptey is a global health champion in any light. Many of you may know him from his early involvement in the global HIV response or from his fight to raise public awareness of noncommunicable diseases (NCDs). I first heard Prof. Lamptey speak about the role of laboratory science in the NCD response at a conference plenary hosted by the African Society for Laboratory Medicine, my former employer. A compelling talk for sure, but notably his plenary was also my first significant introduction to FHI 360’s research.

Fast forward several years. I’m now editor of the very FHI 360 research blog you are reading, and Prof. Lamptey retires from FHI 360 this month. In a selfishly full-circle moment for me, I want to add my tribute to Prof. Lamptey’s immeasurable contributions to global health. Not with a speech or a party (though there have been those too), but with a blog post highlighting a few of his evidence-centered publications. To help celebrate nearly four decades of Prof. Lamptey’s accomplishments, here are three of those publications that I find interesting.

How fast is rapid? What I learned about rapid reviews at Global Evidence Summit

On the first day of the Global Evidence Summit (GES) in Cape Town, South Africa, in spite of jet lag and conference exhaustion, I eagerly attended the late afternoon session titled, “A panoramic view of rapid reviews: Uses and perspectives from global collaborations and networks.” During my time working at the International Initiative for Impact Evaluation (3ie), I was converted into a true believer in systematic reviews. Before my conversion I knew that literature reviews suffered from bias due to a researcher’s selection of studies to include, but I was less aware of the established methods for conducting unbiased (well, more accurately, less biased) reviews. Systematic reviews were the answer! As I worked with and on systematic reviews, however, I became frustrated to see how much time and how many resources they can take. I was keen to learn more about rapid reviews. This session provided a great overview of rapid review approaches and some of the recent advances.

5 lessons for using youth video diaries for monitoring and evaluation

How do you measure the process of change that young people undergo as they engage with a program as part of that program’s monitoring and evaluation (M&E)? The Sharekna project in Tunisia uses youth video diaries to gain insight into the transformations that youth make as they develop resilience against external stresses like violent extremism. In this blog post, I provide five lessons from our Sharekna project to guide future M&E and research activities using or considering the use of youth video diaries.

Five things you can do today to participate in open scholarship

October 23–29 was 2017’s Open Access Week, and I was fortunate to attend the Open Scholarship for the Social Sciences symposium (O3S), one of the hundreds of events around the world to promote open access, with the goal to “increase the impact of scientific and scholarly research.” Open scholarship, as defined by the Association of Research Libraries, “encompasses open access, open data, open educational resources, and all other forms of openness in the scholarly and research environment while also changing how knowledge is created and shared.” The symposium brought together professors, graduate students, librarians, archivists, and researchers from the government and non-profit sectors. Jeff Spies, co-founder of the Center for Open Science, gave the keynote lecture on the second day, and one of his recommendations was that scholars new to open scholarship start incrementally. Building on his suggestions, I offer five things you can do today to participate in open scholarship.

New research on open government data in developing countries: What we can learn from case studies

A group of advocates met outside San Francisco in 2007 to pen eight principles of open government data, with the intent of initiating a new era of democratic innovation and economic opportunity. In the decade since, governments have slowly opened up their data sets as a public resource, and some have even adopted other open data as their official data. More than 15 national governments (and 25 local governments) have now embraced the principles of open data, recognizing its potential to foster greater transparency, empower citizens, combat corruption, and generally enhance governance. Despite this gradual movement toward open government data, very little is actually known about its impact – what, where, how and under what conditions does open data work?

To address this gap, the Development Informatics (DevInfo) team of USAID’s Global Development Lab and FHI 360’s Mobile Solutions Technical Assistance and Research (mSTAR) project, in collaboration with a research consortium from The GovLab and New York University, embarked on a research project to clarify the value of existing open data initiatives in developing countries. In this blog post, I will use examples from our team’s recent research to highlight what people have done with open government data in developing countries.

20 years of advances in global health data: Now there’s an app for that

Bill Gates appreciates the importance of high-quality data to guide decisions. He was the keynote speaker at the 20th anniversary symposium of the Global Burden of Disease (GBD) study in Seattle recently. Gates was followed by Jim Kim, President of the World Bank. These two very heavy hitters – one, the richest man in the world, and one who runs the largest investment bank for developing nations – are huge fans of high-quality data that can be used to make decisions and guide investments in global health and development. I’m using the occasion of this anniversary to share more about the GBD study with our readers and to highlight the innovative use of data to improve global health.

Evidence in a post-truth world from Global Evidence Summit

Recently I wrote a post about the Global Evidence Summit, which I attended in September. I commented that one thing I really liked about the conference is that it had some great plenary sessions. In this post, I highlight some of the key ideas that came from the fourth plenary of the conference, titled “Evidence in a Post-Truth World.”

The plenary started with Trish Greenhalgh, who is a Professor of Primary Care Health Sciences at the University of Oxford. Greenhalgh’s bio for the conference lists three areas her work covers, one of which is the complex links (philosophical and empirical) between research, policy and practice. Her speech focused on these philosophical links. She started with Aristotle’s On Rhetoric, in which he lays out three means of persuasion for an orator. The first is logos, which we can think of as evidence. The second is ethos, which we can think of as credibility, and the third is pathos, which we can think of as emotion. Greenhalgh suggested that the post-truth world is one where logos – evidence – is no longer a useful tool of persuasion, leaving us only with the other two.

Research improves handwashing programs by uncovering drivers of behavior change

Evidence on the health and social benefits of handwashing is strong. We know that handwashing can prevent up to 40% of diarrheal diseases, and can lead to fewer school absences and increased economic productivity. However, many people don’t wash their hands at critical times, even when handwashing facilities are available. While research on behavior change has shown examples of approaches that lead to increased rates in handwashing, we’re still seeking to understand why people wash their hands, and how motivation for handwashing can be translated into programs that result in effective behavior change.

In celebration of Global Handwashing Day on October 15, USAID and the Global Handwashing Partnership – an international coalition with a Secretariat hosted by FHI 360 – organized a webinar on drivers for handwashing behavior change. The Partnership’s work focuses on promoting handwashing with soap as key to health and development, with an emphasis on connecting practitioners with research findings to inform their work. Our webinar speakers provided two examples of how research is exploring behavior change from cognitive (how we think about and understand handwashing) and automatic (how we can be unconsciously prompted to wash our hands) standpoints. In this blog post, I’ll summarize how the two examples show different ways of understanding human behavior and discuss how the findings help us understand what drives behavior change for handwashing.

What I learned about evidence networks at the Global Evidence Summit

A couple of weeks ago, I was fortunate to attend the Global Evidence Summit (GES) in Cape Town, South Africa. GES was billed as being the first conference of its kind, jointly organized by the Cochrane Collaboration, the Campbell Collaboration, and several other groups to focus on evidence-based policy making across sectors. Those of us who attended the What Works Global Summit in London last September considered GES the second conference of its kind, and we were excited to reconnect with each other this year in Cape Town.

The conference brought together researchers and policy analysts in the fields of health, education, and international development to explore that long and often tortuous path between a single study and a policy or program that is evidence based. Sessions covered topics such as evidence mapping, systematic reviews, rapid reviews, standard and guideline setting, big data, and policy engagement. In this post, I report on what I learned about evidence networks in the first day’s plenary. For readers interested in learning more about the conference itself, I provide a quick review at the end of the post.

Call for papers: Optimizing the impact of key population programming across the HIV cascade

Key populations – including men who have sex with men, sex workers, transgender people, and people who inject drugs – shoulder a disproportionate burden of HIV. UNAIDS estimates that between 40 and 50 percent of all new HIV infections among adults worldwide occur in these key populations and among their sex partners. Reaching members of these communities with evidence-based interventions that improve their access to and uptake of services across the HIV prevention, care, and treatment cascade is essential to achieving the UNAIDS 90-90-90 goals. In this post, I highlight a new call for papers that will focus on new evidence and data-driven strategies for improving key population programming across the HIV cascade.

Show me the evidence: Cultivating knowledge on governance and food security

I recently participated in a salon on integrating governance and food security work to enhance development outcomes. Convened by the LOCUS coalition and FHI 360, the salon gathered experts in evaluation, governance and food security to review challenges and best practices for generating evidence and knowledge. A post-salon discussion recorded with Annette Brown and Joseph Sany speaks to the gaps in evidence and the need to more accurately measure how governance principles influence food security outcomes.

I came out of the salon conversation thinking that while there was a hunger for evidence, there are still large gaps and significant differences within the literature on things as basic as definitions. That being said, I wanted to dig a bit more into what evidence was actually out there and think about what needs to be done to move this budding evidence base forward. In this post, I highlight three pieces of interesting research that contribute to the evidence base on governance and food security integration, and then propose a few suggestions on how to grow that knowledge base.

Seizing an opportunity to collect user experience data

Contraceptive clinical trials routinely collect vast amounts of data, but what new data can we collect about method acceptability during this research stage? If a method has reached the clinical trial phase, we’d hope formative acceptability research was already conducted to inform its development and to determine if a potential market exists. At this point in the game, few changes can be made to a method based on acceptability findings… so what’s left to learn?

Hypothetically speaking… If we build it, will they come?

Contraceptive product development efforts, to date, have largely been premised on the notion that if we build it, they will come. Primary attention has been paid to making products that work, with the assumption that if women want to avoid pregnancy, they will use them. While the desire to avoid pregnancy is an extremely powerful motivator, it is not enough. For many women, the fear of contraceptive side effects or the challenge associated with accessing and using contraceptives is greater than the burden of another pregnancy.

Some argue that to improve uptake and continuation rates, we need to improve provider counseling around contraceptive side effects and address socio-cultural barriers, such as inequitable gender norms, that prevent women from using contraceptives. These efforts – while essential – are still insufficient. Even the most informed and empowered women can have unintended pregnancies when they don’t have access to acceptable contraceptives – methods that meet their particular needs in their particular life stage and context.

As researchers, how do we shift the model of contraceptive development to focus first on what users want from an ideal contraceptive?

FHI 360’s R&E Search for Evidence quarterly highlights

The Research and Evaluation Strategic Initiative team published 14 posts during the last quarter in our blog, R&E Search for Evidence. For those not familiar with our blog, it features FHI 360 thought leaders who write about research and evaluation methodology and practice, compelling evidence for development programming, and new findings from recent journal publications. We have published 31 original posts to date! In this post, I will summarize our most recent posts and highlight some of my favorites.

Exploring the parameters of “it depends” for estimating the rate of data saturation in qualitative inquiry

In an earlier blog post on sample sizes for qualitative inquiry, we discussed the concept of data saturation – the point at which no new information or themes are observed in the data – and how researchers and evaluators often use it as a guideline when designing a study.

In the same post, we provided empirical data from several methodological studies as a starting point for sample size recommendations. We simultaneously qualified our recommendations with the important observation that each research and evaluation context is unique, and that the speed at which data saturation is reached depends on a number of factors. In this post, we explore a little further this “it depends” qualification by outlining five research parameters that most commonly affect how quickly/slowly data saturation is achieved in qualitative inquiry.
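
To make the idea concrete, here is a toy sketch of one way a saturation rule could be operationalized in code: stop once a set number of consecutive interviews contributes no new theme codes. The function, the three-interview window and the theme codes are all illustrative assumptions on my part, not a standard from the post.

```python
def saturation_point(coded_interviews, window=3):
    """Return the 1-based index of the last interview that contributed a
    new theme code, once `window` consecutive interviews have added
    nothing new. Returns None if saturation is never reached.

    `coded_interviews` is a list of sets of theme codes, one per interview.
    """
    seen, run = set(), 0
    for i, themes in enumerate(coded_interviews, start=1):
        new_codes = themes - seen  # codes not observed in earlier interviews
        seen |= themes
        run = 0 if new_codes else run + 1
        if run == window:
            return i - window
    return None

# Illustrative coded data: each set holds the theme codes from one interview.
interviews = [{"cost", "access"}, {"access", "stigma"}, {"stigma"},
              {"cost"}, {"access", "stigma"}, {"cost", "access"}]
print(saturation_point(interviews))  # → 2
```

Real studies judge saturation on richer criteria than a code counter, but the sketch shows why the “it depends” parameters matter: more heterogeneous samples or rarer themes keep producing new codes, so the counter takes longer to stop moving.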

Paper-based data collection: Moving backwards or expanding the arsenal?

Considerable effort has gone into perfecting the art of tablet data collection, which is the method typically used to collect data for evaluating education programs. The move away from paper has been a welcome shift, as for many research and evaluation professionals, paper conjures images of junior staff buried under boxes of returned questionnaires manually entering data into computers. Indeed, when our team recently began experimenting with paper-based data collection in our education projects, one colleague with decades of experience remarked warily, “It just seems like we’re moving backwards here!”

Improvements in the software, however, allow us to merge new technology with “old school” methods. Digital scanners can now replace manual data entry, powered by software that is able to read completed questionnaires and quickly format responses into a data set for subsequent analysis. Our team has been experimenting with new digital scanning software called Gravic to easily and quickly enter data from paper-based surveys. The Gravic digital scanning tool introduces flexibility and opens a new option for data collection across our projects, but not without some drawbacks. In this post, we make the case for paper surveys combined with the Gravic software and then review the drawbacks.

Four tips for turning big data into big practice

Thanks to Annette Brown’s brilliant post last month, we now know what big data and data analytics are. Fantastic! The next question is: so what? Does having more data, and information from that data, mean more impact?

I’m lucky enough to be part of the Research Utilization Team at FHI 360, where it’s my job to ask (and try to answer) these kinds of questions. The goal of research utilization (also known by many other names) is to use research – and by implication data – to make a real difference by providing the right information to the right people at the right time, in ways they can understand and, with support over time, put to use.

So, without further ado, I present to you four practical tips for turning BIG data into BIG practice.

The science of beating HIV and AIDS

The International AIDS Society (IAS) Conference on HIV Science (IAS 2017) in Paris last week brought together over 6,000 scientists, clinicians, public health practitioners and officials to review the state of the science intended to control and eventually end the HIV/AIDS epidemic. The central thrust of the global effort to control the epidemic is achieving the 90-90-90 targets set by the Joint United Nations Programme on HIV/AIDS (UNAIDS). These targets state that, by 2020, 90% of those living with HIV know their status, 90% of known HIV-positive individuals receive sustained antiretroviral therapy (ART), and 90% of individuals on ART have durable viral suppression. We know that HIV-infected persons with viral suppression, while not cured, do not transmit HIV infection – hence the focus on treatment as prevention.
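
Because each 90% target applies to the output of the previous stage, the targets compound. A quick back-of-the-envelope calculation (the function and parameter names are mine, not UNAIDS terminology):

```python
def cascade_coverage(diagnosed=0.90, on_art=0.90, suppressed=0.90):
    """Fraction of all people living with HIV who are virally suppressed
    when each stage of the 90-90-90 cascade is achieved."""
    return diagnosed * on_art * suppressed

# Fully achieving 90-90-90 still leaves just under three-quarters of all
# people living with HIV virally suppressed.
print(f"{cascade_coverage():.1%}")  # → 72.9%
```

That 72.9% figure is why shortfalls at any single stage of the cascade translate into substantially lower population-level suppression.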

While some countries have made encouraging progress, we are far short of the global 90-90-90 targets, and worse, there were 1.8 million new HIV infections in 2016. We need science to inform the way forward to reach the targets. In this post, I will report on some of the conference presentations around two major themes: 1) generating and applying evidence to optimize the use of current tools, and 2) developing new and improved methods for HIV prevention, care and treatment.

Book Review: Rigor Mortis – Harris says the rigor in science is dead

Richard Harris is a long-time, well-regarded science reporter for National Public Radio, so one has to wonder how he (or the publisher) came up with the title of his new book on the current state of biomedical science: “Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions.” Why is it that so many non-fiction books these days have a short, dramatic title intended to catch your eye on an airport bookrack, followed by a subtitle with an alarming description suited for a checkout-line tabloid? Perhaps I just answered my question. Rigor Mortis is itself a play on words: the medical term refers to the stiffness of a body observed in death; here it indicates that rigor in science is dead. I agree with Harris that there are some fundamental issues in the practice of science that need correction, but it would be unfortunate if Harris’s criticisms are used in support of a retreat from science.

Teasing apart stigma and knowledge as barriers to HIV testing: A study with young Black adults in Durham, North Carolina

What experiences do young Black adults in Durham, North Carolina, have with HIV testing? And what influence does stigma play on those experiences? To answer these questions, my co-authors and I recently published the results of a community-based participatory research (CBPR) study: Relationship between HIV knowledge, HIV-related stigma, and HIV testing among young Black adults in a southeastern city. Our cross-sectional survey examined barriers, facilitators and contributors to HIV testing. This blog post summarizes our findings and provides guidance on HIV prevention strategies.

Mobile-based surveys: Can you hear me now?

The technologies and processes we now have at our disposal to locate individuals and populations, push information to them, and gather information from or about them are being developed and refined at breakneck speed. Tools utilizing mobile technologies alone – voice services, SMS, Interactive Voice Response (IVR), Unstructured Supplementary Service Data (USSD), location-based services, data-based survey apps, chatbots – have introduced new opportunities to reduce the time, cost, uncertainty and risk in gathering data and feedback. As mobile coverage and access have expanded globally, governments, marketing firms, research organizations and international development actors alike have been iterating on approaches for using mobile-based surveys in their initiatives and programs. This post presents key takeaway lessons regarding the methodology, feasibility and suitability of using mobile surveys based on experience from our Mobile Solutions, Technical Assistance and Research project (mSTAR) in Mozambique.

Don’t waste evidence on the youth! Recent data highlights education and employment trends

A recent New York Times article describes a major contemporary challenge facing governments: the world has too many young people. A quarter of the world’s population is young (ages 10-24), and the majority live in developing countries. Policy makers are struggling with high levels of youth unemployment in every country, but a key challenge in developing countries has been a lack of data on education and employment characteristics. To fill this evidence gap, FHI 360’s Education Policy and Data Center (EPDC) recently added country-level Youth Education and Employment profiles to the resources available on our website. In this post, I describe the data and how they were collected, and I give some examples of how these data can be used to inform policy making and program design.

LINKAGES research digest highlights ability of young key populations to access and remain in the HIV care cascade

On a quarterly basis, the LINKAGES project releases a research digest comprising the latest peer-reviewed article abstracts related to HIV and key populations (KPs) – sex workers, men who have sex with men (MSM), transgender people, and people who inject drugs. LINKAGES is PEPFAR and USAID’s largest global project dedicated to using evidence-based approaches for reducing HIV transmission among KPs and improving their enrollment and retention in care. KPs have the highest risk of contracting HIV and often face formidable barriers to accessing prevention, care, and treatment services. The research digest keeps implementers and researchers up to date on the rapidly expanding evidence base pertaining to HIV services for KPs on a global scale. So, what did we learn about young KPs and HIV in the past quarter?

Don’t spin the bottle, please: Challenges in the implementation of probability sampling designs in the field, Part II

In part I, I reviewed how the principles of probability samples apply to household sampling for valid (inferential) statistical analysis and focused on the challenges faced in selecting enumeration areas (EAs). When we want a probability sample of households, multi-stage sampling approaches are usually used, where EAs are selected in the first stage of sampling and households then selected from the sampled EAs in the second stage. (Additional stages may be added if deemed necessary.) In this post, I move on to the selection of households within the sampled EAs. I’ll focus on the sampling principles, challenges, approaches and recommendations.
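
For readers who like to see the mechanics, here is a minimal sketch of the two-stage design described above: enumeration areas drawn with probability proportional to size (PPS), then a simple random sample of households within each sampled EA. The data structure and function names are hypothetical, and the stage-one draw is with replacement for brevity, whereas real surveys typically use systematic PPS without replacement.

```python
import random

def two_stage_sample(eas, n_eas, hh_per_ea, seed=0):
    """Illustrative two-stage household sample.

    `eas` maps an EA name to its list of household IDs (the frame).
    Stage 1 selects EAs with probability proportional to size;
    stage 2 takes a simple random sample of households in each.
    """
    rng = random.Random(seed)
    names = list(eas)
    sizes = [len(eas[name]) for name in names]
    # Stage 1: PPS draw of enumeration areas (with replacement, for brevity).
    chosen = rng.choices(names, weights=sizes, k=n_eas)
    # Stage 2: simple random sample of households within each sampled EA.
    return {ea: rng.sample(eas[ea], min(hh_per_ea, len(eas[ea])))
            for ea in set(chosen)}

# Hypothetical frame: 10 EAs of varying size.
frame = {f"EA{i}": [f"EA{i}-HH{j}" for j in range(20 + 5 * i)]
         for i in range(10)}
sample = two_stage_sample(frame, n_eas=3, hh_per_ea=5)
```

Note the property this design buys us: each household’s overall selection probability is the product of its EA’s stage-one probability and its within-EA probability, which is what makes the valid inferential analysis mentioned above possible.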

Sample size is not king: Challenges in the implementation of probability sampling designs in the field, Part I

So you want a probability sample of households to measure their level of economic vulnerability; or to evaluate the HIV knowledge, attitudes, and risk behaviors of teenagers; or to understand how people use health services such as antenatal care; or to estimate the prevalence of stunting or the prevalence and incidence of HIV. You know that probability samples are needed for valid (inferential) statistical analysis. But you may ask, what does it take to obtain a rigorous probability sample?

Big data and data analytics: I do not think it means what you think it means

With so many players latching on to the idea of big data these days, it is inconceivable that everyone has the same definition in mind. I’ve heard folks describe big data as just being the combination of existing data sets while others don’t consider data to be big until there are hundreds of thousands of observations. I’ve even seen the idea that big data just means the increasing availability of open data. There is a similar challenge with data analytics. On one end of the spectrum, data analytics is just data analysis, but with a cooler name. On the other, data analytics involves big data (really big data) and machine learning. I needed to get a grasp on the various terms and concepts for my work, so I thought I’d share some of what I learned with you. Prepare to learn.

Turning lemons into lemonade, and then drinking it: Rigorous evaluation under challenging conditions

In early 2014, USAID came to the ASPIRES project with a challenge. They requested that our research team design and implement a prospective quantitative evaluation of a USAID-funded intervention in Mozambique. The intervention combined social and economic support for girls at high risk of HIV infection. As a research-heavy USAID project focused on the integration of household economic strengthening and HIV prevention/treatment, ASPIRES was prepared for the task.

The challenges, however, came in the particulars of the evaluation scenario. The research team set its mind to identifying the best possible design to fulfill the request – that is to say, we sought out a recipe for lemonade amidst these somewhat lemony conditions.

Improving the evaluation of quality improvement

The use of quality improvement approaches (known as “QI”) to improve health care service outcomes has spread rapidly in recent years. Although QI has contributed to the achievement of measurably significant results as diverse as decreasing maternal mortality from post-partum hemorrhage to increasing compliance with HIV standards of care, its evidence base remains questioned by researchers. The scientific community understandably wants rigorously designed evaluations, consistency in results measurement, proof of attribution of results to specific interventions, and generalizability of findings so that evaluation can help to elevate QI to the status of a “science”. However, evaluation of QI remains a challenge and not everyone agrees on the appropriate methodology to evaluate QI efforts.

In this post, we begin by reviewing a generic model of quality improvement and explore relevant evaluation questions for QI efforts. We then look at the arguments made by improvers and researchers for evaluation methods. We conclude by presenting an initial evaluation framework for QI developed at a recent international QI conference.

Applying the power of partnership to evaluation of a long-acting contraceptive

A long-acting, highly effective contraceptive method called the levonorgestrel intrauterine system (LNG-IUS) was first approved for use almost thirty years ago. Since then, it has become popular and widely used in high-income countries. However, until recently, the high cost of existing products has limited availability of the method in low-resource settings. Now, new and more affordable LNG-IUS products are becoming available. In 2015, USAID convened a new working group comprising a diverse set of donors, manufacturers, research and service delivery partners to help accelerate introduction of the method. Through this platform, FHI 360 and other members contributed to the development of a global learning agenda – a series of research questions that donors and implementing agencies agreed are priorities for evaluating the potential impact of the LNG-IUS. Working group members then implemented a simple but innovative approach to making limited research dollars go farther in addressing the learning agenda questions.

Photo credit: Garth Cripps/Blue Ventures; used with permission

Research on integrated development: These are a few of my favorite things

You may have recently noticed an uptick in conversations within development circles on this underlying theme: A full realization of the new Sustainable Development Goals (SDGs) requires critical changes in what we do based on understanding the significant linkages between social, economic and environmental sectors. Intuitively, that seems fairly sensible. These linkages suggest that we should be using integrated approaches. But what do we know about the effectiveness of intentionally integrated approaches to development? In this post, I share a few of my very favorite examples of research that provide evidence on the effectiveness of integrated approaches.

Riddle me this: How many interviews (or focus groups) are enough?

The first two posts in this series describe commonly used research sampling strategies and provide some guidance on how to choose from this range of sampling methods. Here we delve further into the sampling world and address sample sizes for qualitative research and evaluation projects. Specifically, we address the often-asked question: How many in-depth interviews/focus groups do I need to conduct for my study?

Why should practitioners publish their research in journals?

My team recently encouraged a colleague who is collecting and analyzing data in one of our projects to submit the analysis and results to a journal. His response was “why?” He argued that his audience is not reading journals, that there are other mechanisms for getting feedback, and that since he is a technical person and not a researcher, he doesn’t need citations. My colleague believes publishing in other ways is equally valued. This post is my attempt to convince him, and you, that practitioners who are conducting research should publish in journals.

Academia takes on global health

The Consortium of Universities for Global Health (CUGH) held their 8th Annual Conference in Washington DC this week. More than 1,700 people from every corner of the globe gathered for three days to explore how the world’s academic institutions can best contribute to improving global health. This year’s meeting was particularly interesting given the contrast between current prospects for financial support for global health and the trajectory of support over the last 15 years. That contrast made several of the key topics discussed at CUGH even more salient to me.

7 takeaways from changes in US education grant programs

I recently had the opportunity to attend a workshop on the U.S. Department of Education’s (ED) new Education Innovation and Research (EIR) grant competition. EIR is the successor to the Investing in Innovation (i3) grant program, which invested approximately $1.4 billion through seven competitions from 2010 to 2016 to develop, validate and scale-up evidence-based programs in education. Like i3, EIR implements a tiered award structure to support programs at various levels of development. This blog post summarizes my seven takeaway points from the workshop. These seven points highlight the main changes in the transition from i3 to EIR.

Evaluation ethics: Getting it right


My personal interest in evaluation ethics goes back to my days at MDRC, where I was responsible for developing survey questions and accompanying protocols to capture domestic violence among mothers who participated in a DHHS-funded welfare-to-work program called JOBS (Job Opportunity and Basic Skills). MDRC was about three and a half years into a five-year evaluation of JOBS when our program officer asked us to include questions specifically about domestic violence in the next wave of our survey. As I recall, no one wanted to touch this – too sensitive, too volatile, too many ethical hoops to jump through – so, as the lowest rung on the ladder at the time, I was given the task. With that, I entered the world of evaluation ethics, where I quickly learned the challenges of getting it right, and the consequences of getting it wrong.

A pathway for sampling success


The credibility and usefulness of our research and evaluation findings are inextricably connected to how we select participants. And, let’s admit it, for many of us, the process of choosing a sampling strategy can be drier than a vermouth-free martini, without any of the fun. So, rather than droning on comparing this or that sampling strategy, we present a relatively simple sampling decision tree.
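The decision tree itself is not reproduced in this excerpt, but the kind of logic such a tree encodes can be sketched in code. The branch criteria below (need to generalize statistically, availability of a sampling frame, hard-to-reach populations) are a hypothetical simplification for illustration, not the authors' actual tree:

```python
def choose_sampling_strategy(generalize: bool,
                             have_frame: bool,
                             hard_to_reach: bool) -> str:
    """Walk a simplified, hypothetical sampling decision tree.

    generalize    -- do findings need to generalize statistically to a population?
    have_frame    -- is a complete list (sampling frame) of the population available?
    hard_to_reach -- is the population hidden or otherwise hard to reach?
    """
    if generalize:
        # Probability sampling: which design depends on whether a frame exists.
        return ("simple random or stratified sampling" if have_frame
                else "cluster or multistage sampling")
    # Non-probability sampling for qualitative or exploratory work.
    return "snowball sampling" if hard_to_reach else "purposive sampling"


# Example: an exploratory study of a hidden population.
print(choose_sampling_strategy(generalize=False, have_frame=False,
                               hard_to_reach=True))  # snowball sampling
```

A real decision tree would weigh more criteria (budget, timeline, subgroup analyses), but the branching structure is the point: each question prunes the menu of strategies down to a defensible choice.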

How many scientific facts are there about science, technology, and innovation for development?


There is a lot of excitement these days about science, technology, and innovation and the potential for these activities to contribute to economic and social development globally. The flurry of activity raises the question, how much of this excitement is supported by scientific facts? To help answer this question, the U.S. Global Development Lab at USAID commissioned a project to create and populate a map of the evidence base for science, technology, innovation, and partnerships (STIP). As part of the project, scoping research was conducted to identify not just where there are evidence clusters and gaps, but also where the demand for new evidence by stakeholders is the greatest. In the recently published scoping paper, my co-authors and I analyze the data in the map together with the information from the stakeholders to recommend priorities for investment in new research on STIP. While there is good evidence out there, new research is necessary for strategies and programming to fully benefit from scientific facts. In this post, I briefly describe the research we conducted, summarize a few of the many findings, and list some of our recommendations.

Gearing up to address attrition: Cohort designs with longitudinal data


As education researchers we know that one of the greatest threats to our work is sample attrition – students dropping out of a study over time. Attrition plays havoc with our carefully designed studies by threatening internal validity and making our results uncertain. To gear up for our evaluation of the Pennsylvania State Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP), we designed a three-pronged approach to handling sample attrition. We describe it here in case it can be helpful to others.

Learning about focus groups from an RCT


In my previous job at 3ie I spent a lot of time telling researchers that a randomized controlled trial (RCT) with a few focus groups thrown in for good measure doesn’t count as a mixed methods impact evaluation. In the course of repeatedly saying that focus groups are not enough, I must have developed an unconscious bias against focus groups, because I was pleasantly surprised by what I learned from a recently published article written by some of my FHI 360 colleagues. In their study, Guest et al. use an RCT to compare the performance of individual interviews against focus groups for collecting certain data.

Mining for development gold: Using survey data for program design


As global health resources become scarcer and the prevalence of international crises increases, it is more important than ever that we design and target development programs to maximize our investments. The complexity of the applicable social, political and physical environments must be taken into consideration. Formative research can help us to understand these environments for program design, but formative research is often skipped due to budgetary, time or safety concerns that constrain the collection of new data. What many overlook is the vast untapped potential of existing household survey data that are rigorously collected, clean and freely available online for countries around the world. By mining existing survey data, we can conduct the formative research necessary to maximize development impact.
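As a minimal illustration of this kind of data mining, the sketch below computes a weighted coverage estimate from a tiny, invented survey extract using pandas. The variable names and values are made up for illustration; a real analysis would use actual survey microdata (for example, DHS or MICS files) and their documented weighting schemes:

```python
import pandas as pd

# Tiny made-up extract standing in for an existing household survey dataset.
survey = pd.DataFrame({
    "region":        ["North", "North", "South", "South", "South"],
    "has_bednet":    [1, 0, 1, 1, 0],       # hypothetical indicator variable
    "survey_weight": [1.2, 0.8, 1.0, 1.1, 0.9],
})

# Weighted bednet coverage by region: the kind of descriptive estimate
# that can substitute for collecting new formative data.
totals = (survey.assign(weighted=survey["has_bednet"] * survey["survey_weight"])
                .groupby("region")[["weighted", "survey_weight"]]
                .sum())
coverage = totals["weighted"] / totals["survey_weight"]
print(coverage.round(2))  # North 0.6, South 0.7
```

Even a simple cross-tabulation like this can flag where coverage is lowest and help target a program before any new data collection begins.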

Should PEPFAR be renamed the “President’s Epidemiologic Plan for AIDS Relief”?


The effective use of data within PEPFAR has played a central role in getting us to the point where we can finally talk about controlling the HIV epidemic and creating an AIDS-free generation. PEPFAR’s transition from an emergency approach to one driven by real-time use of granular, site-level data to guide programmatic investments has contributed to achieving epidemic control. In view of this improved use of data, perhaps the “Emergency” in PEPFAR should now be changed to “Epidemiologic.”

Emojis convey language, why not a sampling lesson too?


To help folks build stronger sampling plans for their research and evaluation projects, we present a series of three sampling posts. This first blog post explains sampling terminology and describes the most common sampling approaches using emoji-themed graphics. Ready to get started? Sit down, hold on to your hats and glasses, and enjoy the sampling ride!

Beyond research: Using science to transform women’s lives


It was a warm spring day in 2011, and eight of my colleagues were helping me celebrate the realization of a long-awaited policy change in Uganda by sipping tepid champagne out of kid-sized paper cups. A colleague asked me, amazed, “How did you guys pull this off? What’s your secret to changing national policy?” I offered up some words about patience, doggedness, and committed teamwork. My somewhat glib response is still true, but since then I’ve thought a lot about what it takes to get a policy changed.

How to find the journal that is just right


Goldilocks had it easy. She only had three chairs, three bowls of porridge and three beds to choose from, and the relevant features were pretty straightforward. It is not so easy to pick the right journal for publishing your research. First, there are hundreds of journals to choose from. Second, there are various features that differentiate them. And finally, some journals, like the three bears, are predatory and should be avoided. So how to find the journal that is just right for your research?

Null results should produce answers, not excuses


I recently served on a Center for Global Development (CGD) panel to discuss a new study of the effects of community-based education on learning outcomes in Afghanistan (Burde, Middleton, and Samii 2016). This exemplary randomized evaluation finds some important positive results. But the authors do one thing in the study that almost all impact evaluation researchers do: where they have null results, they make what I call, for the sake of argument, excuses.