Unpacking PLCs: What evidence do we have about professional learning communities and how can we produce more?

 

Improving teaching quality so that all children in school acquire the skills and knowledge they are meant to has become a key objective internationally. Alongside the growing recognition of the need to improve teaching, there is a realization that the isolation teachers often face is not conducive to collaborative learning and improved teaching practices. In developing countries, many teachers, particularly those teaching multi-grade classrooms in rural areas, feel isolated and disconnected from their peers.

Developing countries have recently turned to alternative models of teacher professional development, such as professional learning communities, or PLCs, to improve teaching quality and promote an approach to teacher development that is both social and contextual. PLCs have become so popular that many education systems in developing countries, as well as education development programs, include a PLC component as part of an overall professional development plan. PLCs help teachers cope with isolation, strengthening solidarity, camaraderie and teachers’ self-confidence as professionals. Although PLCs are in vogue and have recently been implemented in many Latin American and African countries, the concept of the PLC originated in, and has been applied and studied most widely in, developed countries, mainly the United States and the United Kingdom.

In this post, we further define PLCs and review existing evidence on the effect of PLCs. We then outline FHI 360-funded research that we have initiated to study PLCs in three low- and middle-income countries: Equatorial Guinea, Ghana and Nigeria.

A mixed method research design for understanding how ICT tools assist climate change adaptation

 

Research conducted by Ericsson and the Earth Institute on the role of information and communication technology (ICT) in achieving the Sustainable Development Goals concludes that “every goal — from ending poverty and halting climate change to fighting injustice and inequality — can be positively impacted by ICT” (Ericsson, 2015). Projects utilizing ICT for climate change adaptation in developing countries indicate great potential for new technologies, such as mobile phones, and traditional technologies, such as radio broadcasts, to improve data gathering and dissemination of information on adaptation options (Ospina and Heeks, 2010). What is lacking, however, is evidence of the impact of combining multiple technologies with an institutional framework supporting the generation and dissemination of climate and agricultural information. An assessment of the use of multiple ICTs — such as mobile phones, FM radio and community loudspeakers — combined with institutional arrangements to support ICT deployment is needed to better tailor the design of ICT interventions for climate change adaptation in developing countries.

In this post, we present the research design for an ongoing study of the Climate Change Adaptation and ICT (CHAI) project in Uganda. Our study investigates how the current approach of CHAI with its multiple ICT tools, institutional arrangements and local-to-national actors contributes to program impact. We intend for the findings of our study to inform the design of information and communication technologies for development (ICT4D) programs in the future.

New directions in portfolio reviews

 

A funder portfolio review is an evaluation of a set of programs or activities that make up a portfolio, typically defined by a sector or a place. Portfolio reviews take many forms, but the purpose is generally the same: to take stock and reflect on activities or investments in a particular area of programming. They are often requested by funders to answer straightforward questions about what’s working and what isn’t working in order to figure out what to do next. Portfolio reviews typically include a desk review of program documents and often include other data collection such as interviews and focus groups. Because funders use portfolio reviews to make strategic decisions about programmatic directions and resource allocations, innovations in this type of evaluation can bring large benefits. In this post, we briefly introduce two new directions for portfolio reviews.

What can we learn from fidelity of implementation monitoring models within early grade reading programs?

 

Early grade reading programs have become a focus of significant investment in the international development community in recent years. These interventions often include similar components: the development of mother-tongue teaching and learning materials including structured teacher guides and pupil books; teacher professional development including in-service training, ongoing coaching, and professional learning communities; and community engagement around reading. The theory of change posits that, in combination, these components will lead to improved reading skills for pupils. However, this involves a certain leap of faith, because we don’t usually know what teachers do in their classrooms when the door is closed.

We believe the effectiveness of early grade reading programs requires a clear understanding of the extent to which these programs are implemented according to design at the classroom level. In other words, it requires a clear understanding of the fidelity of implementation (FOI) of the programs, to enable identification of gaps in programming and of steps to improve implementation. Currently, FOI monitoring is central to many early grade reading programs around the world, including smaller pilot programs, mid-sized interventions and programs at scale. The data is viewed as highly useful because it is so actionable – in fact, our experience has shown that governments are often very interested in integrating classroom-level FOI data into their own monitoring systems.

In designing our own FHI 360 FOI monitoring systems, we found that there are a number of different models with wide-ranging cost and sustainability implications. In this post, we provide an overview of FOI, describe the FOI monitoring models from two of our own early grade reading projects in Ghana and Nigeria, and outline a research study that aims to see what we can learn from them.

Addressing bias in our systematic review of STEM research

 

Research is a conversation. Researchers attempt to answer a study question, and then other groups of researchers support, contest or expand on those findings. Over the years, this process produces a body of evidence representing the scientific community’s conversation on a given topic. But what did those research teams have to say? What did they determine is the answer to the question? How did they arrive at that answer?

That is where a systematic review enters the conversation. We know, for example, that a significant amount of research exists exploring gender differences in mathematics achievement, but it is unclear how girls’ math identity contributes to or ameliorates this disparity. In response, we are conducting a systematic review to understand how improving girls’ math identity supports their participation, engagement and achievement in math. This review will help us move from a more subjective understanding of the issue to a rigorous and unbiased assessment of the evidence to date.

Developing a systematic review protocol requires thoughtful decision-making about how to reduce various forms of bias at each stage of the process. Below we discuss some of the decisions made to reduce bias in our systematic review exploring girls’ math identity, in the hopes that it will inform others undertaking similar efforts.

Searching for social norms measures related to modern contraceptive use

 

Social norms are a hot topic in development. While it seems intuitive that social norms can play a key role in influencing people’s behavior (in both negative and positive ways), the jury is still out on the best way for programs to address norms. We do know that in order to provide evidence on the power of norms change to improve behavior, we need effective ways to measure and monitor norms.

In 2017, as part of the USAID-funded Passages Project, we set out to systematically assess what empirical evidence exists around the relationship between social norms and use of modern family planning methods. You can read the results of our literature review in a new report published in Studies in Family Planning. In this blog post, we detail the methodology of our review and then provide recommendations for bringing greater consistency and comparability to social norm measures.

5 features of a monitoring, evaluation and learning system geared towards equity in education

 

A great accomplishment arising from the era following the 1990 World Declaration on Education for All in Jomtien, Thailand, is recognition of the gender gap in education, and the mandate for sex-disaggregated reporting from funders and multilateral agencies. Data on the dramatic access and outcome disparities between male and female learners created demand for programming focused on gender inequity. Twenty-seven years after Jomtien, there is a substantial amount of evidence on solutions that build gender equity in education, and on how education systems need to adapt to help girls and boys overcome gender-related institutional barriers.

The Education Equity Research Initiative, a collaborative partnership led by FHI 360 and Save the Children, seeks to create the same dynamic around other aspects of inequity in education – be it poverty, ethnic or racial disadvantage, migration status, or disability. As a community, we create frameworks, modules, and tools, so that little by little the reams of data that get produced include a consistent set of questions around equity of program participation and equity of outcomes.

My previous blog post speaks to the need to be deliberate in building a monitoring, evaluation and learning system that generates the data and analysis that help answer the question: are we improving education equity through our programming and policy? But how do we operationalize equity in education, in the context of education development programming? In Mainstreaming Equity in Education, a paper commissioned by the International Education Funders Group, our collaborative begins by recognizing that an equity-oriented monitoring, evaluation and learning (MEL) system around a program or set of interventions has an essential purpose: not just to produce data on scope and coverage, but to allow for depth of understanding around who benefits and who doesn’t, and to offer actionable information on what to do about it. Here I outline five features that describe such a learning system.

Beyond DALYs: New measures for a new age of development

 

Integrated development is an approach that employs the design and delivery of programs across sectors to produce an amplified, lasting impact on people’s lives. Integrated programs are based on the premise that the interaction between interventions from multiple sectors will generate benefits beyond a stand-alone intervention. As human development interventions take this more holistic approach, funders and program implementers alike recognize the importance of understanding the impact of multi-sector interventions. While we can continue to use sector-specific measures of impact – for instance, Disability-Adjusted Life Years (DALYs) – this creates an apples and oranges problem if one wishes to compare across interventions. This raises the question: can we move towards a single performance metric to assess effectiveness and cost-effectiveness of integrated programs? Here at FHI 360, we are attempting to answer this question by developing a new measurement tool – MIDAS (Measuring the Impact of Development Across Sectors).

In this post, we discuss the need for better measures, describe our conceptual framework, and then present some of the key components of the tool. We conclude by demonstrating how the tool worked when piloted in a current FHI 360 project.

From the Red Book to the Blue Book: Advancing HIV surveillance among key populations

 

The World Health Organization (WHO) recently published new global guidelines on how to do biobehavioral surveys related to HIV infection in a document known as the Blue Book. The Blue Book replaces the previous Red Book of guidelines for such surveys. The updated guidance keeps the global health community abreast of the evolving HIV epidemic, which has led to 37 million people currently living with HIV infection. Biobehavioral surveys provide population-level estimates for the burden of HIV disease and HIV-related risk factors, and they allow estimation of the coverage of prevention and treatment services for key populations that are at increased risk for HIV. Advances in available data and changes in the epidemic rendered the survey tools and guidelines in the Red Book out-of-date. In this blog post, we’re going to highlight how the new Blue Book addresses these critical gaps to deliver a manual better suited to the era of ending AIDS.

How fast is rapid? What I learned about rapid reviews at Global Evidence Summit

 

On the first day of the Global Evidence Summit (GES) in Cape Town, South Africa, in spite of jet lag and conference exhaustion, I eagerly attended the late afternoon session titled, “A panoramic view of rapid reviews: Uses and perspectives from global collaborations and networks.” During my time working at the International Initiative for Impact Evaluation (3ie), I was converted into a true believer in systematic reviews. Before my conversion I knew that literature reviews suffered from bias due to a researcher’s selection of studies to include, but I was less aware of the established methods for conducting unbiased (well, more accurately, less biased) reviews. Systematic reviews were the answer! As I worked with and on systematic reviews, however, I became frustrated to see how much time and how many resources they can take. I was keen to learn more about rapid reviews. This session provided a great overview of rapid review approaches and some of the recent advances.

5 lessons for using youth video diaries for monitoring and evaluation

 

How do you measure the process of change that young people undergo as they engage with a program as part of that program’s monitoring and evaluation (M&E)? The Sharekna project in Tunisia uses youth video diaries to gain insight into the transformations that youth make as they develop resilience against external stresses like violent extremism. In this blog post, I provide five lessons from our Sharekna project to guide future M&E and research activities using or considering the use of youth video diaries.

Exploring the parameters of “it depends” for estimating the rate of data saturation in qualitative inquiry

 

In an earlier blog post on sample sizes for qualitative inquiry, we discussed the concept of data saturation – the point at which no new information or themes are observed in the data – and how researchers and evaluators often use it as a guideline when designing a study.

In the same post, we provided empirical data from several methodological studies as a starting point for sample size recommendations. We simultaneously qualified our recommendations with the important observation that each research and evaluation context is unique, and that the speed at which data saturation is reached depends on a number of factors. In this post, we explore this “it depends” qualification a little further by outlining five research parameters that most commonly affect how quickly or slowly data saturation is reached in qualitative inquiry.
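As a toy illustration of how the saturation concept can be operationalized, here is a minimal Python sketch. The coded interview data and the two-interview stopping window are hypothetical assumptions for demonstration only, not a recommendation for any particular study:

```python
# Hypothetical coded data: the set of themes identified in each
# successive in-depth interview (assumed data, for illustration only)
interviews = [
    {"cost", "access"},       # interview 1
    {"cost", "stigma"},       # interview 2
    {"access", "transport"},  # interview 3
    {"stigma"},               # interview 4
    {"cost", "access"},       # interview 5
    {"transport"},            # interview 6
]

def saturation_point(coded, window=2):
    """Return the 1-based number of the last interview that contributed
    a new theme, once `window` consecutive interviews have passed with
    no new themes; return None if that point is never reached."""
    seen = set()
    run = 0  # consecutive interviews with no new themes
    for i, themes in enumerate(coded, start=1):
        new = themes - seen
        seen |= themes
        run = 0 if new else run + 1
        if run == window:
            return i - window
    return None

print(saturation_point(interviews))  # → 3
```

With this toy data, no new theme appears after interview 3, so a two-interview stopping rule would declare saturation there; a wider window (one of the "it depends" parameters) would require more interviews before stopping.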

Don’t spin the bottle, please: Challenges in the implementation of probability sampling designs in the field, Part II

 

In part I, I reviewed how the principles of probability samples apply to household sampling for valid (inferential) statistical analysis and focused on the challenges faced in selecting enumeration areas (EAs). When we want a probability sample of households, we usually use a multi-stage approach: EAs are selected in the first stage of sampling, and households are then selected from the sampled EAs in the second stage. (Additional stages may be added if deemed necessary.) In this post, I move on to the selection of households within the sampled EAs. I’ll focus on the sampling principles, challenges, approaches and recommendations.
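To make the two-stage logic concrete, here is a minimal Python sketch. The EA frame is hypothetical, stage 2 is a simple random sample of households within each sampled EA, and the sequential size-weighted draw in stage 1 is only a rough stand-in for a proper probability-proportional-to-size selection:

```python
import random

# Hypothetical sampling frame: EA id -> number of households (assumed data)
ea_frame = {"EA-01": 120, "EA-02": 80, "EA-03": 200, "EA-04": 150, "EA-05": 60}

def two_stage_sample(frame, n_eas, hh_per_ea, seed=42):
    rng = random.Random(seed)
    # Stage 1: draw EAs without replacement, weighting each draw by EA size
    # (an approximation of PPS selection, for illustration only)
    pool = list(frame)
    weights = [frame[ea] for ea in pool]
    sampled_eas = []
    for _ in range(n_eas):
        pick = rng.choices(pool, weights=weights, k=1)[0]
        idx = pool.index(pick)
        pool.pop(idx)
        weights.pop(idx)
        sampled_eas.append(pick)
    # Stage 2: simple random sample of households within each sampled EA
    return {
        ea: sorted(rng.sample(range(1, frame[ea] + 1), hh_per_ea))
        for ea in sampled_eas
    }

for ea, households in two_stage_sample(ea_frame, n_eas=2, hh_per_ea=10).items():
    print(ea, households)
```

In a real survey the stage 1 draw would use a systematic PPS procedure and the design weights would be carried into analysis; the sketch only shows how the two stages nest.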

Sample size is not king: Challenges in the implementation of probability sampling designs in the field, Part I

 

So you want a probability sample of households to measure their level of economic vulnerability; or to evaluate the HIV knowledge, attitudes, and risk behaviors of teenagers; or to understand how people use health services such as antenatal care; or to estimate the prevalence of stunting or the prevalence and incidence of HIV. You know that probability samples are needed for valid (inferential) statistical analysis. But you may ask, what does it take to obtain a rigorous probability sample?

Turning lemons into lemonade, and then drinking it: Rigorous evaluation under challenging conditions

 

In early 2014, USAID came to the ASPIRES project with a challenge. They requested that our research team design and implement a prospective quantitative evaluation of a USAID-funded program in Mozambique. The program centered on a combined social and economic intervention for girls at high risk of HIV infection. As a research-heavy USAID project focused on the integration of household economic strengthening and HIV prevention/treatment, ASPIRES was prepared for the task.

The challenges, however, came in the particulars of the evaluation scenario. The research team set its mind to identifying the best possible design to fulfill the request. That is to say, we sought a recipe for lemonade amidst these somewhat lemony conditions.

Riddle me this: How many interviews (or focus groups) are enough?

 

The first two posts in this series describe commonly used research sampling strategies and provide some guidance on how to choose from this range of sampling methods. Here we delve further into the sampling world and address sample sizes for qualitative research and evaluation projects. Specifically, we address the often-asked question: How many in-depth interviews/focus groups do I need to conduct for my study?

A pathway for sampling success

 

The credibility and usefulness of our research and evaluation findings are inextricably connected to how we select participants. And, let’s admit it, for many of us, the process of choosing a sampling strategy can be drier than a vermouth-free martini, without any of the fun. So, rather than droning on comparing this or that sampling strategy, we present a relatively simple sampling decision tree.

Gearing up to address attrition: Cohort designs with longitudinal data

 

As education researchers we know that one of the greatest threats to our work is sample attrition – students dropping out of a study over time. Attrition plays havoc with our carefully designed studies by threatening internal validity and making our results uncertain. To gear up for our evaluation of the Pennsylvania State Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP), we designed a three-pronged approach to handling sample attrition. We describe it here in case it can be helpful to others.

Learning about focus groups from an RCT

 

In my previous job at 3ie I spent a lot of time telling researchers that a randomized controlled trial (RCT) with a few focus groups thrown in for good measure doesn’t count as a mixed methods impact evaluation. In the course of repeatedly saying that focus groups are not enough, I must have developed an unconscious bias against focus groups, because I was pleasantly surprised by what I learned from a recently published article written by some of my FHI 360 colleagues. In their study, Guest et al. use an RCT to compare the performance of individual interviews against focus groups for collecting certain data.

Emojis convey language, why not a sampling lesson too?

 

To help folks build stronger sampling plans for their research and evaluation projects, we present a series of three sampling posts. This first blog post explains sampling terminology and describes the most common sampling approaches using emoji-themed graphics. Ready to get started? Sit down, hold on to your hats and glasses, and enjoy the sampling ride!