Unpacking PLCs: What evidence do we have about professional learning communities and how can we produce more?

 

Improving teaching quality to ensure that all children in school acquire the skills and knowledge expected of them has become a key objective internationally. Alongside the growing recognition of the need to improve teaching, there is a realization that the isolation teachers often face is not conducive to collaborative learning and improved teaching practices. In developing countries, many teachers, particularly those in rural areas teaching in multi-grade classrooms, often feel isolated and disconnected from their peers.

Many developing countries have recently turned to alternative models of teacher professional development, such as professional learning communities, or PLCs, to improve teaching quality and promote an approach to teacher development that is both social and contextual. PLCs have become so popular that many education systems in developing countries, as well as education development programs, include a PLC component as part of an overall professional development plan. PLCs help teachers cope with isolation, strengthening solidarity, camaraderie and teachers’ self-confidence as professionals. Although PLCs are in vogue and have recently been implemented in many Latin American and African countries, the concept originated in, and has been applied and studied most widely in, developed countries, mainly the United States and the United Kingdom.

In this post, we further define PLCs and review existing evidence on the effect of PLCs. We then outline FHI 360-funded research that we have initiated to study PLCs in three low- and middle-income countries: Equatorial Guinea, Ghana and Nigeria.

What can we learn from fidelity of implementation monitoring models within early grade reading programs?

 

Early grade reading programs have become a focus of significant investment in the international development community in recent years. These interventions often include similar components: the development of mother-tongue teaching and learning materials, including structured teacher guides and pupil books; teacher professional development, including in-service training, ongoing coaching, and professional learning communities; and community engagement around reading. The theory of change posits that, in combination, these components will lead to improved reading skills for pupils. However, this involves a certain leap of faith, because we don’t usually know what teachers do in their classrooms when the door is closed.

We believe that improving the effectiveness of early grade reading programs requires a clear understanding of the extent to which these programs are implemented as designed at the classroom level. In other words, it requires a clear understanding of the fidelity of implementation (FOI) of the programs, which makes it possible to identify gaps in programming and steps to improve implementation. Currently, FOI monitoring is central to many early grade reading programs around the world, including smaller pilot programs, mid-sized interventions and programs at scale. The data are viewed as highly useful because they are so actionable; in fact, our experience has shown that governments are often very interested in integrating classroom-level FOI data into their own monitoring systems.
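
To make “actionable” concrete, here is a minimal Python sketch of how classroom observation records might be rolled up into component-level FOI rates. The record structure and component names are hypothetical illustrations, not the Ghana or Nigeria monitoring systems described below.

```python
# Minimal sketch: rolling up hypothetical classroom observation records into
# fidelity-of-implementation (FOI) rates per program component. Field names
# are illustrative only, not drawn from any actual FHI 360 system.
from collections import defaultdict

observations = [
    {"school": "A", "teacher_guide_used": True,  "pupil_books_present": True},
    {"school": "A", "teacher_guide_used": False, "pupil_books_present": True},
    {"school": "B", "teacher_guide_used": True,  "pupil_books_present": False},
]

def foi_rates(records):
    """Return the share of observed lessons implementing each component."""
    counts = defaultdict(int)
    for record in records:
        for component, implemented in record.items():
            if component == "school":  # skip identifiers, keep indicators
                continue
            counts[component] += int(implemented)
    return {component: hits / len(records) for component, hits in counts.items()}

print(foi_rates(observations))
# {'teacher_guide_used': 0.666..., 'pupil_books_present': 0.666...}
```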

In designing our own FHI 360 FOI monitoring systems, we found that there are a number of different models with wide-ranging cost and sustainability implications. In this post, we provide an overview of FOI, describe the FOI monitoring models from two of our own early grade reading projects in Ghana and Nigeria, and outline a research study that aims to find out what we can learn from them.

Addressing bias in our systematic review of STEM research

 

Research is a conversation. Researchers attempt to answer a study question, and then other groups of researchers support, contest or expand on those findings. Over the years, this process produces a body of evidence representing the scientific community’s conversation on a given topic. But what did those research teams have to say? What did they determine is the answer to the question? How did they arrive at that answer?

That is where a systematic review enters the conversation. We know, for example, that a significant amount of research exists exploring gender differences in mathematics achievement, but it is unclear how girls’ math identity contributes to or ameliorates this disparity. In response, we are conducting a systematic review to understand how improving girls’ math identity supports their participation, engagement and achievement in math. This review will help us move from a more subjective understanding of the issue to a rigorous and unbiased assessment of the evidence to date.

Developing a systematic review protocol requires thoughtful decision-making about how to reduce various forms of bias at each stage of the process. Below we discuss some of the decisions made to reduce bias in our systematic review exploring girls’ math identity, in the hopes that it will inform others undertaking similar efforts.

Gearing up to enhance college readiness for underserved students: Insights from a capacity building workshop

 

I was fortunate to attend the National Council for Community and Education Partnerships (NCCEP)/Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) 2018 Capacity Building Workshop in Las Vegas, Nevada. The three-day workshop brought together GEAR UP community partners, school and district administrators, and researchers from across the country for professional learning.

Each day of the workshop began with a keynote address. Linda Cliatt-Wayman, principal and renowned school-turnaround expert, shared her motivational story of transforming Philadelphia schools on the Persistently Dangerous List. Greg Simon, President of the Biden Cancer Initiative, stressed the importance of communication, shared data, and coming together to exchange practices and strategies. Natalie Spiro, Founder and President of Drum Café West Coast, led the audience through an interactive drumming performance illustrating GEAR UP’s collective voice and unity of purpose. Even an Elvis impersonator came to liven things up!

But what I found most informative and inspiring were the seminars after each keynote. It was during these sessions that peers from various disciplines across the country could discuss their successes and difficulties, share strategies and practices, and, most importantly, offer alternative perspectives on common issues and topics. In this post, I share some of my personal insights from those discussions.

Strengthening capacity to track, store and use data effectively at Catholic and independent schools

 

I’ll be the first to confess that my research most often focuses on the public school system, with the inherent but weak assumption that strong practices and strategies in public schools can loosely apply to non-public schools (i.e., Catholic and independent schools). But as an education researcher, I’m intimately aware that settings and populations matter, and just because an intervention works in a public school doesn’t mean it will work in a nearby independent school.

So, I was pleasantly surprised when a few of my colleagues and I recently had the opportunity to work with a network of Catholic and independent schools in Minnesota to 1) support the integration of assessment data into ongoing school improvement processes, and 2) promote best practices in data collection, assessment and use. I was excited to be able to directly apply my experience and learn about their data issues and concerns. Below I highlight some of the lessons I learned in working with the schools and across the network, in the hopes that they will assist other Catholic and independent schools as they seek to use data to inform their practice.

Investigating STEM and the importance of girls’ math identity

 

Despite significant progress in closing the gender gap in science, technology, engineering and math (also known as STEM), inequities in girls’ and women’s participation and persistence in math and across STEM education and careers remain. According to the U.S. Census Bureau, as of 2011 women made up nearly half of the U.S. workforce but just 26 percent of STEM workers. Within STEM, the largest number of new jobs is in the computer science and math fields; however, the gender gap in these careers has widened rather than narrowed, with female representation declining since 2000.

While much of the current STEM research has focused on the barriers and reasons why there aren’t more girls or women in STEM-related fields, here we argue that future research must focus on how to design and develop effective approaches, practices, situations, tools, and materials to foster girls’ interest and engagement.

5 features of a monitoring, evaluation and learning system geared towards equity in education

 

A great accomplishment arising from the era following the 1990 World Declaration on Education for All in Jomtien, Thailand, is recognition of the gender gap in education, and the mandate for sex-disaggregated reporting from funders and multilateral agencies. Data on the dramatic access and outcome disparities between male and female learners created demand for programming focused on gender inequity. Twenty-seven years after Jomtien, there is a substantial amount of evidence on solutions that build gender equity in education, and on how education systems need to adapt to help girls and boys overcome gender-related institutional barriers.

The Education Equity Research Initiative, a collaborative partnership led by FHI 360 and Save the Children, seeks to create the same dynamic around other aspects of inequity in education – be it poverty, ethnic or racial disadvantage, migration status, or disability. As a community, we create frameworks, modules, and tools, so that little by little the reams of data that get produced include a consistent set of questions around equity of program participation and equity of outcomes.

My previous blog post speaks to the need to be deliberate in building a monitoring, evaluation and learning system that generates the data and analysis that help answer the question: are we improving education equity through our programming and policy? But how do we operationalize equity in education, in the context of education development programming? In Mainstreaming Equity in Education, a paper commissioned by the International Education Funders Group, our collaborative begins by recognizing that the essential purpose of an equity-oriented monitoring, evaluation and learning (MEL) system around a program or set of interventions is not just to produce data on scope and coverage, but to allow a deeper understanding of who benefits and who does not, and to offer actionable information on what to do about it. Here I outline five features that describe such a learning system.

To achieve equity in education, we need the right data

 

As we work to realize the Sustainable Development Goals (SDGs) related to education, it is the responsibility of every funding, implementing and research organization internationally to ask questions about its own contribution to building equity in education. While a great amount of data gets produced in the course of education projects, only a fraction provides the detail needed to assess intervention impact across different equity dimensions. At the technical and implementation level, organizations need to capture and use the necessary evidence to understand and respond to inequity in education provision and outcomes.

To do that, we need to be deliberate in building monitoring, evaluation and learning systems that generate the data and analysis that help answer the question: are we improving education equity through our programming and policy? Disaggregated data are the first step to understanding who is left behind in obtaining a quality education for successful and productive adulthood. My recent paper, Mainstreaming Equity in Education, outlines key issues and challenges that need to be addressed around equity in education, and provides a way forward for mainstreaming equity-oriented programming and data analysis. In this blog post, I show how disaggregated data can make a difference to understanding impacts. I then provide evidence that, unfortunately, such disaggregated data are rarely collected.
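
As a simple illustration of the point, the short Python sketch below uses made-up numbers to show how an overall average can mask exactly the kind of gap that disaggregation reveals:

```python
# Made-up numbers: the overall mean looks fine, but disaggregating by one
# equity dimension (here, gender) reveals a gap the aggregate hides.
scores = [
    {"group": "girls", "reading_score": 45},
    {"group": "girls", "reading_score": 50},
    {"group": "boys",  "reading_score": 70},
    {"group": "boys",  "reading_score": 75},
]

overall = sum(s["reading_score"] for s in scores) / len(scores)
print(f"Overall mean: {overall:.1f}")  # 60.0

for group in ["girls", "boys"]:
    subset = [s["reading_score"] for s in scores if s["group"] == group]
    print(f"{group}: {sum(subset) / len(subset):.1f}")
# girls: 47.5, boys: 72.5, a 25-point gap invisible in the overall mean
```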

Paper-based data collection: Moving backwards or expanding the arsenal?

 

Considerable effort has gone into perfecting the art of tablet-based data collection, now the typical method for collecting data to evaluate education programs. The move away from paper has been a welcome shift: for many research and evaluation professionals, paper conjures images of junior staff buried under boxes of returned questionnaires, manually entering data into computers. Indeed, when our team recently began experimenting with paper-based data collection in our education projects, one colleague with decades of experience remarked warily, “It just seems like we’re moving backwards here!”

Improvements in software, however, allow us to merge new technology with “old school” methods. Digital scanners can now replace manual data entry, powered by software that reads completed questionnaires and quickly formats responses into a data set for subsequent analysis. Our team has been experimenting with new digital scanning software called Gravic to enter data from paper-based surveys easily and quickly. The Gravic digital scanning tool introduces flexibility and opens a new option for data collection across our projects, but not without some drawbacks. In this post, we make the case for paper surveys combined with the Gravic software and then review the drawbacks.
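
We can’t reproduce the Gravic interface here, but as a rough sketch of the final step in any scanning workflow, assume the software exports recognized responses as a CSV; turning that export into an analysis-ready dataset might then look like the Python below (the file name and column handling are hypothetical, not Gravic’s actual output format):

```python
# Hypothetical last step of a scanning workflow: load the CSV a digital
# scanning tool exports and flag incomplete questionnaires. The file name
# and column layout are illustrative only.
import csv

def load_survey(path):
    """Read scanned survey responses, coercing blank cells to None."""
    with open(path, newline="", encoding="utf-8") as f:
        return [
            {key: (value if value != "" else None) for key, value in row.items()}
            for row in csv.DictReader(f)
        ]

responses = load_survey("scanned_responses.csv")  # hypothetical export file
complete = [r for r in responses if None not in r.values()]
print(f"{len(complete)} of {len(responses)} questionnaires fully read")
```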

Don’t waste evidence on the youth! Recent data highlights education and employment trends

 

A recent New York Times article describes a major contemporary challenge facing governments: the world has too many young people. A quarter of the world’s population is young (ages 10-24), and the majority live in developing countries. Policy makers are struggling with high levels of youth unemployment in every country, but a key challenge in developing countries has been a lack of data on education and employment characteristics. To fill this evidence gap, FHI 360’s Education Policy and Data Center (EPDC) recently added country-level Youth Education and Employment profiles to the resources available on our website. In this post, I describe the data and how they were collected, and I give some examples of how these data can be used to inform policy making and program design.

7 takeaways from changes in US education grant programs

 

I recently had the opportunity to attend a workshop on the U.S. Department of Education’s (ED) new Education Innovation and Research (EIR) grant competition. EIR is the successor to the Investing in Innovation (i3) grant program, which invested approximately $1.4 billion through seven competitions from 2010 to 2016 to develop, validate and scale up evidence-based programs in education. Like i3, EIR implements a tiered award structure to support programs at various levels of development. This blog post summarizes my seven takeaways from the workshop, highlighting the main changes in the transition from i3 to EIR.

Gearing up to address attrition: Cohort designs with longitudinal data

 

As education researchers, we know that one of the greatest threats to our work is sample attrition: students dropping out of a study over time. Attrition plays havoc with our carefully designed studies by threatening internal validity and making our results uncertain. To gear up for our evaluation of the Pennsylvania State Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP), we designed a three-pronged approach to handling sample attrition. We describe it here in case it can be helpful to others.
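
The approach itself is detailed in the full post; as a generic illustration of why attrition worries us, here is a minimal Python sketch, with made-up data, that checks whether students who leave a sample differ at baseline from those who stay, the hallmark of non-random attrition:

```python
# Generic illustration, not the GEAR UP design itself: compare baseline
# scores of students retained in the sample against those who dropped out.
# A large gap suggests attrition is non-random. Data below are made up.
students = [
    {"baseline_score": 80, "retained": True},
    {"baseline_score": 75, "retained": True},
    {"baseline_score": 55, "retained": False},
    {"baseline_score": 60, "retained": False},
    {"baseline_score": 78, "retained": True},
]

def mean_baseline(records, retained):
    """Mean baseline score for retained (True) or attrited (False) students."""
    values = [r["baseline_score"] for r in records if r["retained"] == retained]
    return sum(values) / len(values)

stayers = mean_baseline(students, retained=True)   # 77.7
leavers = mean_baseline(students, retained=False)  # 57.5
print(f"Stayers: {stayers:.1f}, leavers: {leavers:.1f}")
```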

Null results should produce answers, not excuses

 

I recently served on a Center for Global Development (CGD) panel to discuss a new study of the effects of community-based education on learning outcomes in Afghanistan (Burde, Middleton, and Samii 2016). This exemplary randomized evaluation finds some important positive results. But the authors do one thing in the study that almost all impact evaluation researchers do: where they have null results, they make what I call, for the sake of argument, excuses.