What can we learn from fidelity of implementation monitoring models within early grade reading programs?

Early grade reading programs have become a focus of significant investment in the international development community in recent years. These interventions often include similar components: the development of mother-tongue teaching and learning materials including structured teacher guides and pupil books; teacher professional development including in-service training, ongoing coaching, and professional learning communities; and community engagement around reading. The theory of change posits that, in combination, these components will lead to improved reading skills for pupils. However, this involves a certain leap of faith, because we don’t usually know what teachers do in their classrooms when the door is closed.

We believe the effectiveness of early grade reading programs requires a clear understanding of the extent to which these programs are implemented according to design at the classroom level. In other words, it requires a clear understanding of the fidelity of implementation (FOI) of the programs, to enable identification of gaps in programming and of steps to improve implementation. Currently, FOI monitoring is central to many early grade reading programs around the world, including smaller pilot programs, mid-sized interventions and programs at scale. The data is viewed as highly useful because it is so actionable – in fact, our experience has shown that governments are often very interested in integrating classroom-level FOI data into their own monitoring systems.

From designing our own FHI 360 FOI monitoring systems, it became clear that there are a number of different models with wide-ranging cost and sustainability implications. In this post, we provide an overview of FOI, describe the FOI monitoring models from two of our own early grade reading projects in Ghana and Nigeria, and outline a research study that aims to see what we could learn from them.

What is fidelity of implementation?
Fidelity of implementation (FOI) refers to “the degree to which an intervention or program is delivered as intended” (Carroll et al., 2007). Other terms appear in the literature, such as treatment integrity, treatment fidelity or intervention fidelity, but they all refer to the same conceptual space.

Monitoring FOI is key to understanding intervention effectiveness. Attainment of an intervention’s intended outcomes may depend on the degree to which the intervention was implemented as designed. In that sense, FOI can act as a moderator of the relationship between the intervention design and the planned results/outcomes. Figure 1 below highlights these relationships. If a program does not demonstrate impact, an FOI analysis can determine whether the lack of impact is due to poor implementation of the program design or to a flawed program design. Carroll et al. (2007) note that evaluating FOI provides clarity on whether positive outcomes that result from an intervention can be further enhanced by improving implementation fidelity.

Figure 1: Fidelity of implementation as a moderator between project design and outcomes

Monitoring fidelity during the course of a program is also important for adaptive management. If the intervention or part of the intervention is not being delivered as designed, the project team can take corrective measures to improve FOI. In addition, simply monitoring fidelity during the implementation of the program has the potential to improve fidelity itself. When implementers know they are being monitored, they may be more likely to follow implementation procedures.

Four elements of implementation fidelity that can be measured

The literature highlights four main elements of implementation fidelity that can be measured: 1) adherence, 2) duration/exposure, 3) quality of delivery and 4) participant responsiveness (Breitenstein et al., 2010; Carroll et al., 2007; Schulte et al., 2009; Mihalic, 2004). Adherence and duration/exposure are core elements of FOI. Quality of delivery and participant responsiveness, on the other hand, can be considered add-on elements recommended by the literature for a comprehensive FOI monitoring system. These elements are also often measured in the context of process evaluations.

With a greater focus on intervention effectiveness comes an increased need to develop strategies for monitoring fidelity of implementation. In the context of early grade reading programs, monitoring FOI most often involves tracking teacher behavior against protocols designed to improve instructional effectiveness. In other words, FOI monitoring systems track teachers’ adherence to structured instructional materials developed by the project (such as scripted lesson plans) and, to some extent, the dosage of these materials (the frequency of exposure students receive).

Fidelity of implementation monitoring models

Measurement of the four key elements described above is built into the FOI monitoring models of two FHI 360-implemented early grade reading projects: the UNICEF-funded Reading and Numeracy Activity (RANA) in Nigeria and the USAID Partnership for Education’s Learning Activity (Learning) in Ghana. For both projects, we monitor 1) FOI of the teaching and learning materials designed by the project, using a classroom observation tool, and 2) FOI of the continuous professional development model designed by the project, through teacher and head teacher interviews.

Below we provide a comparison of the fidelity of implementation monitoring models of the two projects.

Nigeria’s RANA FOI monitoring model

The RANA program hires faculty members from local colleges of education to serve as master trainers. Master trainers train teachers, provide ongoing coaching to teachers and monitor FOI. Trainers are selected based on their perceived capacity to provide high-quality teacher professional development. As a sustainability strategy, the program also trains government school support officers alongside trainers, with the plan that the officers can continue coaching and monitoring once program funding ends. These trainers and officers independently visit all project schools once per month to carry out a school support visit. During these visits, trainers and officers observe a Grade 1, 2 or 3 class using the FOI classroom observation tool.

Immediately after the lesson observation, trainers and officers hold a coaching meeting with the teacher, providing direct feedback on how the teacher can improve their teaching and use of the materials based on what was observed. They also ask teachers about the support they have received from the lead teacher who serves as the school-based coach, and about their participation in weekly professional learning communities. The trainers and officers later upload the data to FHI 360’s server.

For each verified school support visit, trainers are paid for the coaching visit and provided a travel stipend. Since the officers are already paid by the government to provide coaching, the program pays them only a travel stipend for each verified visit. As a result, the total payment to officers per school is one-third of that provided to trainers.

Ghana’s Learning FOI monitoring model

For Learning, government-hired circuit supervisors visit all project schools twice per school term to observe Kindergarten 2 and Grade 1 classes using the FOI classroom observation tool. At each school visit, circuit supervisors also interview the observed teachers and the head teacher on the extent to which teachers are receiving support through school-based coaching and professional learning communities.

Learning does not use circuit supervisors as teacher coaches. Instead, head teachers or school-based curriculum leads serve as the teacher coaches. Circuit supervisors interact with head teachers and curriculum leads, but not with teachers. They also interact with project-hired district teacher support team (DTST) members who provide support to head teachers and curriculum leads in coaching. The DTST members also visit schools twice per school term and use their own lesson observation and interview tools to monitor the project. Circuit supervisors receive a travel stipend for every school visit.

Research project on fidelity of implementation

The similarities and differences across these two FOI monitoring approaches allow us to conduct qualitative research on stakeholder perceptions of what should be monitored through an FOI system, the advantages and disadvantages of different types of enumerators and different levels of institutionalization, and data utilization at different levels. Because we have large FOI data sets at our disposal from these two projects, we will also analyze differences in data quality across enumerator types in order to better understand the pros and cons of our approaches. Based on what we learn, we will develop a set of additional research questions and recommended practices for monitoring the FOI of early grade reading programs. In future posts, we will discuss the research methodology for this study and share our findings and recommendations.

Photo credit: Kunle Lawal/FHI 360
