Implementation research: The unambiguous cornerstone of implementation science


By: Theresa Hoke

Three years ago, I wrote that there was no clear consensus on the definition of implementation science in global health. Today we are no closer to agreement. In 2015, Thomas Odeny and colleagues published a review that showed 73 unique definitions, and the Implementing Best Practices Initiative conducted a survey that showed no consensus on a definition across 27 international organizations. To confuse matters more, the term is used interchangeably with implementation research; operations research; monitoring, evaluation and learning; real-world research; and other non-research approaches focused on refining implementation strategies.

Implementation research is the heart and soul of implementation science.
With implementation science defined so broadly and no consensus on a definition in sight, what is the key to making sense of it all? In my work, I focus on the most well-defined, understandable sub-domain of implementation science: implementation research. I argue that it’s the heart and soul of implementation science. In this blog post, I’ll define implementation research and outline why it makes for such a useful concept.

Definition of implementation research

Implementation research is a sub-domain of implementation science with great consistency and clarity. With the launch of the Implementation Science journal in 2006, implementation research was defined as “the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and, hence, to improve the quality and effectiveness of health services and care.”

Guidance offered by WHO’s Alliance for Health Policy and Systems Research (AHPSR) offers a similar description of implementation research: “the scientific study of the processes used in the implementation of initiatives as well as the contextual factors that affect these processes.” AHPSR suggests that a major purpose of implementation research is “to support and promote the successful application of interventions that have been demonstrated to be effective.”

Other authorities – such as research sponsors at the U.S. National Institutes of Health and the experts who literally wrote the book – all offer consistent guidance about the distinguishing characteristics of implementation research. To consider the true meaning of implementation research, let’s consider the features that make it a distinct research discipline.

Distinguishing features of implementation research


The anticipated practical application of implementation research results should be clear, beginning with study design.

Implementation research produces evidence to guide decision making about incorporating evidence-based interventions into service delivery at broad scale under routine conditions. The anticipated practical application of implementation research results should be clear, beginning with study design. As described by David Peters and colleagues, strong implementation research questions are formulated in consultation with the decision makers intended to use the results, to ensure the study responds to priority evidence needs.

Theoretical models

Strong implementation research studies are grounded in a conceptual framework that guides lines of inquiry and proposes a theory of change.

Like most rigorous investigations, strong implementation research studies are grounded in a conceptual framework that guides lines of inquiry and proposes a theory of change. Since the advent of implementation research, investigators have advanced a host of theoretical models specific to the discipline. In 2009, Laura Damschroder and colleagues published a consolidated framework intended to harmonize the terminology used across a host of theoretical models and incorporate all essential elements (known as theoretical constructs) in a single framework. The Consolidated Framework for Implementation Research (CFIR) has been widely applied, so much so that last year Kirk and colleagues published a systematic review of CFIR application in implementation research. That review is an excellent resource for learning more about how theory is applied in implementation research.


Implementation research is typically conducted in real-world service delivery contexts, with actual program actors, like patients and health care providers, serving as study participants. Acknowledging the heterogeneity of the real world, context is often a focus of inquiry in implementation research: investigators examine how interventions are adapted to specific contexts and how context influences intervention effectiveness. In their 2014 paper, Edwards and Barker share an experience conducting research on Prevention of Mother-to-Child Transmission (PMTCT) of HIV services in South Africa. These authors illustrate the power of implementation research to cope with the challenges of testing interventions in unpredictable settings and to generate findings predicting how an innovation will perform in similarly trying settings.

Study design and methods

Implementation research study designs depend on the purpose of the investigation.

Implementation research study designs depend on the purpose of the investigation, roughly falling into three main categories: formative research to shape implementation strategies; process evaluation to examine the complexities of implementation strategies; and trials to test implemented interventions. Studies commonly apply mixed methods, i.e., a combination of quantitative and qualitative techniques. Because investigations consider a host of factors potentially influencing intervention success, studies are often multi-disciplinary, combining methods from fields like health services research, anthropology, psychology, engineering, education, clinical science, and economics. AHPSR’s practical guidance provides an excellent overview of methods used in implementation research. The authors assert that “the question is king” – implying that investigators select a combination of research methods based on what decision makers want to know. This contrasts with other types of research, where the questions that can be answered are limited by the type of study undertaken.


Implementation research goes beyond examining effectiveness by investigating the active ingredients of interventions and implementation strategies.
One of implementation research’s defining features is the way it goes beyond examining effectiveness (Did the intervention work?) by investigating the active ingredients of interventions and implementation strategies. Both are examined in terms of implementation outcomes including acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, penetration, and sustainability. Enola Proctor and colleagues offer a clear description of these terms. The authors remind us that implementation research is a relatively new field; intentional efforts are still needed to refine definitions and techniques for measuring implementation outcomes.

Examples of implementation research

To illustrate the above features in practice, I offer two research examples.

First, the PRIME study (Promoting Retention among Infants and Mothers Effectively) is an example of an implementation research study placing important focus on implementation outcomes. Using mixed methods to examine a range of implementation outcomes, this 3-arm cluster-randomized intervention study was implemented in 30 primary health facilities in Malawi (10 facilities per arm) to test two new strategies for retaining HIV-positive mother-infant pairs in care. Complementing the cluster-randomized controlled trial, a process evaluation examined the integrity of the intervention.

The main trial results indicated that neither of the tested innovations improved maternal-infant retention in rural Malawi. Results from the process evaluation revealed several ways that actual intervention implementation deviated from the design, along with the factors explaining these problems with intervention fidelity. This examination of the implementation process helped the investigators derive essential lessons about the need for designing service delivery innovations in consultation with the providers expected to implement them, with careful consideration of the resources and limitations of the implementation context.

The STRETCH trial embodies several essential elements of strong implementation research.
Second, the STRETCH trial (Streamlining Tasks and Roles to Expand Treatment and Care for HIV) is one of my favorite examples of global health implementation research. This mixed methods study, conducted in Free State, South Africa, tested an intervention for incorporating HIV care and treatment services into routine primary care by shifting responsibilities from physicians to nurses. The investigation embodied several essential elements of strong implementation research.

First, the investigators designed the intervention by incorporating components proven to be effective through prior research and adapting them to a new context in consultation with health service managers. Second, the trial employed a pragmatic cluster-randomized design, implemented in 31 public sector clinics, relying on the actions of real-world health personnel serving real-world patients. Further, the study included a multi-component qualitative process evaluation to document the implementation processes, assess the fidelity of intervention implementation, explore how the intervention interacted with context, and identify critical elements influencing intervention success.

Finally, the study was conducted with the aim of providing results to the provincial health department to guide decisions about whether nurses should be authorized to manage HIV patients. With the intervention proving to be effective, the trial’s diverse set of results on implementation outcomes offered essential guidance supporting roll-out of South Africa’s national policy change authorizing primary care nurses to initiate antiretroviral therapy and to re-prescribe.

The above overview illustrates that implementation research – while still in its youth – is an established discipline with distinct purposes, approaches, theories and techniques. Those of us aiming to conduct implementation research should continuously strive to deepen our understanding of the field’s scope and approaches. The global health field will be able to take full advantage of this powerful discipline to resolve implementation challenges when there’s more of a shared understanding of what implementation research offers.

Photo credit: FHI 360


5 Responses to "Implementation research: The unambiguous cornerstone of implementation science"
  1. Alejandro Paredes says:


    Great article, congrats. I like the way you describe implementation research as the soul of implementation science. I have a few questions:

    1. Do you know a theory of change that has a systemic approach?
    2. You state that strong implementation research studies are grounded in a conceptual framework. What if the conceptual frameworks are wrong or mislead you?
    3. What is the state of the art in theories of change?


    Alejandro Paredes

    • Theresa Hoke says:

      Alejandro, thanks for your interest in this topic!

      Concerning your questions on theories of change, one of the best resources I’ve found was prepared by Isabel Vogel for the UK Department for International Development, “Review of the use of ‘Theory of Change’ in international development.” That’s the first place I’d look for examples of ToC with a systemic approach.

      Another helpful resource is Erica Breuer’s “Using theory of change to design and evaluate public health interventions: a systematic review,” Implementation Science (2016) 11:63.

      As for choosing a theoretical model that leads investigators down the wrong path, you’re raising an important point! Birken and colleagues recently published results of a survey conducted with self-identified implementation scientists. (Implementation Science (2017) 12:124) The survey explored criteria used in selecting implementation science theories and frameworks. The investigators found that the criteria used by research teams in choosing a theoretical framework often are not clear; convenience or prior exposure are often key factors influencing choice. They write that the consequence of not being more thoughtful in selection of the theoretical framework is that it does not provide appropriate guidance for the study (for intervention design and topics of inquiry, for example). Excessive reliance on the same familiar theories also limits the generalizability of findings, the authors assert. I recommend checking out the Birken paper to read more about this.

      Thanks for raising these issues.

      • Alejandro says:


        Thank you for your reply but now I have additional questions:

        1. Does FHI 360 use a particular ToC? Also, has FHI 360 elaborated its own ToC? Does FHI 360 have a think tank to produce these products, or is it more of a department or individual initiative?
        2. Does USAID have a particular interest or inclination towards a ToC?
        3. You wrote the following: “Because investigations consider a host of factors potentially influencing intervention success, studies are often multi-disciplinary, combining methods from fields like health services research, anthropology, psychology, engineering, education, clinical science, and economics.” But what I have seen are more “siloed investigations,” which is linear thinking within education or workforce, health or economic growth, etc. Can you expand your thoughts on this?



  2. Theresa Hoke says:

    Just in the past 5 years I’ve noticed greater reliance on theories of change in global health research. ToCs are developed on a case-by-case basis, adapting to project aims, implementation strategies, context, and assumptions about what must happen to achieve success. At FHI 360, colleagues working in a range of research units, as well as our Research Utilization division, have gained experience developing project-specific ToC models, including for USAID-funded projects. I’d be glad to hear from colleagues working on service delivery projects about their experiences with ToC models.

    As for research being conducted in silos, you raise an important point. Moving beyond traditional single-sector thinking has become a high priority in the field of global health research, with increased recognition of the diverse factors influencing health outcomes. The SDGs have provided us considerable inspiration! FHI 360 researchers join others in our field conducting research to discover, test, and support the scale-up of multi-sectoral solutions to development challenges. Check out some projects and resources found here:

  3. Murray says:

    I am currently doing a PhD on the quality of project implementation (within international development), and it’s good to have a continued conversation about implementation. One of the challenges I see is that researchers, academics and practitioners have largely talked around ‘implementation’. Implementation quickly drifts into “implementation research”, “implementation science” or “implementation monitoring”. For whatever reason, there seems to be an aversion to defining what implementation is. (Case in point: the OECD glossary of development terminology is basically the de facto industry glossary – yet it does not include implementation. That absence says something.) I think that with a better comprehension of what implementation is, the other contextualizations will become easier to apply.

    A second observation is that there are intersecting discourses across implementation science, improvement science, quality improvement, evidence-based practice, etc. While each of these may have different insights, I wonder how much is simply a different perspective rather than a different discourse. I’ll leave that for the next PhD…
