We should be able to evaluate interventions for the disabled

According to the United States Agency for International Development (USAID), there are 1 billion people in the world who live with some form of disability, and 80 percent of them live in low- and middle-income countries. Consideration of disabilities in international development is not new. USAID published its Disability Policy Paper in 1997, and the United Nations ratified the Convention on the Rights of Persons with Disabilities in 2007. Fortunately, attention to this issue has increased in recent years with the launch in 2014 of the UK Department for International Development’s (UK DFID) Disability Framework. At USAID’s recent Democracy, Human Rights and Governance Partners Forum, disability inclusion was a hot topic, and this month UK DFID along with the International Disability Alliance and the Government of Kenya are co-hosting the first-ever Global Disability Summit. The purpose of the summit is to “galvanize the global effort to address disability inclusion in the poorest countries in the world.” Earlier this year, UK DFID solicited proposals for a large disability inclusive development program to implement interventions benefiting people with disabilities in several countries.

Funding a disability inclusive development program raises the question: what interventions work to improve the lives of people living with disabilities in low- and middle-income countries? Indeed, UK DFID recognizes that the evidence base for this question is limited and is also funding a research program to produce randomized controlled trial (RCT) studies of disability inclusive interventions. In the meantime, what evidence do we have?

Using the 3ie systematic reviews database I explored the current synthesized evidence base for disability interventions with a particular view to learning about impact evaluation designs. I searched using the “differently-abled” option in the equity focus field (three cheers for 3ie for having this search term!) and got seven hits. Of those, three are titles or protocols, that is, reviews not yet completed. Here are summaries of the four completed systematic reviews.

In “Community-based rehabilitation for people with disabilities in low- and middle-income countries,” Iemmi, et al. (2015) review 15 included studies, six looking at interventions for physical disabilities and nine looking at interventions for mental disabilities. 3ie rates this review as high-quality, and the included studies comprise both RCTs and quasi-experimental designs; all were required to have a comparison group. The authors conclude that the RCTs overall find a “beneficial effect” of community-based rehabilitation for people with physical disabilities and a “modest beneficial effect” for people with mental disabilities and their carers. The non-randomized studies show similar results. Of note, the studies primarily measure health and quality of life outcomes and not outcomes from the other domains (e.g., education and livelihoods).

In “Disability and social protection programmes in low- and middle-income countries: A systematic review,” Banks, et al. (2016) include studies that look at access to social protection programs as well as studies that measure the impacts of social protection programs for people with disabilities. They put no restriction on study design in their search, and their final set includes 15 studies, 11 of which are quantitative, and five of which they judge to be high-quality, although I can only identify one as meeting the definition of an impact evaluation. None is an RCT. Of interest, more than half of the included studies were conducted in South Africa. Banks, et al. conclude “benefits from participation [of the disabled in social protection programmes] are mostly limited to maintaining minimum living standards and do not appear to fulfil the potential of long-term individual and societal social and economic development.”

In “Interventions to improve the labour market for adults living with physical and/or sensory disabilities in low- and middle-income countries: A systematic review,” Tripney, et al. (2017) identify 14 included studies that attempt to measure an effect against a counterfactual, but assess all of them as having a high risk of bias. None is an RCT. All the studies measure some kind of labor market outcome, with most including employment participation as one indicator. Tripney, et al. report that in all studies the measured direction of effect was positive, but only five “reported results of tests of statistical significance and indicated that study findings were significant.” The review authors conclude, “our assessment of the evidence does not allow us to develop practical suggestions on what interventions are likely to work, for whom, and when.”

The fourth systematic review comes at the question of interventions for people with disabilities a bit differently. In “What is the evidence that the establishment or use of community accountability mechanisms and processes improves inclusive service delivery by governments, donors, and NGOs to communities?” Lynch, et al. (2013) examine interventions intended to improve accountability mechanisms, which in turn are intended to improve inclusive service delivery for six minority or marginalized groups, one of which is people with disabilities. Unfortunately, none of the 13 included papers (which cover seven studies) focuses on people with disabilities.

One thing all four reviews have in common is a strong conclusion by the authors that we need more high-quality studies of the impact of disability inclusive interventions in low- and middle-income countries. Tripney, et al. state that a key finding of their review is the “overall scarcity of robust evidence.” Banks, et al. state, “the most notable finding of this review is that there is a dearth of high-quality, robust evidence in this area, indicating a need for further research.” Iemmi, et al., who do have some high-quality included studies (likely a benefit of the intersection of their review question with health research), nonetheless conclude, “the methodological constraints of many of these studies limit the strength of our results.”

In fact, looking more closely at the RCTs in Iemmi, et al., we see that nine out of ten had sample sizes of fewer than 100, and for many of these RCTs the review authors were unable to obtain information from the publication or from the study authors about the randomization procedure.

What this means for research going forward is that we don’t have many examples we can use for how to randomize assignment of disability inclusion interventions over large samples. The lack of examples doesn’t mean good studies can’t be done; it means that we in the development research community need to be creative and innovative. Given the dearth of evidence about what works, we certainly have the clinical equipoise needed to justify random assignment in many cases. In addition, for new programming, financial and logistical constraints often make it impossible to deliver the interventions to all beneficiaries from the beginning. These situations allow for “stepped-wedge” or random roll-out study designs.
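The random roll-out idea can be made concrete with a small sketch. The following Python snippet (illustrative only; the unit names and wave counts are hypothetical, not drawn from any of the reviews above) randomly assigns implementation units to roll-out waves, so that every unit eventually receives the program while later waves serve as temporary controls for earlier ones:

```python
import random

def assign_rollout_waves(units, n_waves, seed=0):
    """Randomly assign units (e.g., villages or clinics) to roll-out waves.

    In a stepped-wedge / random roll-out design every unit eventually
    receives the intervention; randomization determines only the order,
    so units in later waves act as controls for earlier waves.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible assignment
    shuffled = units[:]
    rng.shuffle(shuffled)
    # Split the shuffled list into n_waves roughly equal groups.
    return {f"wave_{w + 1}": shuffled[w::n_waves] for w in range(n_waves)}

# Hypothetical example: 12 villages rolled out over 3 waves.
villages = [f"village_{i}" for i in range(1, 13)]
print(assign_rollout_waves(villages, n_waves=3))
```

Because the order of roll-out is random, comparing already-treated waves with not-yet-treated waves yields an unbiased estimate of program impact, while no beneficiary is permanently denied the intervention.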

We also need to approach the design of these RCTs with a strong understanding of the theory or theories of change that underlie the interventions we want to evaluate. In some cases, the theory may be strong enough that we only need to test a mechanism, which we can do with a pilot experiment, in order to inform larger programs. In other cases, it will be important to design studies with random (or when applicable, as-if random) assignment so that we can measure the attributable effects of full programs. For integrated programs, e.g., combining health treatment with labor market assistance, it will be very useful if we can use factorial designs, so that we can learn how the components work separately and together.
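A factorial design can likewise be sketched in a few lines. This snippet (a hypothetical illustration; the component names echo the health-plus-labour-market example above but are not from any actual study) cross-randomizes two program components so that each can be estimated separately and in combination:

```python
import itertools
import random

def factorial_assignment(participants, seed=0):
    """Cross-randomize two components in a 2x2 factorial design.

    The four arms are: neither component, health treatment only,
    labour-market assistance only, and both together. Comparing arms
    identifies each component's effect and their interaction.
    """
    arms = list(itertools.product(["no_health", "health"],
                                  ["no_jobs", "jobs"]))
    rng = random.Random(seed)  # fixed seed for reproducibility
    shuffled = participants[:]
    rng.shuffle(shuffled)
    # Deal participants across the four arms in round-robin order.
    return {arm: shuffled[i::len(arms)] for i, arm in enumerate(arms)}

# Hypothetical example: 20 participants across the four arms.
print(factorial_assignment([f"person_{i}" for i in range(1, 21)]))
```

The appeal of this design is efficiency: the same sample answers three questions (does each component work alone, and do they reinforce each other?) instead of one.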

The challenge we face in producing the needed evidence is not nearly as great as the challenges faced by people with disabilities, particularly those in low- and middle-income countries. It is incumbent upon us to meet this challenge.

Photo caption: Men with disabilities build wheelchairs at an organization assisting persons with disabilities in Tahanang Walang Hagdanan, Cainta province, Rizal, Philippines.
Photo credit: © 2008 Gregorio B. “Jhun” Dantes Jr., Courtesy of Photoshare
