Before we introduce the innovations, we review the standard methods using the examples of two portfolio reviews we recently conducted for USAID missions in Mozambique and Tanzania. Both reviews focused on each mission’s investments in economic strengthening for orphans and vulnerable children (OVC). These portfolio reviews drew on a range of commonly employed methodologies, including desk reviews of program documents to identify relevant activities and investments and to extract key data points, such as programmatic elements, geography, target groups, and expected outcomes. The portfolio review teams also mined existing monitoring and evaluation data for intermediate outcomes and, where available, longer-term results.
In addition, the investigations drew heavily on primary data collection, in the form of retrospective qualitative interviews and focus groups with program staff and beneficiaries, to examine everything from basic operations to health impacts. Taken together, these sources provided key insights on program effectiveness, impact, and sustainability. Particularly salient was a deep dive into the results of shared innovative targeting strategies that promoted deliberate mixing of people living with HIV with mainstream community members in savings groups. (In sum, these strategies worked out quite well.) A summary of the Mozambique review is here.
While these traditional portfolio reviews answered important questions for these donors about the effectiveness of their interventions, innovative techniques can strengthen these kinds of inquiries by providing a deeper and more nuanced analysis of trends across a portfolio. Recently, we and several colleagues brainstormed ways to elevate our approach to portfolio reviews using less common methodologies. Here we highlight two of the proposed strategies.
Qualitative comparative analysis
Qualitative comparative analysis (QCA) combines elements of qualitative and quantitative analysis to understand which combinations of factors work together to produce specific outcomes in a particular setting. QCA is useful for examining complex relationships where multiple factors are likely contributing to a particular outcome.
In the context of a portfolio review, QCA involves collecting disaggregated data on these kinds of factors across all projects in the portfolio and testing hypotheses about which combinations of factors were necessary or sufficient for a particular outcome. This level of granularity helps funders understand not just what works and what doesn’t, but which programmatic ‘recipes’ led to success and which specific gaps may have caused a program to fall short of expected results.
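To make the mechanics concrete, here is a minimal crisp-set QCA sketch in Python. The projects, conditions, and outcome codings are entirely hypothetical (they are not drawn from the reviews described above), and a real analysis would apply consistency thresholds and more careful calibration; the point is simply to show how the necessity and sufficiency of factor combinations can be scored across a portfolio.

```python
# Minimal crisp-set QCA sketch with hypothetical data.
from itertools import combinations

# Each project is coded 1/0 on candidate conditions and on the outcome of interest.
projects = [
    {"savings_group": 1, "cash_transfer": 0, "case_management": 1, "success": 1},
    {"savings_group": 1, "cash_transfer": 1, "case_management": 1, "success": 1},
    {"savings_group": 0, "cash_transfer": 1, "case_management": 0, "success": 0},
    {"savings_group": 1, "cash_transfer": 0, "case_management": 0, "success": 0},
    {"savings_group": 0, "cash_transfer": 0, "case_management": 1, "success": 0},
    {"savings_group": 1, "cash_transfer": 1, "case_management": 0, "success": 1},
]

conditions = ["savings_group", "cash_transfer", "case_management"]

def sufficiency(combo):
    """Share of projects having all conditions in `combo` that achieved the outcome."""
    with_combo = [p for p in projects if all(p[c] == 1 for c in combo)]
    if not with_combo:
        return None
    return sum(p["success"] for p in with_combo) / len(with_combo)

def necessity(combo):
    """Share of successful projects that had all conditions in `combo`."""
    successes = [p for p in projects if p["success"] == 1]
    return sum(all(p[c] == 1 for c in combo) for p in successes) / len(successes)

# Score every one- and two-condition combination as a candidate 'recipe'.
for size in (1, 2):
    for combo in combinations(conditions, size):
        print(f"{' & '.join(combo):35s} sufficiency={sufficiency(combo)}  necessity={necessity(combo):.2f}")
```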
We are developing our expertise in QCA through two ongoing studies under our ASPIRES project. The first examines the factors necessary for successful prevention of family/child separation and reintegration of children into family care in Uganda, while the second examines the combinations of economic and social conditions linked with antiretroviral therapy adherence in Mozambique.
Cost analysis
Many portfolio reviews consider the budgets, or the total costs, of the programs reviewed. In fact, our brainstorming was motivated by a request for proposals from the Bill & Melinda Gates Foundation that specifically requested that costs be considered as part of a portfolio review. We believe that cost analysis that is more detailed and in-depth than the norm can contribute significantly to the usefulness of a portfolio review.
One question that comes up in portfolio reviews is which programs should be expanded or scaled. To assess scalability, funders need a detailed understanding of cost. The composition of the resources required to operate a program, that is, the split between fixed and variable costs, affects the potential scalability of the approach. When fixed costs are high and variable costs are low, a program that has already incurred the fixed costs may be inexpensive to expand. But if the question is whether to replicate the program somewhere else, then it is important to understand that the fixed costs will need to be incurred again, and therefore the average cost per program output will appear high until the program reaches a certain size. From such observations, we can draw strong empirical conclusions about the costs of scaling and replicating programs that are performing well in a portfolio.
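The arithmetic behind the expansion-versus-replication distinction is simple enough to sketch. The fixed and variable cost figures below are invented for illustration only; they are not drawn from any program we reviewed.

```python
# Illustrative sketch: average cost per output when expanding an existing program
# (fixed costs already sunk) versus replicating it elsewhere (fixed costs incurred again).

def avg_cost_per_output(fixed_cost, variable_cost_per_output, outputs, fixed_already_paid=False):
    fixed = 0.0 if fixed_already_paid else fixed_cost
    return (fixed + variable_cost_per_output * outputs) / outputs

FIXED = 200_000   # hypothetical: curriculum development, systems setup, training of trainers
VARIABLE = 25     # hypothetical per-beneficiary delivery cost

for n in (1_000, 5_000, 20_000):
    expand = avg_cost_per_output(FIXED, VARIABLE, n, fixed_already_paid=True)
    replicate = avg_cost_per_output(FIXED, VARIABLE, n, fixed_already_paid=False)
    print(f"{n:>6} beneficiaries: expand ${expand:.2f}/person, replicate ${replicate:.2f}/person")
```

At small scale the replicated program looks far more expensive per beneficiary, even though the underlying intervention is identical; only as outputs grow do the two figures converge.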
The challenges in making cost comparisons across programs abound. They include controlling for differences in the scale of the programs, accounting for start-up costs that give an advantage to more mature programs, capturing the extent to which resources flow from external sources that complement the funder’s investment, controlling for differences in the breadth of the programs, and accounting for staff turnover and other sources of unexpected costs. Even then, understanding these cost drivers is not very informative unless you also know how the costs may change over time, especially in response to contextual factors. To go a step further and compare the cost-effectiveness of different programs in a portfolio, you need a common metric of program effectiveness, and that metric should accurately reflect what each program was seeking to achieve and be relevant and interpretable for the intended audience.
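As a hypothetical illustration of putting programs on a common metric, the sketch below compares cost per household reaching an assumed outcome benchmark, with and without start-up costs. All program names, costs, and results are invented.

```python
# Illustrative cost-effectiveness comparison on a shared effectiveness metric.
programs = {
    "Program A": {"total_cost": 1_200_000, "startup_cost": 300_000, "households_reached": 4_000},
    "Program B": {"total_cost": 450_000, "startup_cost": 150_000, "households_reached": 1_100},
}

for name, p in programs.items():
    all_in = p["total_cost"] / p["households_reached"]                       # relevant for replication
    steady = (p["total_cost"] - p["startup_cost"]) / p["households_reached"] # closer to steady-state
    print(f"{name}: ${all_in:.0f} per household all-in, ${steady:.0f} excluding start-up")
```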
To incorporate rigorous costing techniques in portfolio reviews, we include an economist on the team. With this expertise, we can address the challenges of making cost comparisons and deliver critical information to the funder about the (sometimes sobering) realities of program cost and the practicality of scale-up.
Portfolio reviews are a valuable tool for funders to plan strategically. We believe this tool can be strengthened by multidisciplinary teams that can draw on both new methods and old theories.
Photo credit: Designed by Freepik