New directions in portfolio reviews

A funder portfolio review is an evaluation of a set of programs or activities that make up a portfolio, typically defined by a sector or a place. Portfolio reviews take many forms, but the purpose is generally the same: to take stock of and reflect on activities or investments in a particular area of programming. They are often requested by funders to answer straightforward questions about what’s working and what isn’t in order to figure out what to do next. Portfolio reviews typically include a desk review of program documents and often include other data collection such as interviews and focus groups. See one how-to note here. Because funders use portfolio reviews to make strategic decisions about programmatic directions and resource allocations, innovations in this type of evaluation can bring large benefits. In this post, we briefly introduce two new directions for portfolio reviews.

Before we introduce the innovations, we review the standard methods using the examples of two portfolio reviews we recently conducted for USAID missions in Mozambique and Tanzania. Both reviews focused on each mission’s investments in economic strengthening for orphans and vulnerable children (OVC). These portfolio reviews drew on a range of commonly employed methodologies, including desk reviews of program documents to identify relevant activities and investments and extract key data points, such as programmatic elements, geography, target groups and expected outcomes. The portfolio review teams also mined existing monitoring and evaluation data for intermediate outcomes and longer-term results where available.

In addition, the investigations drew heavily on primary data collection in the form of retrospective qualitative interviews and focus groups with program staff and beneficiaries, examining everything from basic operations to health impacts. Taken together, these sources provided key insights on program effectiveness, impact and sustainability. Particularly salient was a deep dive into the results of shared innovative targeting strategies that promoted deliberate mixing of people living with HIV with mainstream community members in savings groups. (In sum, these strategies worked out quite well.) A summary of the Mozambique review is here.

While these traditional portfolio reviews answered important questions for these donors about the effectiveness of their interventions, innovative techniques can strengthen these kinds of inquiries by providing a deeper and more nuanced analysis of trends across a portfolio. Recently, we, along with some colleagues, brainstormed ways to elevate our approach to portfolio reviews using less common methodologies. Here we highlight two of the proposed strategies.

Qualitative comparative analysis

Qualitative comparative analysis (QCA) combines elements of qualitative and quantitative analysis to understand which combinations of factors work together to produce specific outcomes in a particular setting. QCA is useful for examining complex relationships where multiple factors are likely contributing to a particular outcome.

Many development program portfolios exhibit complexity. For example, economic strengthening programs for OVC might have multiple program components (e.g., cash transfers plus food support and income-generating activities) or just one of these components. They may differ from one another in whether they are implemented in urban or rural settings, whether they are community-based or home-based, whether they target younger or older OVC, whether they have stable or unstable financing structures, and whether they have high levels of government buy-in. Any of these factors could be important for understanding whether an intervention was effective at improving household economic stability.

In the context of a portfolio review, QCA collects disaggregated data on these kinds of factors across all projects in the portfolio to test hypotheses about which combinations of factors were necessary or sufficient for a particular outcome. This level of granularity helps funders understand not just what works and what doesn’t, but which programmatic ‘recipes’ lead to success and which specific gaps may cause a program to fall short of expected results.
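To make this concrete, below is a minimal sketch of a crisp-set QCA-style analysis in Python. The projects, factor names and data are all invented for illustration; a real analysis would involve careful calibration of conditions and Boolean minimization of the truth table, typically with dedicated software such as the R QCA package.

```python
from itertools import groupby

# Hypothetical portfolio: one row per project, with binary ("crisp-set")
# conditions and a binary outcome. All names and values are invented.
COND_NAMES = ["cash_transfer", "rural", "gov_buyin"]
projects = [
    {"conds": (1, 1, 1), "outcome": 1},
    {"conds": (1, 1, 0), "outcome": 0},
    {"conds": (1, 0, 1), "outcome": 1},
    {"conds": (0, 1, 1), "outcome": 0},
    {"conds": (1, 1, 1), "outcome": 1},
    {"conds": (0, 0, 1), "outcome": 0},
]

# Truth table: for each observed combination of conditions, what share of
# projects achieved the outcome? (QCA calls this the row's "consistency".)
rows = sorted(projects, key=lambda p: p["conds"])
for conds, group in groupby(rows, key=lambda p: p["conds"]):
    group = list(group)
    consistency = sum(p["outcome"] for p in group) / len(group)
    label = " AND ".join(n if v else f"not {n}" for n, v in zip(COND_NAMES, conds))
    print(f"{label}: n={len(group)}, consistency={consistency:.2f}")

# Sufficiency: a configuration whose consistency clears a chosen threshold
# (often 0.8-0.9) is treated as sufficient for the outcome. Necessity runs
# the other way: a condition is a candidate necessary condition if (nearly)
# all successful projects exhibit it.
successes = [p for p in projects if p["outcome"] == 1]
for i, name in enumerate(COND_NAMES):
    share = sum(p["conds"][i] for p in successes) / len(successes)
    print(f"{name} present in {share:.0%} of successful projects")
```

On these toy data, the configurations combining cash transfers with government buy-in are the ones consistently associated with success, which is exactly the kind of ‘recipe’ a portfolio review would want to surface.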

We are developing our expertise in QCA through two ongoing studies under our ASPIRES project. The first examines the factors necessary for successful prevention of family/child separation and reintegration of children into family care in Uganda, while the second examines the combinations of economic and social conditions linked with antiretroviral therapy adherence in Mozambique.

Cost analysis

Many portfolio reviews consider the budgets, or the total costs, of the programs reviewed. In fact, our brainstorming was prompted by a request for proposals from the Bill & Melinda Gates Foundation that specifically asked for costs to be considered as part of a portfolio review. We believe that cost analysis that is more detailed and in-depth than the norm can contribute significantly to the usefulness of a portfolio review.

One question that comes up in portfolio reviews is which programs should be expanded or scaled. To assess scalability, funders need a detailed understanding of cost. The composition of the resources required to operate a program, that is, the split between fixed and variable costs, shapes the potential scalability of the approach. When fixed costs are high and variable costs are low, a program that has already incurred the fixed costs may be inexpensive to expand. But if the question is whether to replicate the program somewhere else, then it is important to understand that the fixed costs will need to be incurred again, and the average cost per program output will therefore look high until the program reaches a certain size. From such observations, we can draw strong, empirical conclusions about the costs of scaling and replicating programs that are performing well in a portfolio.
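The arithmetic behind this observation is easy to sketch. The figures below are purely illustrative, but they show why expansion and replication can look so different on a per-output basis:

```python
# Invented figures: a program with high fixed costs (curriculum, systems,
# training of trainers) and a low variable cost per household served.
FIXED_COST = 500_000
VARIABLE_COST = 40  # per additional household

def average_cost_per_household(households, fixed_already_paid):
    fixed = 0 if fixed_already_paid else FIXED_COST
    return (fixed + VARIABLE_COST * households) / households

# Expanding an existing program: fixed costs are sunk, so each additional
# household costs roughly the variable cost.
print(average_cost_per_household(10_000, fixed_already_paid=True))  # 40.0

# Replicating the program elsewhere: fixed costs recur, so the average cost
# per household starts high and falls toward the variable cost with scale.
for n in (1_000, 5_000, 20_000):
    print(n, average_cost_per_household(n, fixed_already_paid=False))
    # -> 540.0, 140.0, 65.0
```

At 1,000 households the replicated program costs $540 per household served; at 20,000 it costs $65, approaching the $40 variable cost. The “certain size” mentioned above is simply the scale at which the re-incurred fixed costs are spread thin enough.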

The challenges in making cost comparisons across programs abound. They include:

- controlling for differences in the scale of the programs,
- accounting for start-up costs that can give an advantage to more mature programs,
- capturing the extent to which resources flow from external sources that complement the funder’s investment,
- controlling for differences in the breadth of the programs, and
- acknowledging staff turnover and other sources of unexpected costs.

And, admittedly, understanding all these cost drivers is not very informative unless you also know how the costs may change over time, especially in response to contextual factors. To go a step further and compare the cost-effectiveness of different programs in a portfolio, you need a common metric for program effectiveness, and that metric should accurately reflect what each program was seeking to achieve. The selected measure of effectiveness should also be relevant to, and interpretable by, the intended audience.
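As a rough illustration of how some of these adjustments might be operationalized, the sketch below compares two hypothetical programs on a common (assumed) effectiveness metric, stripping out start-up costs and counting external co-funding alongside the funder’s investment. All names and figures are invented.

```python
# Two hypothetical programs compared on a common (assumed) effectiveness
# metric: households reaching an economic-stability benchmark. "funder_cost"
# is the funder's own spend; "external" is complementary outside funding;
# "startup" is excluded so mature and young programs are compared on
# operating costs alone. All figures are invented.
programs = [
    {"name": "A", "funder_cost": 900_000, "startup": 200_000,
     "external": 100_000, "households_reached": 4_000},
    {"name": "B", "funder_cost": 450_000, "startup": 150_000,
     "external": 0, "households_reached": 1_200},
]

for p in programs:
    # Count all resources consumed (funder + external), net of start-up.
    operating_cost = p["funder_cost"] + p["external"] - p["startup"]
    ratio = operating_cost / p["households_reached"]
    print(f"Program {p['name']}: ${ratio:,.0f} per household at benchmark")
```

In this toy comparison, Program A has the larger budget but the lower cost per unit of effect ($200 versus $250), the kind of reversal a naive budget comparison would miss.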

To incorporate rigorous costing techniques in portfolio reviews, we include an economist on the team. With this expertise, we can address the challenges in making cost comparisons and deliver critical information to the funder about the (sometimes sobering) realities of program cost and practicality for scale-up.

Portfolio reviews are a valuable tool for funders to plan strategically. We believe this tool can be strengthened by multidisciplinary teams that can draw on both new methods and old theories.
