Book review: Lean Impact by Ann Mei Chang (and thoughts on science and the scientific method)

I’ll admit that when I saw the title of Ann Mei Chang’s new book, Lean Impact, my aversion to MBA jargon left me less than inclined to read it. I was, however, familiar with Chang’s impressive background combining private sector and public sector experience with an emphasis on innovation, so I couldn’t help but be intrigued. In the end, Patrick Fine recommended I read the book, and when your company’s CEO recommends a book, it is generally a good idea to read it. So I read Lean Impact, and I am glad I did. I learned much more about the lean approach, and Chang’s insights into social innovation gave me good food for thought.

Overview of the book

Chang begins the book by establishing the social innovation context for her lessons. Her first chapters argue that the purpose of innovation here is to achieve social impact, that we should think big (have “audacious goals”) when working on social development, and that we need to be more driven by the problems that we want to solve than by the solutions that we devise. This last one should be a given for those working on social development, but when we gain years of experience implementing specific approaches, we are just as prone as anyone to become hammers looking for nails.

She then summarizes the main points from The Lean Startup by Eric Ries, whose work is inspired by the concepts of just-in-time manufacturing. She lays out the five steps of the lean startup approach (identify assumptions, build a minimum viable product, use validated learning, build-measure-learn, and pivot or persevere) and then expands on these ideas in the context of social innovation. She argues in several places that this model mimics the scientific method, especially in the validated learning step, when the minimum viable product is iteratively tested.

The middle section provides more discussion of how and why to validate. Here Chang organizes her discussion around the objectives of value, growth, and impact. In the Value chapter, she gives examples of how to design minimum viable products to bring value to the targeted stakeholders. In the Growth chapter, she delves into the challenge of starting small and fast while aiming to eventually take a solution to scale. In the Impact chapter, she looks at how we understand and measure social impact. The final section, called Transform, is where she really dives into the challenges of using a lean startup approach in the social development sphere as it is currently organized and funded.

What I liked

There are many things I like about Chang’s book. It is an enjoyable read. She writes well and uses many interesting examples. I appreciate that her examples span many countries and contexts. While those who know her from her time at USAID will be looking for the low- and middle-income country (L&MIC) examples, there is also much we can learn from her high-income country examples. Her first section, which she titles Inspire, did just that for me. The arguments here aren’t new, but her presentation did focus my mind on problems and inspired me to think about big ideas.

I was glad to see her criticize the use of what she calls “vanity metrics”. She says, “Vanity metrics have spread through the social sector like a communicable disease. If you go to the website of your favorite mission-driven organization, I bet what you’ll see highlighted is the number of people it has served or reached. While at USAID, I constantly railed against the continual pressure to share the number of people we’d ‘touched.’ It’s meaningless.” I’m pretty sure I exclaimed out loud, “you go girl!” when I read that passage. (There’s a great recent commentary in The Lancet about the problems with The Global Fund’s “lives saved” metric.)

I also appreciate her discussion of randomized controlled trials (RCTs) in her chapter on impact. She is concerned that the use of RCTs to test interventions can sometimes “constrain rather than enhance innovation”, especially when large studies require the intervention to continue without adaptation for a significant period of time. She also sees a potential “danger” if past RCT results, perhaps from different contexts, become overly influential when seeking evidence-based policies. Finally, she reinforces a point made by many that academics, who are often the preferred providers of RCTs, have incentives that are not always aligned with the needed validation exercise.

Finally, she hits the heart of the problem in the last section of the book, where she explores the challenges of systemic change and how innovation in the social sector is funded. I love that she even has a chapter titled “A message to funders”, as I am sure many who read the book spend the early chapters rubbing their foreheads and muttering, “how are we supposed to do that within the constraints of our grants or contracts?”

What gave me pause
As much as I would love to have a simple formula for innovation in the social development sector, I came away from the book excited about using the lean approach for product development but even more skeptical about using it for program design, especially for programs addressing systemic development challenges like governance, private sector development, and poverty reduction. I say “even more” because I was already skeptical of using human-centered design, which Chang calls “a close cousin to Lean Impact”, for designing such programs.

An example from the book supports my skepticism about using the lean impact approach to innovate for programs. Chang highlights No Lean Season as a successful program whose innovation was developed using elements of the lean impact approach. (The program seeks to help poor rural families through the lean season between planting and harvesting by giving them subsidies to travel to urban areas and take temporary employment.) At the time Chang wrote her book, No Lean Season was indeed a well-publicized example of an innovative program that was scaled up after its value was proven by an RCT of a pilot. Soon after Chang’s book was published, however, Evidence Action released the results of the evaluation of No Lean Season at scale and admitted that it did not work. The publicity surrounding this revelation rightly lauded Evidence Action for its “transparency and evidence-driven work” but did not examine whether and how the innovation process might have produced a better intervention.

My skepticism about using the lean approach for designing new programs is linked to what I consider an important distinction between the scientific method and science. Chang repeatedly likens the lean approach to the scientific method. I googled “scientific method” and was surprised to find there is not a single accepted list of steps. But what Chang seems to have in mind is something like observation, hypothesis formation, hypothesis testing (experimentation), drawing conclusions, rinse and repeat. In the lean approach, the hypothesis formation step is the development of a minimum viable product. She recommends the use of ideation methods for developing a minimum viable product. I have participated in IDEO workshops and agree these are great methods for product design.

What about theory?

It is my view, however, that to address complex challenges we need science, or here social science. I understand science as the process of using the scientific method to explore and prove theories. Social science theories allow us to make predictions, to understand how something works in general or at least how it should work the next time or how it might work in a different situation. When we use the scientific method in social science, we should start with theories and the existing evidence related to those theories. Our hypothesis testing should build on that foundation. The science process means that we are starting with the best hypotheses based on what is already known and also ensures that our new tests help elucidate the theories. That is a very different process from making observations and then brainstorming new ideas to test.

Most of my margin notes in the first half of the book are some variant of “what about theory of change?” Then, in the chapter titled Impact, Chang does address theory of change, but she trips up, as many do, by using a results framework to talk about theory of change. The arrows in a results framework – how you get from one step to the next – often conflate theory with assumptions. However, theories and assumptions are different. I discuss this problem at more length here. In Chang’s tutoring example, “tutors are competent” is indeed an assumption, and we can test the assumption by testing the knowledge and skills of the tutors. But “test scores increase” is not an assumption; rather, it is the prediction of a theory, in this case a theory about how students learn (or perhaps a theory about how students do well on tests).

The lean application of the scientific method can tell us whether a tutoring intervention designed in a particular case increased those students’ test scores. With the social science application of the scientific method, we use theory and evidence combined with observation (context) to design the next best learning intervention to test, rather than just another new intervention. And the experiment can provide more evidence about how students learn.

Chang gives a good example of what can happen when the lean approach is used without theory, although she uses the example to make a different point. In her Growth chapter she describes the situation in the mHealth sphere, where hundreds of pilot innovations have been tested in isolation while few mHealth interventions reach scale. The ICT4D evidence mapping work I’ve done with co-authors indeed found a large number of mHealth impact evaluations, more than 90% of which test pilot interventions. I have read many of these studies, and very few of the authors motivate their innovation with a discussion of the behavioral theory it employs or addresses (or of what formative research they conducted to support the relevance of that theory to the problem they want to solve). In many of the mHealth studies, the pilot interventions failed, and the only lesson we are left with is that those mHealth ideas didn’t work in those cases.

Iteration for program design

I am not arguing in favor of what we often see, which are programs designed using past practice and background research that are scaled up and evaluated after the fact. I strongly support more iterative processes with quick learning and more space for innovation. But I see the iterations more as decisions about when and how we collect new data along the way, rather than as testing a program, pivoting, testing another program, and so on. The first iteration should be formative research to understand the problem and the context. Formative research is the observation stage in the scientific method, but with an emphasis on rigorous, data-driven observation. The design process (hypothesis formation), which can still use ideation methods, should incorporate theory, existing evidence, and the findings of formative research. The next iteration is formative evaluation, which can also be thought of as feasibility testing or acceptability testing. This iteration does not test for outcomes, just for whether the design can be implemented. After the refinement of the design, the next iteration is a pilot evaluation, which ideally should measure attributable outcomes related to the desired impact. Once an intervention is successfully piloted, the next iteration should be testing at scale. Scale is something Chang strongly emphasizes. She argues that social innovations should not just be based on a value hypothesis (what the innovation does) but also a growth hypothesis (how it can go to scale to bring big impact).

Chang does not rule out the process I describe, and in fact some of the organizational models she discusses and advocates in the Transform section of the book use similar processes. Like the lean startup approach, these processes emphasize testing and the willingness to pivot based on the results of testing. But they depart from the simpler lean startup approach described early in the book.

I am also not arguing that every new program needs to be designed and tested using a long scientific process. Sometimes we are working with straightforward interventions where we are innovating on the margin and can test quickly. Sometimes we only need a yes-or-no answer about an intervention, especially when working with policymakers who only have that choice. I’m thinking here about the difference between decision-focused evaluations and knowledge-focused evaluations as laid out by Shah, Wang, Fraker, and Gastfriend (2015) (also cited by Chang). Not every impact evaluation of a program needs to be knowledge focused, but when the objective is innovation and the program is addressing a complex problem, we should strive to learn more from our testing than a yes or a no.

Final thoughts
For me, in many ways the crux of the book is the discussion in its last section about why it is so difficult to innovate, by any process, given the current funding mechanisms and organizational structures in the social development sphere. Chang does not provide an easy solution, but she presents some good ideas.

My final note is that there is much we do in social development that is neither product design nor program design, and the concepts in lean impact can be useful for many of those tasks. I was recently tasked with developing a concept to meet an internal corporate need, and as I was struggling with how much I should read to develop the best concept myself, I realized that this was a great task for a lean approach. I am now starting with a “minimum viable concept” and presenting it to various stakeholders so that I can revise and “test” several iterations of the concept. Chang’s book gave me both a useful tool and good food for thought.


