In the wee hours of Thursday, June 14th, I was forced by not-so-fresh supermarket sushi (I know, I should have known better) to pull out of a panel discussion at the Social Impact Exchange conference in New York City. Now in its third year, the Social Impact Exchange's annual conference brings together nonprofits, funders, advisors, intermediaries, researchers, and practitioners to highlight and discuss how we can best support the scaling and replication of effective programs, systems, and policies so their benefits reach a broader segment of the population.
I'm sure you're wondering what bad sushi has to do with scaling, and in most scenarios you'd be right to conclude: not very much. But amid the insanity of back-and-forth trips to the bathroom, I couldn't help drawing connections between my food poisoning and a key issue that vexes us and draws us to the Social Impact Exchange meeting: How can we best scale evidence-based programs, systems, and policies in a way that maintains and continually produces positive outcomes? After all, the only reason we are (or should be) interested in expanding these programs, systems, and policies is that we're interested in expanding their impact. We need to broaden the conversation from what supports are needed to expand programs, systems, and policies to how, in the midst of that growth, we maintain positive outcomes.
As Patrick McCarthy, president and CEO of the Annie E. Casey Foundation, said at the meeting, we know how to scale. We can all point to examples of expertly scaled programs, systems, and policies that have little to no evidence of effectiveness, have lost effectiveness amid scaling, or, in the worst case, are doing more harm than good. I don't believe (or at least I hope) that anyone advocates scaling in a way that ignores outcomes. But unless we make explicit the importance of monitoring impacts and maintaining effectiveness throughout the scaling process, we run the risk of getting better at efficiently scaling programs, systems, and policies while simultaneously diminishing their ability to produce positive outcomes.
But I'm sure you're still wondering what this has to do with bad sushi. As I thought about how the bad sushi got to me, I considered the huge (i.e., scaled) operation that brought it to me in the first place. As a consumer, I make certain assumptions about the mechanisms in place to ensure that I get a quality product. Without question, there are major issues with our food safety system. But those issues notwithstanding, many of us trust that from farm to table, or, in my case, from sea to sushi chef to supermarket to table, the food we consume will be safe. Still, failure (i.e., an undesired outcome) sometimes happens, and in my case it was foodborne illness. As I thought about this failure, I couldn't help wondering whether my case was due to a problem with the system or whether it was an outlier (i.e., due to chance).
I'm sure the pessimists out there will ask what difference it makes whether my foodborne illness was the result of a systems issue or I was an outlier; either way, I got sick. But I believe there is a HUGE difference, which brings me back to how we scale impact. For the most part, we've allowed ourselves to believe that if we maintain fidelity to the evidence-based program, system, or policy as we scale, we'll sustain the positive impacts. More often than not, however, this is not the case. Therefore, in the same way we're reframing scale to focus on impacts, we need to shift our view from fidelity to the model to fidelity to impacts. With this shift comes the need to build into our scaling work processes that use data to continuously monitor the quality and impact of our scaling efforts, so adjustments can be made to programs, systems, and policies and we can continue to produce positive outcomes. Otherwise, we'll wind up with a host of scaled programs, systems, and policies whose positive impacts are not consistent, expected outcomes of the scaled effort but, rather, outliers.
Brenda L. Henry-Sanchez, PhD, MPH, is a senior program officer on the Vulnerable Populations team at the Robert Wood Johnson Foundation. Focused on research and evaluation, Brenda oversees evaluations of the team's scaling efforts and works to improve the team's approach to sustaining impacts through growth.