Due to their key role in economic development, policy implementation, health, and social welfare, nonprofit and social enterprise organizations operate in an era of heightened pressure to demonstrate performance and provide concrete measures of impact to their funders and other stakeholders (Benjamin 2008; Carman 2010; Lynch-Cerullo and Cooney 2010). The demand for outcome measurement in the social sector accelerated in the 1990s and 2000s as business approaches to social sector work ascended, influencing mandates for strategic planning and measured outcomes such as those embedded in the Government Performance and Results Act of 1993 in the US, and spurring new fields of practice including venture philanthropy and impact investment. Public agencies funding nonprofits in the human services arena expanded the use of competitive bidding, performance-based contracting, and evidence-based practice requirements, which allowed for closer monitoring of the outputs proposed and the outcomes promised by private organizations implementing public policy goals (Collins-Camargo, McBeath, and Ensign 2011).
Simultaneously, other social sector funders such as foundations and private philanthropists promoted rigorous evaluation practices and data-driven management models that strive to design, test, and replicate empirically proven models to tackle difficult social problems (Brest, Harvey, and Low 2009; Brest 2012). In an era of scarce resources and the perception that little progress has been made on entrenched societal issues, the trend in metric-driven performance measurement and management aims to scale innovative solutions by consolidating public and philanthropic capital around projects that work.
Despite a growing consensus on the utility and indispensability of good performance measurement and management systems for producing results, government, foundations, venture philanthropists, impact investors, and individual donors all champion different methodological approaches for assessing progress toward performance goals. As a result, nonprofit organizations operating in the second decade of the twenty-first century face a dizzying array of approaches and tools for measuring performance, including balanced scorecards, cost efficiency metrics, financial ratios, logic models, Social Return on Investment (SROI) calculations, cost–benefit analyses (CBAs), and participation in rigorous random assignment, control group research studies. One of these approaches, the SROI metric, is essentially a CBA that estimates the value of the benefits created by a social sector initiative (the outcomes) and places them over the costs of the initiative (the inputs, or investment). SROI goes a step further than a cost-effectiveness analysis, which simply estimates the cost per unit of impact (e.g. the cost per person cured of malaria), in that the outcomes are monetized.
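Schematically, the distinction can be expressed as follows (a simplified rendering for exposition, not any one organization's official formula):

```latex
\mathrm{SROI} = \frac{\text{monetized value of outcomes}}{\text{cost of inputs (the investment)}}
\qquad \text{versus} \qquad
\mathrm{CEA} = \frac{\text{cost of inputs}}{\text{units of impact achieved}}
```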
Throughout the 1990s and 2000s, many efforts to develop SROI methods were underway. We highlight three approaches to calculating the SROI that remain in use today: the blended value return created by the Roberts Enterprise Development Fund (REDF), the Benefits Cost Ratio (BCR) developed by the Robin Hood Foundation in New York City, and the SROI approach promoted by the SROI Network in the U.K. In each case, although the calculations differ, the SROI approaches borrow economic language and disciplined, metrics-based thinking from the banking and finance sectors and apply them to assessing the social impact of mission-driven organizations.
The basic approach to SROI builds on the logic model or impact map exercise that many nonprofit and social sector organizations undergo as part of their strategic planning and organizational development process. The promise of the SROI approach is manifold. Firstly, an SROI measure provides a way for an organization to illustrate, in economic language, the broad social value created by its services or activities. Secondly, an SROI can assist in nonprofit management and decision making, driving resources to the most impactful programs. Thirdly, applying an SROI metric across organizations in a given subsector (education or job training, for example) can allow for benchmarking across similar programs and provide a basis for comparisons. Finally, given the ability to compare social value creation across a heterogeneous set of interventions by reducing social value to a common currency, SROI metrics broadly applied could increase the efficiency of philanthropic capital flows, such that high performance organizations are more likely to have access to the capital needed to scale their social impact.
Despite the excitement surrounding SROI, as this paper will show, the early work on SROI highlighted a set of methodological issues that remain thorny both for the organizations developing and applying the metrics and for the funders attempting to use these metrics to drive their investments. After a short overview of the reigning methods for calculating the SROI, this paper draws on data from two sources: a study the authors conducted constructing an SROI for a nationally recognized Boston-based nonprofit operating workforce development programs, and a set of Yale SOM MBA student assignments conducting an SROI on a written case study drawn from the workforce development literature.
The paper consequently offers two levels of analysis. First, it provides an overview of the methodological decisions faced when conducting an SROI, using the two case studies to illuminate these decision points. Second, it provides an assessment of the promise of, and the challenges to, using SROI as a performance metric. It should be noted that both case studies are in the field of workforce development, a field readily accepted as among the most conducive to SROI analysis. This is because SROI requires agreed-upon outcomes and indicators, which are clearly available for workforce development interventions (e.g. job placement, wages and hours, retention, and upgrades), and because these outcomes are among the most easily monetized.
Data and methods
Data are taken from two case studies. The first, an SROI for JVS, Greater Boston, includes participant observation of the entire SROI analysis and implementation from the standpoint of the consultant/academics assisting the agency with the SROI assessment, beginning in Spring 2011 and culminating in a public release of the study at a breakfast event at the John Hancock building in Boston in November 2012 (Cooney and Lynch-Cerullo 2012). The process included several meetings with the executive staff to conceptualize the SROI approach and debrief about each new set of findings, a full board presentation of initial results, consultation with the financial services committee of the board and IT staff at JVS, and the public presentation at a community forum to disseminate the methodology and results. Additionally, archival data were collected from JVS agency reports, Form 990 tax filings, and the agency website.
A second source of data is a course assignment from a Yale School of Management MBA class called “Managing Social Enterprises” taught by Kate Cooney in Fall 2012. The assignment required student teams to conduct an SROI for an organization of their choosing or a case provided for them. Five student teams chose to do the case provided and results from their SROIs are included to highlight how seemingly small methodological decisions can result in very different final numbers.
Three approaches to SROI
Cost-savings analysis – REDF/blended value approach to SROI
One of the earliest movers in this space was the Roberts Enterprise Development Fund, which worked closely with Jed Emerson to develop a social accounting model for estimating the ROI of social enterprises that operate business ventures with strongly integrated social components. REDF provides philanthropic capital and infrastructure support for a portfolio of work integration social enterprises that operate commercial businesses as vehicles for employment and job training for disadvantaged workers. The blended value approach to SROI hewed closely to corporate finance accounting practices, using the following basic formula:
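A plausible rendering of the blended value formula, reconstructed from the description of its components (REDF's published notation may differ, and the denominator as the philanthropic investment is our assumption):

```latex
\mathrm{SROI}_{\text{blended}} =
\frac{\text{Net Enterprise Value} \;+\; \bigl(\text{Social Purpose Value} - \text{Social Costs}\bigr)}
     {\text{Philanthropic Investment}}
```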
In this approach, a social enterprise calculates the net enterprise value (revenues less the cost of goods sold, operating costs, working capital investments, capital investments, and depreciation) for the commercial businesses (e.g. ice cream shops, bicycle stores, restaurants, or other work integration businesses) and adds it to the social purpose value created. The social purpose value is narrowly defined as savings to the public sector by way of reduced welfare payments to the clients and reduced recidivism, plus gains to society in terms of increased taxes paid out of the newly or more profitably employed clients' paychecks. These benefits are valued in monetary terms net of the social costs involved in employing these disadvantaged clients in the businesses. Social costs might include the cost of employing staff to provide the ancillary mental health counseling, job search assistance, job coaching, and life skills workshops offered to the workers in addition to their work in the commercial enterprises.
From public savings to client benefits – Robin Hood Foundation BCR
The Robin Hood Foundation, an anti-poverty foundation in New York City, pursued a different approach. Founded and funded by investment bankers, the Foundation developed the Benefits Cost Ratio (BCR), an approach to SROI best suited to guiding investment decision making for a grants portfolio. For this purpose, the benefits in the numerator of the SROI had to directly reflect the poverty fighting potential of the range of interventions under review, and the costs were narrowed to the amount of the Robin Hood investment (formula below):
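A plausible rendering of the BCR, reconstructed from the elements described in the text (Robin Hood's published notation may differ), with the benefits expressed in present value and net of counterfactual and drop-off adjustments:

```latex
\mathrm{BCR} =
\frac{\mathrm{PV}\bigl(\text{monetized poverty-fighting benefits to clients}\bigr)\times \text{RH factor}}
     {\text{Robin Hood investment}}
```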
This approach offered several advantages. Firstly, narrowing the focus of the social impact measure to monetizing the benefits accrued to clients aligns well with Robin Hood's anti-poverty mission and allows the Foundation to align the SROI with the impact that stakeholders most care about (reducing poverty). Secondly, such a narrow approach produces a metric with the potential to drive performance management – it is a metric you can manage around. If the SROI decreases, it means that a program is not achieving as large an earnings differential for the client (in terms of poverty alleviation) as another program of equivalent cost.
Additionally, building on gold-standard models from social science research methods, Robin Hood built into its methodology deductions to take into account the counterfactual (what would have happened to a control group), drop-off (how long the effects can plausibly be attributed to the intervention), and attribution (the contribution Robin Hood made to the impact, proportional to its investment) (Weinstein 2009). In the formula above, you can see how attribution figures in: the overall BCR is discounted by the Robin Hood Factor (or RH factor); that is, the proportion of the total program cost that was Robin Hood's investment. Like REDF, Robin Hood also discounted all future benefit flows to present value.
Stakeholder specific – SROI Network-UK
A third approach to SROI, developed by the SROI Network in the UK, offers a flexible methodology that combines many elements of both the REDF and the Robin Hood approaches (see formula below). In contrast to REDF and the Robin Hood Foundation, which pursue SROI from the funder's perspective, the SROI Network-UK methodology can be adopted by either a funder or a social enterprise, but is oriented more toward use by an organization seeking to demonstrate ROI to its stakeholders (as heterogeneous as they may be):
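A plausible rendering of this formula, reconstructed from the elements described in the text (the SROI Network's own guides may use different notation), where \(S\) is the set of stakeholders the organization elects to include:

```latex
\mathrm{SROI} =
\frac{\displaystyle\sum_{s \in S} \mathrm{PV}\Bigl(\text{benefits}_s \times (1 - \text{counterfactual}) \times (1 - \text{drop-off}) \times \text{attribution}\Bigr)}
     {\text{value of inputs from the included stakeholders}}
```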
In this approach, a nonprofit or social enterprise creates a list of stakeholders (any group touched by the activities of the organization). From here, the stakeholder list is narrowed for the organization's SROI according to the organization's own decision rules about which stakeholders to include and why. The valuation of social benefit is calculated per stakeholder and then aggregated. Therefore, the numerator can combine the cost savings for public sector stakeholders and the client benefits by way of the earnings boost. Similarly, for the denominator, the organization can include the costs of inputs supplied by all the stakeholders or by just one key investor, depending on the organization's preference. In this way, the SROI Network, whose website features extensive guides and worked examples for how to proceed through an SROI step by step, serves as a generic guide to SROI that encompasses many of the innovations from both the Robin Hood and the REDF efforts. Like Robin Hood, the SROI Network discounts the benefit for the counterfactual, drop-off, and attribution. Perhaps because of the variability of the approaches possible using its guides, the SROI Network has also spearheaded a set of principles for conducting SROI that include transparency, avoidance of over-claiming, understanding the theory of change under examination, valuing what matters, and verifying results.
Ongoing methodological dilemmas in SROI implementation
Much of the promise of SROI lies in its capacity as a value communicator: it changes the discourse from one of public expenditures and welfare burden to the language of investment and social value creation. It borrows from existing approaches to evaluation (namely CBA, randomized control trials (RCTs), and corporate finance) through an innovative approach that combines elements of each. In doing so, SROI brings the rigor of quantitative, empirical decision making to the social sector. However, this innovative blending of existing methodologies is both a strength and a liability. For those with the resources to adopt SROI, the ongoing hesitation to implement the approach stems from the lack of standards around key aspects of the methodology: scope, methods for valuation (including data collection, calculating the counterfactual, developing proxy measures, attribution, and drop-off), and discounting. Below, we draw on two case studies to highlight the methodological dilemmas that arise at key steps of the SROI calculation and how we resolved them, with attention to the implications for the utility of the SROI metric.
ROI for whom?
The first step in SROI is to determine the set of stakeholders for whom you are valuing the benefits. In its purest sense, the traditional CBA approach considers all the potential stakeholders of an organization and the benefits and costs of a project from these multiple perspectives. With a CBA, the emphasis is on determining whether society as a whole is better or worse off. The benefits and costs to the full set of stakeholders in society are considered, and Pareto efficiency is sought when "it is impossible to find an alternative allocation that makes at least one person better off without making anyone else worse off" (Weimer 2008, 22). As an early example of how far this can go, in the well-known Job Corps CBA study (Long, Mallar, and Thornton 1981), David Long and his colleagues consider the costs and benefits to Job Corps members and to the rest of society. One of the benefits they calculate is the reduction in crime associated with disadvantaged youth participating in the jobs program. This is measured by reduced costs to the criminal justice system and reduced losses to crime victims, offset by the costs to the would-be criminals who no longer benefit from the revenues associated with burglary. For public sector funding decision making, the social accounting approach holds much validity. Government's key stakeholder is society, and in determining the best way to invest public tax dollars, the costs and benefits of a social investment across the heterogeneous subgroups in society are a useful guide for what to pursue.
However, in adapting the CBA approach for SROI, REDF's method of calculating public sector savings faced a number of critiques that the Robin Hood Foundation and the SROI Network-UK learned from and built upon. The first set of issues had to do with aligning the benefit valuation with the mission of the organization. For REDF, early efforts at valuing benefits centered on monetizing the public cost savings (e.g. reduction in certain welfare benefits) that resulted from the job training and employment activities of the work integration social enterprises. While such a social accounting approach mirrors the method of valuing benefits to all societal stakeholders that undergirds traditional CBA, it was not well aligned with the mission of the social enterprises, which were focused on changing the earnings trajectories of the disadvantaged youth or recently incarcerated individuals they targeted in their social businesses, not on saving the public sector money. In some cases, combining work with safety net support yielded the best results for moving out of crisis and into sustained self-sufficiency. The REDF SROI performance metrics did not reflect this; instead, programs whose clients blended wage work in the social enterprises with safety net support had lower SROIs. Further, while public cost savings may be relevant and compelling to government funders, they were not a central motivator for other third party funders, who were more interested in the poverty alleviation mission. REDF struggled with how to adjust a metric that assigned higher value to activities that reduced public spending when reducing public spending was not a motivating goal of the organization or its key stakeholders (Gair 2002).
Note that there is a CBA approach known as "Multi-Account Benefit Analysis" that follows the traditional social accounting methods for CBA but, rather than reducing the calculation of benefits and costs to all affected parties to a single bottom line, produces a matrix of multiple benefit–cost calculations broken out by key stakeholders (environment, consumer, government, etc.) (Shaffer 2010). As we discuss below, the next iterations of SROI did move to narrow benefit–cost analyses to certain stakeholder classes, but did so by eliminating other accounts rather than by producing multiple benefit–cost analyses for multiple stakeholders.
As noted above, as certain large philanthropists (such as the Robin Hood Foundation and the Hewlett Foundation) have taken the lead in applying SROI to their investments, there has been a move to restrict the measurement of benefits to just those accruing to clients. Because foundations and other impact investors are guided by their own particular interests (in Robin Hood’s case, the alleviation of poverty), they are less concerned with Pareto efficiency. Thus, a narrower SROI restricting benefits only to program recipients and restricting costs to third party payment of services tracks more closely to the considerations central to philanthropic decision making. While these innovations on the traditional CBA methodology make sense from the perspective of funders hoping to use SROI to guide their investment decisions, departure from the social accounting method of CBA has led to the development of a multiplicity of methodologies for calculating SROI. In the absence of a standardized methodology, it becomes incumbent on both the purveyor and the consumer of the SROI to understand the nuances in various approaches for calculating the metric.
In our work with JVS, Greater Boston, we reviewed the various SROI methods and eventually decided to pursue the Robin Hood approach. It was appealing because an SROI that measured program impact on client earnings closely tracked the anti-poverty and workforce development mission of JVS and the motivation of its major donors. It is important to note that, per the Robin Hood approach, the benefit calculation simply estimates the potential earnings boost without accounting for the costs clients bear in undertaking the training or for any net benefits to the public sector in terms of welfare transfer savings or tax remittances. In this way, the calculated impacts are not, strictly speaking, social returns according to the economic understanding that undergirds the traditional CBA approach. Internally, this approach was preferred because it allowed JVS to compare similar programs against each other on the most salient metric for the organization: how well the intervention translated into higher earning power for the client group. Interestingly, after the final report on the study was released, public funders expressed interest in including an estimate of public savings (reduced costs of welfare dependency, increased revenue from payroll taxes paid) in the metric as well. As we finished our deliverable, JVS returned to the data to consider how to construct another SROI that would conform to the public funders' interests. This development highlights the tradeoff between using the SROI as a tool for internal management and for communicating with a narrow set of stakeholders, versus using the SROI to communicate value to a heterogeneous set of stakeholders at the cost of a metric with less salience for internal management purposes.
The multiple accounts benefit–cost analysis approach mentioned above, which includes separate benefit–cost analyses for key stakeholders, may serve as a useful model that purveyors of the SROI might find adaptable in their quest to develop both metrics useful for internal performance management and metrics most relevant for key outside stakeholders.
Monetizing the benefits
The second methodological dilemma in constructing the SROI, and by far the most complicated and time-consuming, arises when developing the formula for monetizing the benefits. This is a two-part step that requires, first, a decision on a set of definitive outcomes to value and, second, a procedure for deriving a monetary estimate of the value of each outcome. Even with a narrow focus on anti-poverty programs, the Robin Hood Foundation's portfolio funds a wide range of interventions, from education, housing, and job training to basic emergency food, shelter, and healthcare. One challenge Robin Hood faced in implementing the BCR as a metric to compare impact on earnings across all programs was that the social value created by some programs (take health improvements, for example) did not translate as easily into an estimate of impact on earnings as that of other programs (job training, for example). The quest for a narrowly defined benefit that would allow commensurability in analysis of the poverty fighting potential of programs across portfolios bumped up against the reality that the link between a specific anti-poverty intervention and an earnings boost is more relevant for some interventions than others. However, the Robin Hood Foundation's overall aim was commensurability, and the shift to using the BCR forced some difficult decisions, including canceling funding to popular programs based on these outcomes. Another internal outcome was the increasing complexity and sophistication of the benefit calculation itself over time, especially as programs within different portfolios realized they had to use this metric to make the case for the worthiness of investment in their health or emergency food programs (Ebrahim and Ross 2010). Even when working with an intervention (e.g.
job training or education) that lends itself to the earnings boost calculation, we have found that it can still be challenging to create an accurate value estimate with some sensitivity to changing conditions. In our work with JVS, where we calculated the SROI for a set of job training and education programs, the calculation appears straightforward: for each client, measure the difference between pre- and post-program wages and divide by the cost of the program. However, as we discovered, even with a seemingly straightforward valuation, the analyst is confronted with a series of issues that must be addressed:
A sound SROI is entirely dependent upon the availability of good data. The gold standard, of course, is an RCT study; however, few programs can afford this investment. Even when implemented, the RCT is a static finding, not one that can be used for performance management on an ongoing basis. SROI methodologies mimic RCT protocols by estimating the value of a program as the difference between pre- and post-program earnings after accounting for dropouts and control group performance. Even at JVS, which has a highly sophisticated data collection system, robust data at every relevant data point were lacking. Baseline data on clients' earnings at intake were particularly hard to come by: in one program under study, the Skills program, which trained unemployed and underemployed individuals in industry-specific skills in nursing and the Culinary Arts, only 5% of clients had baseline wage data. This is in part because a high percentage of clients were unemployed at intake. Intake data are also almost always self-reported, whereas post-intervention wages tend to be verifiable.
For the Skills program at JVS, we were fortunate to have access to a recently completed RCT conducted on the JVS Skills programs. This work was done as part of the Sectoral Employment Impact Study, a national study of sectoral employment conducted by Public Private Ventures (PPV) in the early 2000s. Many if not most nonprofits and funders would likely not have access to data from such an expensive and sophisticated randomized control study. In the JVS SROI study of Skills, we were able to draw on agency data, taking the average weekly hours from the cross-sectional data the agency had at baseline (17.75 hours) and average wages ($9.80/h) to construct an estimate of average weekly income ($173.95). We then used the average yearly length of employment from the PPV study findings (5.5 months), in which study participants reported retrospectively on the previous year's wages, hours, and weeks of work, to construct a baseline annual earnings estimate ($173.95 * 25 weeks = $4,349/year). But, as detailed below, other programs that we worked on required us to develop proxy measures for earning potential at baseline and post-intervention to approximate program impact.
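The baseline construction just described amounts to simple arithmetic; a minimal sketch, using the figures reported above:

```python
# Skills program baseline earnings estimate, using the figures reported above.
avg_weekly_hours = 17.75      # agency cross-sectional data at baseline
avg_hourly_wage = 9.80        # agency cross-sectional data at baseline ($/h)
weeks_worked_per_year = 25    # from PPV retrospective reports (~5.5 months)

avg_weekly_income = avg_weekly_hours * avg_hourly_wage  # $173.95
baseline_annual_earnings = round(avg_weekly_income * weeks_worked_per_year)  # $4,349

print(f"Average weekly income: ${avg_weekly_income:.2f}")
print(f"Baseline annual earnings: ${baseline_annual_earnings:,}")
```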
Further, because most nonprofits would not have access to randomized controlled studies, another option, widely used in CBAs, is to use proxy measures or approximations. For example, in the JVS study, when calculating the SROI for the Bridges to College program, an initiative that provided classes and college entrance preparation, we had larger data gaps and no randomized controlled study from which to draw inferences.
We also had no agency data on pre- or post-wages, hours, or length of employment. We did have demographic data (gender, race, ethnicity, and education credential at baseline), and we had data on degrees completed and time to degree completion. In developing proxy measures, this kind of data is critical for tying the estimates to the specific client population.
With demographic data in hand, an early approach we considered involved using census data, matching our demographic data to mean wages reported by the census by level of degree attainment, pre and post, for all Bridges program participants. However, good proxy measures have to be sensitive enough to capture the heterogeneity of the client population, and this approach did not achieve that for the JVS client pool. There are two cohorts of clients in the Bridges program: one group is considered "pre-employment" (meaning largely unemployed), while a second pool of incumbent workers is employed, usually full time, but needs or wants an education credential to remain employed and/or to be promoted. These two groups have very different baseline incomes and therefore different income boosts. Not only did a credible SROI for this program demand more accuracy, but JVS, as an early and very high-profile adopter of the SROI in the greater Boston community of workforce development organizations, needed a metric that would potentially allow for comparisons across organizations. A blunt approach rooted in the Census would not adequately distinguish NPOs serving very high functioning, low need populations from those engaged in the important but often more costly efforts to work with high need populations.
We proceeded with proxy measures, based in part on census data, but we took pains to calculate two different baselines using secondary data and ran a sensitivity analysis of multiple proxy approaches to refine our estimates. First, for the pre-employment group (largely unemployed at baseline), we started with anecdotal data from JVS staff indicating that a typical wage for anyone working in this group averages about $9.50/hour in entry-level service jobs. Since we had no data on average hours worked, we note that FT/FY hours (the most possible) at this wage would yield $19,760. We then explored using Census data estimates of baseline earnings by demographic characteristics and level of educational attainment. However, because JVS serves a large foreign-born population, many of the pre-employment clients were non-English-speaking refugees who may have earned college degrees or completed courses in their native countries but were struggling to translate those credentials into US earnings. Indeed, JVS staff cautioned that you could have a client with a foreign law degree who is driving a cab part-time in the US. Of the pre-employment client population, we found that 65% of those who had been enrolled in college or held an AA or certificate spoke English as a second language, and 91% of those with a college degree at baseline spoke a foreign language as their primary language.
To address this disconnect between the actual earnings and educational attainment of the foreign born, we used the lower-bound American Community Survey (ACS) estimates of earnings by educational attainment for PT/FT work, limited college earnings to "some college" to account for the foreign diploma, and subtracted for all clients the ACS earnings penalty of $989/year for those who speak English as a second language "very well" (September, 2011), yielding an average baseline earning of $20,580/year. Again, even more so than with the Skills program above, this figure is not meant to be a close estimate of what pre-employment clients were actually earning when they enrolled in JVS's program, but rather an approximation of the best-case earnings scenario for the cohort, given degree attainment and ESL status. To test the robustness of this estimate, we calculated a second version of pre-employment baseline earnings based on the federal poverty threshold for a family of four, given that 95% of pre-employment families are at or below the FPL; it also fell close to the $20,000/year range. We went with the higher of the two estimates and then inflated the post-education earnings side equivalently, using FT/FY census average earnings by education for the BA and AA and Community College data to estimate the best case for post-intervention earnings for the pre-employment cohort. For the incumbent workers, JVS surveyed the employers to construct average pre-earnings ($37,352) and post-earnings ($49,608). After all this work, we had the pre- and post-earnings estimates needed to construct the income boost benefit calculation, which included a higher earnings boost for the higher need group (see Table 1).
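The pre-employment baseline proxy can be sketched as follows. Note that the ACS lower-bound figure below is a hypothetical back-calculation from the reported result ($20,580 + $989); the text reports only the penalty and the final average:

```python
# Pre-employment baseline proxy for the Bridges to College SROI.
hourly_wage = 9.50                        # staff-reported entry-level wage ($/h)
ft_fy_hours = 40 * 52                     # full-time, full-year hours
wage_ceiling = hourly_wage * ft_fy_hours  # $19,760: most possible at this wage

implied_acs_estimate = 21_569             # hypothetical: implied by reported result
esl_penalty = 989                         # ACS earnings penalty, ESL "very well"
baseline_estimate = implied_acs_estimate - esl_penalty  # $20,580/year

print(wage_ceiling, baseline_estimate)
```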
These efforts had a nice payoff for JVS over time: if the proportions of higher need pre-employment clients versus lower need incumbent workers served in the program shift, the SROI will also change to reflect those proportions accurately.
The next step in the SROI calculation mimics RCT protocols by adjusting the baseline to account for the counterfactual (what would have happened to the clients without the program?). Both Robin Hood and SROI Network-UK include a series of counterfactual discounts. Typically, arriving at the appropriate counterfactual deduction requires extensive review of the secondary literature and in-depth knowledge of findings from the most recent RCT studies on the topic, or the next best source of data. Following this approach, for the Bridges to College SROI, we took a 10% discount for those with degrees completed, based on research showing that 3% of those with a GED will earn an AA degree and 9% of those in the lowest income quartile will earn a 4-year degree. For those still enrolled, we included an additional 20% discount to account for dropping out of the program, based on historical trends in the JVS data (see Table 1).
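These discounts apply multiplicatively to the monetized benefit; a sketch with a purely hypothetical benefit figure (the text reports only the discount rates):

```python
# Bridges to College counterfactual and drop-out discounts.
hypothetical_benefit = 10_000        # placeholder monetized earnings boost ($)
counterfactual_discount = 0.10       # degree attainment likely without program
dropout_discount = 0.20              # historical JVS drop-out trend

# Clients with degrees completed: counterfactual discount only
benefit_completed = hypothetical_benefit * (1 - counterfactual_discount)

# Clients still enrolled: counterfactual plus drop-out discount
benefit_enrolled = benefit_completed * (1 - dropout_discount)

print(round(benefit_completed), round(benefit_enrolled))  # 9000 7200
```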
For the Skills program, the PPV study showed that control group baseline earnings increased by 71% by the end of the study. We therefore raised our baseline estimate by 71%, to $7,437, and assigned that figure as the potential earnings possible without the JVS intervention. The post-earnings were based on agency data and revealed an earnings difference of about $11,018 for those who worked the full 12 months as Certified Nursing Assistants and about $6,404 for those who worked less than 12 months, and of $12,650 and $7,628, respectively, for those graduating from the Culinary Arts program (see Tables 2 and 3). Because it relies on relevant RCT study findings and national datasets, the SROI builds upon other evaluation activity in the field, and its quality is contingent on these related empirical data sources. The JVS workforce development programs were implemented in a large US city with a very active research community; a program operating in a less actively researched geography may have to rely on secondary data less suited to support the assumptions in its SROI.
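The 71% counterfactual uplift above amounts to inflating the baseline by the control group's observed earnings growth. The sketch below back-solves the pre-adjustment baseline from the $7,437 figure given in the text, so that starting value is a derived approximation rather than a reported number.

```python
# Counterfactual uplift for the Skills program, as described in the text:
# the PPV control group's earnings grew 71% over the study period, so the
# baseline is inflated by that factor. The pre-adjustment baseline below is
# back-solved from the $7,437 figure and is therefore approximate.

CONTROL_GROUP_GROWTH = 0.71  # 71% earnings growth in the PPV control group

pre_adjustment_baseline = 7437 / (1 + CONTROL_GROUP_GROWTH)      # ~ $4,349
counterfactual_earnings = pre_adjustment_baseline * (1 + CONTROL_GROUP_GROWTH)

# The program's net income boost is then measured against this counterfactual
# (e.g. ~$11,018 for CNAs working a full 12 months, per agency data).
print(round(counterfactual_earnings))
```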
Drop-off and attribution
In addition to discounting for the counterfactual, both Robin Hood and SROI Network-UK include a series of discounts to the benefits calculations to address drop-off, or the percentage of clients who might drop out of the program (in projecting future benefits for those still in early stages of training), and attribution (what proportion of the benefits can the third party payer claim as related to its contribution to the intervention, and for how long into the future?). If the data are available, the dropout percentage can be calculated from the trends of recent cohorts, or alternatively, if the data are not available, from recent randomized, control group studies on a similar population. There are fewer standards in the field for calculating the duration of benefits over time, either in terms of the proportion of benefits claimed or the time horizon for impacts. Robin Hood calculates all their SROIs in terms of lifetime earnings trajectories, allowing them to compare the potential income gains from an early education intervention with those from a job training program for disconnected youth, based on the best and latest economic and social science research in the field. Again, these estimates rely on the availability of recently completed longitudinal studies performed in the same or a similar labor market for the assumptions to hold.
Initially, building on the SROI methodology developed by the Robin Hood Foundation (Weinstein 2009), we planned to produce a 30-year SROI for each of the JVS programs. However, the JVS Board of Directors rightly had several misgivings about the 30-year time horizon for benefits. The Board wanted to see 1-, 2-, 5-, and 10-year SROIs for each program so that the longer 30-year horizon could be put into perspective. In the end, we used the same time horizons for all the programs, providing an SROI for years 1, 2, 5, and 10 as the Board requested. In calculating these SROIs, we used agency data for years 1 and 2 and assumed that the SROIs captured in those years hold constant going forward. This assumption is commonly applied in CBAs and can illustrate at what point a program begins to break even and/or create value.
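The multi-horizon logic above, under the stated assumption that the year 1-2 SROI holds constant going forward, can be sketched as follows. All dollar figures here are hypothetical stand-ins, not JVS data; the point is the break-even mechanics, not the magnitudes.

```python
# Hedged sketch of the multi-horizon SROI logic: the benefit level observed
# in years 1-2 is assumed to hold constant, and the break-even horizon is
# where cumulative benefits first cover the investment.
# All figures are hypothetical, not JVS data.

def sroi_by_horizon(annual_benefit: float, total_cost: float,
                    years: list) -> dict:
    """Cumulative monetized benefit over each horizon, divided by cost."""
    return {y: (annual_benefit * y) / total_cost for y in years}

ratios = sroi_by_horizon(annual_benefit=250_000, total_cost=500_000,
                         years=[1, 2, 5, 10])
# Break-even: the first horizon where the ratio reaches 1.0 (here, year 2).
break_even = min(y for y, r in ratios.items() if r >= 1.0)
print(ratios, break_even)
```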
As the study results went to press, we refined the duration-of-impacts question, aiming to clarify distinctions about the impact generated by each program. As recommended by the SROI guide published by the SROI Network in the UK in 2012, we reworked the report to emphasize the importance of accounting for duration of impacts (Nicholls et al. 2012). It is well accepted that attaining a college degree (in this case via the Bridges program) changes lifetime earnings trajectories, so for Bridges to College a 30-year time horizon was appropriate. However, job training (Skills) and rapid employment (the JVS Refugee Assistance program) lack the same “lifetime” trajectory change of a college degree. Without accounting for duration of impact, the SROIs were fairly close together in value, and our approach seemed to overstate the long-term impact of the Refugee employment program (see Table 4 for its SROI calculation) and to undervalue the Bridges education investment. We therefore adjusted our estimates so that, within the 30-year horizon, benefits of shorter duration were calculated for only 5 years and then zeroed out. As a final step incorporating all of this thinking, we developed an aggregated, cross-program, JVS-organizational-level SROI (see Table 5). Including the duration of benefits more clearly allows JVS to compare programs on social value creation and to more accurately predict an aggregate JVS SROI across programs. The aggregate SROI is calculated by taking the total monetized benefit across programs, with monetized values that reflect varying impact horizons, and dividing by total costs. This yields a cross-program JVS SROI of $20:$1, meaning that for every dollar invested in JVS, a social return of $20 is created.
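The aggregation step above (duration-capped benefits across programs, divided by total costs) can be sketched in a few lines. The program names mirror those in the text, but every dollar figure and annual benefit here is illustrative; this is not the calculation behind the reported $20:$1.

```python
# Sketch of the aggregate, cross-program SROI described in the text: each
# program's annual benefit is counted only for its impact duration (30 years
# for the degree program, 5 years for shorter-duration interventions), then
# total benefits are divided by total costs. Figures are illustrative only.

programs = [
    # (name, annual_benefit, duration_years, cost)
    ("Bridges to College", 400_000, 30, 600_000),
    ("Skills Training",    300_000,  5, 400_000),
    ("Refugee Employment", 200_000,  5, 300_000),
]

total_benefit = sum(benefit * duration for _, benefit, duration, _ in programs)
total_cost = sum(cost for _, _, _, cost in programs)
aggregate_sroi = total_benefit / total_cost
print(f"${aggregate_sroi:.1f} : $1")
```

Capping the shorter-duration programs at 5 years is what keeps the degree program's lifetime trajectory from being swamped in the aggregate, which is the distinction the reworked report was designed to surface.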
Although Robin Hood as a foundation must adjust their benefit calculation (what they call the “Robin Hood factor”) according to the proportion of their investment (i.e. they may fund 10% of a program), because JVS was allocating all of the resources, we did not make such a proportional adjustment.
Present value discount
Finally, the lack of standards for applying discounting principles, drawn from both public accounting and corporate finance, to discount the future social value flow to a present value is another area where the current array of SROI methodologies falters. For public funders, using the risk-free rate as the discount rate for future social value is fairly standard practice in CBA; thus applied, the discount rate represents the opportunity cost of the investment in a chosen social initiative. Leading academics in the field have recommended that, given the lack of alternative guidelines for discounting, the risk-free rate be adopted for SROI discounting as well (Olsen and Nicholls 2005). In the JVS project, we followed this approach and used the 30-year US Treasury bond rate as the discount rate. Adopting the risk-free rate follows the approach used in the public sector for CBAs and helps create a uniform standard for the messy business of discounting social value.
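The discounting step above is the standard present-value calculation: each future year's monetized benefit is divided by (1 + r)^t. The 3% rate and the flat $10,000 benefit stream below are illustrative stand-ins, not the actual 30-year Treasury rate or a JVS benefit flow.

```python
# Standard present-value discounting at a risk-free rate, as described in
# the text. The rate and benefit stream are illustrative only.

def present_value(benefit_stream: list, risk_free_rate: float) -> float:
    """Discount a stream of annual benefits (year 1, year 2, ...) to today."""
    return sum(benefit / (1 + risk_free_rate) ** t
               for t, benefit in enumerate(benefit_stream, start=1))

# Five years of a constant $10,000 benefit at a 3% risk-free rate.
pv = present_value([10_000] * 5, 0.03)
print(round(pv, 2))  # somewhat less than the undiscounted $50,000
```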
However, for high net worth individuals moving from the financial sector into philanthropy or impact investing, the current practice of discounting in the world of social value creation may prove frustrating. While the risk-free rate may adequately estimate the opportunity cost of not investing the money in the safest possible alternative, applying it uniformly to all projects forfeits the analytic leverage that the discount rate provides in a corporate finance context, where discount rates typically include a risk premium, with steeper discounting for higher risk projects. For example, risk premiums built into the discount rate might be useful when considering the social impact of two different climate change interventions, one with potentially much larger impact than the other but relying on riskier technology. An investor for whom social impact is the bottom line would presumably want a way of discounting to present value that allows for steeper discounts for the riskier project, enabling more accurate and compelling comparisons across investments (see Olsen and Nicholls 2005 for a description of a discounting approach that does take the differential risk of social returns into account).
Fragility of the SROI calculation: Yale MBA assignment
To highlight the inherent instability in SROI estimation at this stage of the field’s development, we present a short overview of results from a Yale School of Management MBA class assignment. Five groups of Yale SOM students calculated SROIs for the same case with a defined set of stakeholders, and the results show how small methodological decisions about the list of benefits ascribed to each stakeholder, cost allocations, and how to calculate the discount rate can produce very different SROIs even within narrow parameters (see Table 6). The assignment was to calculate an SROI for the social business enterprise project of a workforce development organization called Asian Neighborhood Design (AND). The case described an initial investment in the social enterprise businesses, and the exercise asked students to calculate an SROI focused on the returns to the social goals, namely work integration and workforce development for disadvantaged program users (primarily ex-offenders, formerly homeless individuals, and single mothers working their way off welfare). All five teams narrowed the focus of their valuation to benefits accrued to the same two stakeholders (the government and the program users). Most teams calculated the benefits to these stakeholders based on welfare savings to government (figures provided in the case), income taxes paid by users to the government (now that they are employed in the AND businesses), and earnings paid to users from their employment.
In addition, one team included the savings to the municipality for those no longer in training, based on per capita job training estimates publicly available for that time period (assuming that, were they not working in AND’s businesses for wages, they would be participating in a traditional public job training program). Another team recalculated all the welfare savings to include city-level savings to General Relief as well as the federal benefits, and declined to include projected user income taxes, citing the fact that workers earning below certain wages pay no income tax. This team also broke out the welfare benefits by user population and weighted the public benefit bundles based on the projected program enrollment of each type of user.
In a traditional CBA approach, these social benefits would be calculated net of the social costs to both government and consumer groups. By contrast, in the blended value approach to SROI developed by REDF for work integration social enterprises and used by the students in this assignment, the benefits to the public sector and to workers are calculated net of only the social costs of training incurred by the social purpose business itself. Under the CBA method, the increase in taxes paid to government would be canceled out by the costs borne by the consumers paying them. For the SROI, these government tax receipts, reduced welfare payments, and increased earnings for workers are instead summed on the benefit line. For both the REDF and the Robin Hood SROI methodologies, the costs are narrowly focused on the investment in the businesses or charities, with the aim of calculating a return on that investment rather than a net societal benefit.
In the Yale SOM student projects, there were slight differences in cost allocations across the teams. Some included only half the initial investment in the businesses to account for their dual goals (business and social – this was justified in the case, as the organization also calculated a financial ROI for the businesses), and teams varied in whether they included subsidies from foundation grants to the businesses as part of the cost. From the traditional CBA perspective, the foundation grants would lower the costs to the private investor in the business, but from the societal perspective they would simply constitute a shifting of costs. The SROI methodologies could be interpreted similarly; however, the range of approaches and the discretion available within them is reflected, one could argue, in the students’ confusion about which approach to take. The case provided figures going out 8 years, and most teams adopted this time horizon, but one student projected further to match the time horizon with her attribution-of-impact estimates.
Finally, all teams followed approaches to calculating the discount rate found in the literature, but each calculated it differently, which highlights the lack of a standard method. The basic recommendation in the literature for discounting future benefit flows in an SROI is to use the risk-free rate, which in the US would be either the Treasury note or Treasury bond rate (which a 2005 publication lists as typically ranging from about 2.5 to 5%; Olsen and Nicholls 2005). Even within this narrow standard, one team interpreted 2.5–5% itself as the standard rate, while the other teams applied either the 10-year T-note or the 30-year T-bond rate. Further, among those choosing the 10-year T-note rate, some simply picked the rate for the first month of the year, while another averaged the monthly published T-note figures over the 12 months. Finally, one team member followed the approach for calculating the cost of capital, including both the risk element of the weighted cost of capital calculation and a discount for inflation using the CPI index to account for the inflationary aspect of the time value of money. As the final SROI estimates indicate (see Table 6), all of these small decision points resulted in large differences in the final SROI, which ranged from a return of $1.16 for every dollar invested to a return of over $6 for the same project.
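The sensitivity the student teams ran into can be made concrete: the same benefit stream yields noticeably different SROIs depending on whether a low Treasury-style rate or a risk-premium-loaded rate is applied. The rates and cash flows below are illustrative, not the students' actual figures.

```python
# Illustration of how the choice of discount rate alone moves the SROI.
# Rates and cash flows are hypothetical, not the students' actual figures.

def discounted_sroi(annual_benefit: float, years: int,
                    cost: float, rate: float) -> float:
    """Present value of a constant annual benefit stream, divided by cost."""
    pv = sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))
    return pv / cost

# A T-note-like rate, a mid-range rate, and a risk-premium-loaded rate,
# applied to the same 8-year benefit stream (the case's time horizon).
for rate in (0.025, 0.05, 0.12):
    print(f"{rate:.1%}: {discounted_sroi(20_000, 8, 50_000, rate):.2f}")
```

Even this single parameter moves the ratio substantially; layering in the teams' differing benefit lists and cost allocations explains how the final SROIs could span $1.16 to over $6 for the same case.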
At a time when both public sector funders and philanthropic donors have increased their demand for performance measurement, the SROI promises the potential to compare across interventions and direct funding to programs and organizations with higher performance outcomes. For nonprofits and social enterprises, the SROI communicates the social value created by the organization in the language du jour – the language of impact and of investment. Further, the SROI can enhance internal performance management, as the exercise can illuminate performance differentials across programs in ways that might not otherwise be apparent. For example, the final SROIs for two skills-based training programs – one preparing currently unemployed or underemployed low-skilled workers for jobs in the culinary arts, and the other preparing clients for jobs as certified nursing assistants – illustrated that the CNA training program outperformed the Culinary Arts program in terms of graduates’ earnings trajectories, even though starting wages out of training were lower for the CNAs. This was due, in part, to more frequent salary upgrades and higher retention over time for the CNAs, trends that the SROI made visible.
As reviewed above, the methodological dilemmas highlighted in this paper (Which stakeholders to include? How to monetize social value? How to determine counterfactual, duration, attribution? What rate to choose to discount future value flows to present value?) have been flagged and wrestled with in publicly transparent ways by the early champions of the SROI approach, all of whom have been open about not only their methods but also their challenges along the way. This field building work has resulted in an increasingly refined set of standardized approaches for performing the analysis, developed by organizations like REDF, Robin Hood, and SROI Network-UK and shared with the rest of us. However, the Yale SOM student assignment illustrates that even when assessing the same case, small variances in interpretations of how to approach the calculations can result in big differences in the final SROI. Additionally, given differences in cost calculations, the social returns of an SROI may vary considerably from those resulting from a traditional CBA. Even if funders set guidelines to reduce this interpretive license, such as restricting the scope of valuation, the allocation of cost, and the discount rate, the JVS case highlights the many micro-decisions that shape the calculation, particularly when agency data are thin. Standardization at this level of decision making would be much harder for a funder to implement.
Secondly, and perhaps more seriously, SROIs do not provide any assurance of actual impact. While discounts for the counterfactual (what would have happened had the participant not attended the program) are part of the methodology and are based on RCT studies of similar program interventions with similar populations in similar settings, the lack of success in replication studies of even the most successful programs (Miller et al. 2005) suggests caution in accepting the assurances that such counterfactual deductions provide. Further, this approach to estimating a counterfactual discount ascribes impact to current efforts using control group findings from studies conducted in the past. At the very least, the assumptions need rigorous updating based on ongoing RCT studies in the area of intervention.
Finally, and most importantly, because RCT studies on the focal program itself are not typically the basis for the SROIs, it must be remembered that for all the rigorous manipulation of data and evidence required to calculate the SROI metric, it is best understood as an approximation of impact, and interpreted accordingly. Although in the JVS case the estimates of impact stayed close to the data and could draw on a highly relevant, recently completed RCT study, these estimates are not meant to be interpreted as commensurate with or comparable to the findings of an RCT study. This fact – that the valuation methodology relies on careful, rigorous, empirical work to arrive at a very precise quantitative metric meant to be interpreted qualitatively as a metaphor for impact – remains one of the more confusing and controversial aspects of the SROI approach.
Funders considering the SROI tool have to decide whether the payoff for rendering comparisons across potential social investments is worth the investment in the learning curve to develop standards, in the training necessary to staff the evaluation unit with employees with deep knowledge about the limitations of the SROI metrics and how to interpret them, and in the infrastructure required to effectively support prospective grantees in adopting the SROI reporting requirements. For the nonprofit or social enterprise, adoption of the SROI metric also requires some reflection on the tradeoffs between the effort involved to develop the correct ratios according to an SROI approach that best meets the organization’s mission, and the challenge of communicating the value of the SROI in a landscape where the variance between SROI approaches requires extensive interpretation and methodological explanation for organizational stakeholders. Increasingly, philanthropic donors and individual social investors may be tasked with the challenge of understanding the differences among SROI metrics across the various organizations they support, each calculated according to a slightly different approach.
This is all the more problematic because of the nature of the SROI metric itself. After all the carefully considered analysis, the final SROI is reported in a single dollar amount returned for each dollar invested; this simple pecuniary metric connotes a precision that masks the fragility of its calculation. The innate power of reducing a complex phenomenon to a monetary bottom line cuts both ways. If, for example, in the coming months, a competitor delivers a less conscientiously constructed SROI that results in a $30:$1 return, or if the JVS refines their own method over time as the SROI field evolves, it will be much more difficult to provide a concise explanation about why these numbers look different when the detail about the study we will most stubbornly retain is the bottom line $20:$1 SROI.
With all of these caveats in mind, recent events highlight the power of the SROI as a metaphor for articulating impact. Shortly after the release of the SROI study, JVS was approached by Social Finance to see if they were interested in pursuing a Social Impact Bond opportunity. Once an RFP was announced last spring, the JVS/Social Finance team presented a proposal to the Commonwealth of Massachusetts using much of the work of the SROI as a basis for their application. Just a few weeks ago, JVS learned that they were awarded the nation’s first Pay For Success/Social Impact Bond for adult education/workforce development. It is a $15 million project allowing JVS to expand services to 3,000 additional clients.
The SROI provides an approach to measuring impact that is of interest to nonprofits and their third party funders, all of whom continue to search for performance measurement metrics that lend themselves to ongoing performance management. Given the ubiquity of logic modeling, SROIs are attractive since they essentially build on the logic model by dividing monetized benefits (outcomes) by the cost of inputs. Further, in a scarce funding environment where government support for social programs is under budgetary pressure, SROI holds additional appeal by translating social value into economic terms and providing an investment framing that appeals to third party funders. However, as the JVS case study and the Yale student case studies highlight, there can be serious methodological challenges to monetizing benefits, and even thoughtfully developed, empirically based estimates of SROIs done by smart, committed individuals working off the same data can result in very different final numbers. For the funder or the nonprofit considering the SROI as an option for performance evaluation that draws on an investment framework, may this article serve as both a guide to the landscape of SROI and a review of the key challenges of going down this path at this stage in the field’s development.
Brest, P. 2012. “A Decade of Outcome-Oriented Philanthropy.” Stanford Social Innovation Review Spring:42–47.
Brest, P., H. Harvey, and K. Low. 2009. “Calculated Impact.” Stanford Social Innovation Review Winter:50–56.
Collins-Camargo, C., B. McBeath, and K. Ensign. 2011. “Privatization and Performance-Based Contracting in Child Welfare: Recent Trends and Implications for Social Service Administrators.” Administration in Social Work 35(5):494–516.
Cooney, K., and K. Lynch-Cerullo. 2012. “Social Return on Investment: A Case Study of JVS.” Boston: Jewish Vocational Services, Greater Boston.
Ebrahim, A., and C. Ross. 2010. “The Robin Hood Foundation.” Harvard Business School Case #310031.
Gair, C. 2002. “A Report from the Good Ship S.R.O.I.” San Francisco: Roberts Enterprise Development Fund (REDF).
Lynch-Cerullo, K., and K. Cooney. 2010. “Moving From Outputs to Outcomes: A Review of the Evolution of Performance Measurement in the Human Service Nonprofit Sector.” Administration in Social Work 35(4):364–88.
Miller, C., J. M. Bos, K. E. Porter, F. M. Tseng, and Y. Abe. 2005. “The Challenge of Repeating Success in a Changing World: Final Report on the Center for Employment Training Replication Sites.” New York: MDRC.
Nicholls, J., E. Lawlor, E. Neitzert, and T. Goodspeed. 2012. “A Guide to Social Return on Investment.” UK: The SROI Network.
Olsen, S., and J. Nicholls. 2005. A Framework for Approaches to SROI. San Francisco, CA: SVT Consulting.
Shaffer, M. 2010. Multiple Account Benefit-Cost Analysis: A Practical Guide for the Systematic Evaluation of Project and Policy Alternatives. Toronto: University of Toronto Press.
Weimer, D. (ed.). 2008. Cost-Benefit Analysis and Public Policy. New York: John Wiley & Sons.
Weinstein, M. 2009. Measuring Success: How Robin Hood Estimates the Impact of Grants. New York: Robin Hood Foundation.
About the article
Published Online: 2014-09-27
Published in Print: 2014-10-01