Not-for-profit (NFP) organisations are often both innovative in the way they deliver services and responsive to the needs of clients. As a result they are increasingly providing social services that were previously the domain of the public sector.

Coupled with the rise of social impact investing and the competition of ideas that it creates, NFPs are being asked by funders — both government and philanthropic — to demonstrate the economic value they generate.  

Instead of embracing established impact and economic evaluation tools, some NFPs have latched on to an increasingly popular concept known as social return on investment (SROI).

SROI is sometimes advanced as a legitimate addition to the portfolio of economic evaluation methods. It is not. It has been suggested in some quarters that SROI should be used in the public sector to assist in spending decisions. This would be a mistake.


SROI has been described by a practitioner as “a form of stakeholder-driven evaluation blended with cost-benefit analysis tailored to social purposes”. An SROI analysis rests on an NFP’s stakeholders providing intuitive opinions on the impact of a program or intervention the organisation has run for its beneficiaries. A financial value is then assigned to the changes stakeholders have identified, using a combination of stakeholder perceptions and reference values.

The liberal misappropriation of the language, but not the rigour, of mainstream economic evaluation lends SROI credibility among casual observers that it does not deserve.


An SROI and a cost-benefit analysis (CBA) share a common goal: both seek to monetise the benefits of an intervention and compare them with the costs of implementing it. However, the method by which an SROI attributes and monetises benefits and assigns costs bears no comparison to that of a rigorous CBA.

SROI practitioners engage stakeholders in the cost and benefit assignment process, arguing that this ‘increases the depth of analysis’ because it ‘engages more broadly with those experiencing any change than traditional cost-benefit analysis’.

None of these estimates are subject to empirical test or verification. What is deemed to be important is what stakeholders themselves feel is the impact and value of the program.

In some instances SROI practitioners may attempt to moderate the estimated values against reference databases, but there is little evidence that their judgements are any more objectively accurate than the stakeholders’. Nor do they limit the benefits to tangible cost savings from reduced expenditure on public services.

From psychological and economic research, we know that this sort of introspection is subject to biases, especially optimism bias, wish-fulfilment and failures of counterfactual reasoning.


The purpose of economic evaluation is to enable decision makers to make the best use of funds by allocating resources — which may be time, effort or money — to the alternatives that generate the optimal economic result. Deciding how to allocate resources involves examining a range of criteria, including value for money and risk of failure, as well as meeting ethical and equity expectations. SROI cannot provide guidance on any of this.

The optimal result will combine several attributes: the outcome meets a societal standard of value for money; it is a secure investment whose benefits reliably exceed costs; and it meets ethical and equity expectations.

Sometimes the best idea might be found in another service area, for example, some early childhood interventions have been shown to reduce interactions with the criminal justice system in adulthood.

An SROI methodology has no capacity to capture this, because it assumes that stakeholders possess the scope of vision to recognise a better use of resources in another policy area.

In psychology and medicine, meta-analysis was developed to apply consistent methods for estimating the average effects of interventions rather than relying on a clinician’s self-interested best guess about what works. 

Pseudo-economic analyses are not a substitute for, or a means of recovering from, inadequate experimental methods. Regardless of the economic methodology advanced, if the assessment of a policy intervention did not involve randomly allocating clients to either a treatment or a control group — a randomised controlled trial — then no form of economic analysis of the outcomes will be valid.
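The allocation requirement is mechanical rather than mysterious. Here is a minimal sketch in Python (the outcome function and all numbers are invented, purely for illustration) of randomly assigning clients to the two groups and estimating the average treatment effect:

```python
import random

def run_trial(clients, outcome_fn, seed=0):
    """Randomly allocate clients to treatment or control groups,
    then compare mean outcomes. outcome_fn(client, treated) returns
    the measured outcome for that client."""
    rng = random.Random(seed)
    pool = list(clients)
    rng.shuffle(pool)                     # random allocation is the whole point
    half = len(pool) // 2
    treatment, control = pool[:half], pool[half:]
    t_mean = sum(outcome_fn(c, True) for c in treatment) / len(treatment)
    c_mean = sum(outcome_fn(c, False) for c in control) / len(control)
    return t_mean - c_mean                # estimated average treatment effect

# Hypothetical outcome: clients vary in baseline (c % 10) and the
# program adds 2 units on average for treated clients.
effect = run_trial(range(1000), lambda c, treated: (c % 10) + (2 if treated else 0))
```

Only because the allocation is random can the difference in group means be read as the program’s effect; stakeholder recollection after the fact provides no such guarantee.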

Finally, not every outcome needs to be monetised. Provided appropriate experimental methods have been used, it should be completely satisfactory to show that participants are more content or happier than their control-group peers. Appealing to a ‘better’ economic outcome risks missing the point completely.


Economists and social policy analysts have made great strides in deriving plausible monetary values for social outcomes in the last decade. Most economists reviewing spending decisions are content to calculate measures like net present value (NPV), the benefit-cost ratio and an estimate of the probability that the NPV falls within a specified range. Ethical and equity considerations are the province of politicians, representing the public domain.
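Those measures are straightforward to compute once benefit and cost streams exist. A minimal sketch, with invented figures (a $100,000 up-front cost and uncertain annual benefits), of NPV, the benefit-cost ratio and a Monte Carlo estimate of the probability that the NPV is positive:

```python
import random

def npv(benefits, costs, rate):
    """Net present value of annual benefit and cost streams."""
    return sum((b - c) / (1 + rate) ** t
               for t, (b, c) in enumerate(zip(benefits, costs)))

def bcr(benefits, costs, rate):
    """Benefit-cost ratio: present value of benefits over costs."""
    pv_b = sum(b / (1 + rate) ** t for t, b in enumerate(benefits))
    pv_c = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
    return pv_b / pv_c

# Invented program: $100k cost in year 0, uncertain benefits of
# roughly $30k a year for five years, 5% real discount rate.
costs = [100_000, 0, 0, 0, 0]
rng = random.Random(42)
draws = [npv([rng.gauss(30_000, 5_000)] * 5, costs, 0.05)
         for _ in range(10_000)]
p_positive = sum(d > 0 for d in draws) / len(draws)
```

With these invented inputs the central-case NPV is about $36,000, the benefit-cost ratio about 1.36, and the simulation puts the probability of a positive NPV at roughly 95 per cent. Every step is open to checking; none rests on a stakeholder’s feeling.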

Calculating the costs and benefits of social outcomes has until recently been a protracted and challenging task, since a CBA entails monetising variables like the value of a life or the injuries suffered by a victim of crime.

The emergence of repositories of effect sizes — which reliably and validly measure the outcomes of a wide range of interventions — and the development of libraries of shadow prices mean that techniques involving little more than wild guesses are redundant dead ends.

A properly constructed CBA might use the results of a meta-analysis alongside marginal costs, shadow prices and discount rates as inputs. Economists construct these elements subject to transparent rules. Important findings in any scientific discipline depend on reproducible results.
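To make those mechanics concrete, here is a hedged sketch (every figure is invented for illustration) of how a meta-analytic effect size, a shadow price and a discount rate combine into a present-value benefit estimate:

```python
# Invented inputs, standing in for values a real CBA would draw from
# an effect-size repository and a shadow-price library.
EFFECT_SIZE_D = 0.25    # standardised mean difference from a meta-analysis
OUTCOME_SD = 4.0        # std dev of the outcome in natural units
                        # (e.g. days of service use per client per year)
SHADOW_PRICE = 900.0    # dollars per unit of the outcome avoided
CLIENTS = 200           # clients served per year
DISCOUNT_RATE = 0.05    # central real discount rate, varied in sensitivity tests
YEARS = 5

# Effect size -> change in natural units -> dollars per year
unit_change = EFFECT_SIZE_D * OUTCOME_SD                # 1.0 day avoided per client
annual_benefit = unit_change * SHADOW_PRICE * CLIENTS   # dollars per year, all clients

# Present value of the benefit stream over the program horizon
pv_benefits = sum(annual_benefit / (1 + DISCOUNT_RATE) ** t
                  for t in range(1, YEARS + 1))
```

Each input is visible and contestable: swap in a different effect size, shadow price or discount rate and the result changes transparently. That reproducibility is precisely what stakeholder intuition cannot offer.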

In comparison, an SROI uses stakeholder perceptions as a proxy for a base rate of change. This means that SROI lacks the ability to objectively determine the impact of a program or intervention. Without this, it is at best wishful thinking.

In economic evaluation, the perspective adopted significantly affects which costs are deemed relevant. For example, a ‘government or payer perspective’ would be interested in the costs a social intervention can avoid, measured by interactions with social services, health, welfare and criminal justice systems, and in reduced costs to third parties — for example, victims of crime.

The cavalier assignment of monetary values to ‘personal experiences’ in an SROI analysis may help an NFP understand how its services affect stakeholders, but even assuming those values are accurate, they bear little relation to the costs the program may have saved the state, and by extension taxpayers.


Comparisons of SROI measures across different programs and organisations are meaningless and should be avoided; making them risks appearing innumerate and ill-informed.

Where CBA can provide cross-program, even cross-sectoral, evaluation of resource use, SROI provides funders with no valuable information that could not be derived more reliably in other ways. With its cheap mimicry of robust economic concepts and language, SROI is a free-rider on the credibility of tried and tested methods of economic evaluation.

NFPs — and their private funders — should be free to use whatever methods they wish to assess the impact of their programs. However, for public servants and NGOs receiving public funding, using zombie economics to allocate public funds is irredeemably irresponsible.