Why should Social Return on Investment be avoided at all costs?

Not-for-profit (NFP) organisations are often both innovative in the way they deliver services and responsive to the needs of clients. As a result they are increasingly providing social services that were previously the domain of the public sector.

Coupled with the rise of social impact investing and the competition of ideas that it creates, NFPs are being asked by funders — both government and philanthropic — to demonstrate the economic value they generate.  

Instead of embracing established impact and economic evaluation tools, some NFPs have latched on to an increasingly popular concept known as social return on investment (SROI).

SROI is sometimes advanced as a legitimate addition to the portfolio of economic evaluation methods. It is not. It has been suggested in some quarters that SROI should be used in the public sector to assist in spending decisions. This would be a mistake.


Described by one practitioner as “a form of stakeholder-driven evaluation blended with cost-benefit analysis tailored to social purposes”, an SROI analysis rests on an NFP’s stakeholders providing intuitive opinions on the impact of a program or intervention the organisation has run for its beneficiaries. A financial value is then assigned to the changes stakeholders have identified, using a combination of stakeholder perceptions and reference values.

The liberal misappropriation of the language, but not the rigour, of mainstream economic evaluation lends SROI credibility among casual observers that it does not deserve.


An SROI and a cost-benefit analysis (CBA) share a common goal: both seek to monetise the benefits of an intervention and compare them with the costs of implementing it. However, the method by which benefits are attributed and monetised, and costs assigned, in an SROI bears no comparison to that of a rigorous CBA.

SROI practitioners engage stakeholders in the cost and benefit assignment process. SROI practitioners argue that this method ‘increases the depth of analysis’ as it ‘engages more broadly with those experiencing any change than traditional cost-benefit analysis’.

None of these estimates are subject to empirical test or verification. What is deemed to be important is what stakeholders themselves feel is the impact and value of the program.

In some instances SROI practitioners may attempt to moderate the estimated values against reference databases, but there is little evidence that their figures are any more objectively accurate than the stakeholders’. Nor do they necessarily limit the benefits to tangible cost savings from reduced expenditure on public services.

From psychological and economic research, we know that this sort of introspection is subject to biases, especially optimism bias, wish-fulfilment and the failure of counterfactual reasoning.


The purpose of economic evaluation is to enable decision makers to make the best use of funds by allocating resources — which may be time, effort or money — to the alternatives that generate the optimal economic result. Deciding how to allocate resources involves examining a range of criteria including value for money, risk of failure, as well as meeting ethical and equity expectations. SROI cannot provide guidance for any of this.

We should note that the optimal result will include a mix of attributes: that the outcome meets a societal standard of value for money; is a secure investment; that its benefits reliably exceed its costs; and that it meets ethical and equity expectations.

Sometimes the best idea might be found in another service area, for example, some early childhood interventions have been shown to reduce interactions with the criminal justice system in adulthood.

An SROI methodology has no capacity to factor this in, because it assumes that stakeholders possess the scope of vision to identify better uses of resources in another policy area.

In psychology and medicine, meta-analysis was developed to apply consistent methods for estimating the average effects of interventions rather than relying on a clinician’s self-interested best guess about what works. 

Pseudo-economic analyses are not a substitute for, or a means of recovering from, inadequate experimental methods. Whatever economic methodology is advanced, if the assessment of a policy intervention did not involve randomly allocating clients to either a treatment or a control group (a randomised controlled trial), then no form of economic analysis of the outcomes will be valid.

Finally, not every outcome needs to be monetised. Provided appropriate experimental methods have been used, it should be completely satisfactory to show that participants are more content or happier than their control-group peers. Appealing to a ‘better’ economic outcome risks missing the point entirely.


Economists and social policy analysts have made great strides in deriving plausible monetary values for social outcomes over the last decade. Most economists reviewing spending decisions are content to calculate measures like net present value (NPV), the benefit-cost ratio and an estimate of the probability that the NPV falls within a specified range. Ethical and equity considerations remain the province of politicians, representing the public.
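As a sketch of what these measures involve, here is a toy calculation with entirely hypothetical figures: an NPV, a benefit-cost ratio, and a Monte Carlo estimate of the probability that the NPV is positive under assumed uncertainty in the annual benefit.

```python
import random

def npv(cash_flows, rate):
    """Discount a series of annual cash flows (year 0 first) to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical program: $1.0m cost up front, $300k of benefits per year for 5 years.
costs = [-1_000_000]
benefits = [0] + [300_000] * 5
rate = 0.05  # assumed discount rate

base_npv = npv(costs, rate) + npv(benefits, rate)
bcr = npv(benefits, rate) / -npv(costs, rate)  # benefit-cost ratio

# Monte Carlo: treat the annual benefit as uncertain and estimate
# the probability that the program's NPV is positive.
random.seed(1)
draws = [
    npv([-1_000_000] + [random.gauss(300_000, 75_000)] * 5, rate)
    for _ in range(10_000)
]
p_positive = sum(d > 0 for d in draws) / len(draws)
```

The same loop can report the probability that the NPV falls in any specified range, simply by changing the condition counted over the draws.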

Calculating the costs and benefits of social outcomes has until recently been a protracted and challenging task, since a CBA entails monetising variables like the value of a life or the value of injuries suffered by a victim of crime.

The emergence of repositories of effect sizes — which reliably and validly measure the outcomes of a wide range of interventions — and the development of libraries of shadow prices mean that techniques involving little more than wild guesses are redundant dead ends.

A properly constructed CBA might use the results of a meta-analysis alongside marginal costs, shadow prices and discount rates as inputs. Economists construct these elements subject to transparent rules. Important findings in any scientific discipline depend on reproducible results.

In comparison, an SROI uses stakeholder perceptions as a proxy for a base rate of change. This means SROI lacks the ability to objectively determine the impact of a program or intervention; without that, it is at best wishful thinking.

Economists have made great strides in accurately capturing the benefits of social interventions in recent times. In economic evaluation, the perspective adopted significantly affects which costs are deemed relevant. A ‘government or payer’ perspective, for example, is interested in the costs a social intervention can avoid, measured by interactions with the social services, health, welfare and criminal justice systems, and in reduced costs to third parties — for example, victims of crime.

The cavalier assignment of monetary values to ‘personal experiences’ in an SROI analysis may help an NFP understand how its services affect stakeholders, but even assuming the values are accurate, they bear little relation to the costs the program may have saved the state, and by extension taxpayers.


Comparisons of SROI measures achieved by different programs and organisations are meaningless and should be avoided, on pain of appearing innumerate and ill-informed.

Since the purpose of CBA is to provide cross-program, even cross-sectoral, evaluation of resource utilisation, SROI provides funders with no valuable information that could not be derived more reliably in other ways. With its cheap mimicry of robust economic concepts and language, SROI is a free-rider on the credibility of tried and tested methods of economic evaluation.

NFPs — and their private funders — should be free to use whatever methods they wish to assess the impact of their programs. However, for public servants and NGOs receiving public funding, using zombie economics to allocate public funds is irredeemably irresponsible.


Where can we remember them?

Ninety-seven years ago to the day, after more than four years of continuous fighting, the guns of the Western Front fell silent and an armistice was signed. In the years since the end of the First World War, the moment hostilities ended has become synonymous with the remembrance of those who died in the world wars.

To commemorate Remembrance Day this year we’ve used data from the Commonwealth War Graves Commission (CWGC) to plot the location of cemeteries and memorials to Commonwealth soldiers who died whilst on active service during the First and Second World Wars. These soldiers served and died in places far from home, from Gallipoli to Nigeria.

From Murmansk to Turkmenistan and Gaza to Waziristan, every war memorial tells a story. We encourage you to use our app to uncover one. You can select one or more countries and run our animated map for either the First or Second World War. Each circle represents the casualties commemorated at a location on a given date for each selected country, with its size proportional to the number of casualties. If you pause the animation and click on a circle, you can zoom to locate the memorial and follow a link in the popup to the CWGC website to learn more about it.

By arranging these data as a time series we sought to show the scale of the daily casualties suffered during the stalemate battles of the Western Front. Some of these are well known; others are not. In preparing the data we investigated various spikes in the time series and searched for their proximate causes.

We found it surprising to learn that Australia’s darkest day in World War I occurred at the unheralded Battle of Broodseinde with 6,423 casualties. United Kingdom forces, by contrast, suffered those sorts of casualties with alarming regularity.   

The disparate theatres of conflict and smaller scale battles in the Second World War don't show the same pronounced spikes as the First World War data. A notable exception is the Fall of Singapore, where the British Indian Army suffered their worst casualties of the war. 

The app highlights campaigns and battles overlooked by history. We read up on the exploits of the UK’s Fourteenth or ‘Forgotten’ Army after observing the heavy casualties it sustained during the Burma Campaign.

Some historical quirks emerged from the data. We were particularly surprised to learn that the British Indian Army kept fighting after the Japanese surrender, when it was called in to fight pro-independence Indonesian forces at the Battle of Surabaya during the Indonesian National Revolution.

On data limitations

The data for this app was sourced from the CWGC casualty database which records the names and place of commemoration of the over 1.7 million men and women who died serving in British Commonwealth forces during the First and Second World Wars.

The database also records the details of 67,000 Commonwealth civilians who died ‘as a result of enemy action’ in the Second World War.

Readers may note that the total casualty counts in the time series charts don’t match the totals reported in official figures, and that significant battles were not fought at every location indicated on the map. There are multiple reasons for this.

As our app is based on a time series, we excluded all records that did not report a date of death. This meant excluding memorials to those missing in action for whom no date could be provided. Records with an obviously incorrect ‘dummy date’ were also excluded – for example, all of the Indian soldiers commemorated at the India Gate in Delhi for their service in the First World War were recorded as having died on the first day of hostilities.
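The filtering described above might look something like this. The app itself was built in R; this Python sketch merely illustrates the logic, and the column names, values and list of dummy-dated memorials are hypothetical, not the CWGC’s actual schema.

```python
import pandas as pd

# A hypothetical slice of the casualty records (illustrative only).
records = pd.DataFrame({
    "name": ["A", "B", "C", "D"],
    "date_of_death": ["1916-07-01", None, "1914-08-04", "1917-10-04"],
    "memorial": ["Thiepval", "Menin Gate", "India Gate", "Tyne Cot"],
})
records["date_of_death"] = pd.to_datetime(records["date_of_death"])

# Drop records with no date of death (e.g. missing in action, no date recorded).
dated = records.dropna(subset=["date_of_death"])

# Drop memorials known to carry a 'dummy date' such as the first day of hostilities.
DUMMY_DATED_MEMORIALS = {"India Gate"}
cleaned = dated[~dated["memorial"].isin(DUMMY_DATED_MEMORIALS)]
```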

The map was designed to show the locations where individuals are commemorated. Often that is close to where they perished, so the map can roughly approximate the locations of the heaviest casualties. However, this is not always the case.

In many cases individuals are commemorated far from where they perished; this is particularly apparent for some naval casualties and for those who died in captivity as prisoners of war. For example, the crew of HMAS Sydney, which was lost with all hands on 19 November 1941 off the coast of Western Australia, are commemorated at the Plymouth Naval Memorial in the UK.

The CWGC records commemorations based upon which national army an individual served with rather than the citizenship of the individual. Since many colonial volunteers enlisted in locally raised regiments such as the King’s African Rifles or the Malay Regiment that served under British command they are commemorated as having served with United Kingdom forces. Likewise American volunteers who enlisted with Canadian forces to fight in the First World War would be commemorated as having served with Canada.  

With these limitations in mind, we hope you enjoy exploring the app.

Lest we forget. 


Produced by: Andrew Taylor, Dave Taylor and Louis Durant.

Data source: Commonwealth War Graves Commission

Platform: For the technically minded, our app was built in R with Shiny, using the Leaflet and Dygraphs packages.

If you have questions about our app or want to report an error, drop us a line at info@archerfish.net or flick us a tweet @archerfish.


How can cost benefit analysis be used to prioritise social policy spending?

Before economists came along and formalised the concept, people had long tried to weigh up the costs and benefits of pursuing different choices. The American polymath Benjamin Franklin was one of the first to document this in a letter to a friend:

“When difficult cases occur, they are difficult chiefly because while we have them under consideration, all the reasons pro and con are not present to the mind at the same time. To get over this, my way is to divide half a sheet of paper by a line into two columns; writing over the one ‘Pro’, and the other ‘Con’.”

Franklin nailed the underpinnings of a modern cost-benefit analysis (CBA), where the benefits of a decision are weighed against the costs, with whichever side weighs the most winning. All of us make decisions like this every day. Whether choosing between the car and public transport to work, or between cooking dinner and eating out, we are weighing the benefits we may gain in time and enjoyment against the additional expense.

What is a cost-benefit analysis?

In theory, undertaking a CBA seems as straightforward as Franklin suggests: you sum the costs and benefits in separate columns and see which side comes out ahead. In reality it can be more complex. What if the benefits and costs are not immediately obvious? Or do not occur at the same time? How do you take into account an intangible benefit that might occur five years down the road?

These problems are overcome by expressing benefits and costs in monetary terms and discounting them by the time value of money. This allows costs and benefits, which generally occur in different time periods, to be expressed in terms of their net present value. By monetising benefits, a CBA allows decision makers to assess whether a policy intervention is a sound investment, as well as providing the ability to compare it with competing policy options - developing a new car ferry might look like a good investment, but a new bridge might be an even better one.
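As a toy illustration of the ferry-versus-bridge comparison, with every dollar figure and the discount rate invented for the example:

```python
def present_value(amount, rate, year):
    """Value today of an amount received `year` years from now."""
    return amount / (1 + rate) ** year

rate = 0.07  # assumed social discount rate

# Two hypothetical options: a cost today, then 15 years of annual benefits.
ferry_npv = -5_000_000 + sum(
    present_value(700_000, rate, t) for t in range(1, 16)
)
bridge_npv = -12_000_000 + sum(
    present_value(1_600_000, rate, t) for t in range(1, 16)
)
```

On these assumed numbers both options have a positive net present value, but the bridge’s is higher, which is exactly the kind of ranking the discounting step makes possible.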

In public policy, CBAs have long been used to assess investment decisions relating to major capital projects like building a new motorway or extending an airport. Developments in economic theory and practice over the past decade mean that CBA has become not only an accessible tool, but a preferred method, for allocating scarce investment resources across worthy causes in social policy too.

How do you put a number on that?

In the context of social provision, a CBA is based on a rigorous evaluation of a programme’s actual impact on an outcome of interest - for example, reoffending among released prisoners. The results of a high-quality evaluation (like a randomised controlled trial) can be compared with a meta-analysis of a systematic review of the literature.

The impact the program has on reoffending, for example, can be translated into the economic benefit it generates for the state, for the released prisoner and for society more widely, by examining the relationship between lower reoffending rates and other outcomes that have a financial impact. In this case that might mean benefits to the state from police, courts and corrections costs that are no longer incurred, benefits to society from reduced victimisation costs as crime falls, and benefits to the individual from higher lifetime earnings as they become employable.
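The arithmetic behind that translation can be sketched as follows. Every figure here is a hypothetical assumption chosen for the example, not a published estimate:

```python
# Hypothetical evaluation result: reoffending falls from 45% to 35%.
participants = 100
baseline_reoffending = 0.45    # control-group reoffending rate (assumed)
treatment_reoffending = 0.35   # treated-group rate (assumed)

offences_avoided = participants * (baseline_reoffending - treatment_reoffending)

COST_TO_STATE = 80_000    # police, courts and corrections per offence (assumed)
COST_TO_VICTIMS = 25_000  # victimisation costs per offence (assumed)
EARNINGS_GAIN = 15_000    # lifetime earnings gain per offence avoided (assumed)

benefit_state = offences_avoided * COST_TO_STATE
benefit_society = offences_avoided * COST_TO_VICTIMS
benefit_individual = offences_avoided * EARNINGS_GAIN
total_benefit = benefit_state + benefit_society + benefit_individual
```

The point of the decomposition is that each benefit stream accrues to a different party; a full CBA would then discount these streams and set them against the program’s costs.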

Why is it better than other methods?

Other economic techniques like cost-effectiveness analysis or cost-utility analysis - popular methods in healthcare and pharmaceutical decision making - compare two or more interventions based on a common unit of measurement. The monetisation of benefits in a CBA, by contrast, allows decision makers to compare the relative costs and benefits across a suite of interventions in different policy areas. This attribute allows governments to determine, for example, what proportion of resources should be spent on early intervention and prevention versus treatment-based interventions.

In econospeak, the difference between effectiveness and efficiency is akin to the difference between doing the right things and doing things right. To improve efficiency, you must first be doing something effective. Using money as a metric makes it possible to compare benefits across a range of activities. It also allows the benefits from interventions that flow across sectors to be included, bolstering the case for investment.

In the U.S., it has been demonstrated that the Nurse Family Partnership - which provides maternal and early childhood support to low-income mothers - has positive health benefits for the mother as well as long-term educational, employment and criminal justice benefits for her child.

How can it be used to prioritise social policy spending?

In Australia we are facing increased demand for services, particularly in health and justice. In this situation it would be prudent to seek a greater efficiency dividend from the resources we currently expend. However, when it comes to making spending decisions we are largely flying blind: while they are no doubt made with good intentions, they seldom rely on a rigorous assessment of relative effectiveness or efficiency in delivering tangible results.

It need not be this way. The Washington State Institute for Public Policy has been supporting the Washington State Legislature by providing impartial advice on the impact of policy spending decisions for over 30 years. Legislators have used the results of the Institute’s rigorous modelling to justify what could be considered radical changes to justice policy which have saved money and improved outcomes.


What can we learn from Baseball?

This article was originally published by New Matilda

Successful businesses generally don’t drop millions of dollars on a new idea without assessing whether it’s likely to be a good investment. It would be fair to say that most Australians assume our government operates in much the same manner when divvying up the budget. In reality, spending decisions are mostly based on a combination of good intentions, ideology, horse trading, gut instinct and inertia.

We are no wiser today about how well our government spends our tax dollars than we were twenty years ago. We know precious little about the impact of all but a tiny fraction of what our government spends to increase population health, reduce crime, improve social services and lessen indigenous disadvantage.

That is not to say that all government expenditure in these areas is ineffective. It’s just that, across a wide range of issues in health and social policy, the only way of separating what we know to be good policy from bad is expert opinion - and everyone considers themselves an expert.

In the current budgetary climate many potentially worthy policy programs are facing the axe. In the absence of robust evidence supporting their retention, much of the decision-making surrounding what should stay and what should go is driven by ideology. In this new age of austerity we need to make every dollar count by spending on things that work to reduce disadvantage and inequality rather than things we just assume do.

We should take a leaf out of the playbook of Billy Beane, the general manager of the Oakland A’s baseball team immortalised in Moneyball. Beane changed baseball by using data science to determine where to get the best value from his team’s relatively meagre budget. He boiled success in baseball down to a number of key measurements and built a championship-contending team around undervalued players with those characteristics.

Rather than continuing our time-honoured approach of throwing money at the latest policy problem and hoping it solves itself, we need a new approach. In order to reduce entrenched disadvantage we need to shift resources toward solutions that get proven results.

Social science research has shed light on what works to improve health, education and employment outcomes for disadvantaged individuals. Thanks to the work of groups like the Campbell and Cochrane Collaborations we know, for example, that children of low-income parents who attend preschool are more likely to complete high school than those who don’t. However, almost all of this research is produced overseas and doesn’t always translate neatly into an Australian context.

In the United States there has been a concerted effort amongst politicians, public servants and philanthropists to increase the use of evidence, data analytics and linked data to demonstrate that policy programs make a difference to their recipients. Their counterparts in Australia are not without their champions, however much more could be done to advance this approach.

Evaluations of policy programs need to be adapted so that they focus on outcomes relevant to individuals rather than on compliance and coverage. There are tools available to determine what public policy works and what doesn’t - randomised controlled trials and other nifty techniques have been used around the world to determine the impact of programs to fight poverty, improve education and reduce unemployment. Yet, despite the best efforts of Andrew Leigh and others to promote their use, their uptake in social policy analysis in Australia has been limited.

Taxpayer dollars should be spent on solutions that use evidence and data to get better results. Ongoing evaluation and data analytics can be used to continuously improve both the quality and impact of programs. This can also reduce duplication and slash red tape that strangles innovative new ideas.

Regardless of whether someone favours ‘big government’ or ‘small government’, it would be hard to find anyone who believes a government of any stripe should be spending tax dollars on things that don’t work.

Every dollar spent on an ineffective program is a dollar that can’t be spent on something that makes a measurable difference. The budget should be directed away from policy programs that don’t work and reinvested in those that can help those in need of assistance to make greater and faster progress toward overcoming challenges.

In Australia, those in charge of our budget could learn a lot from Billy Beane’s approach to baseball. This period of belt tightening could be used as an opportunity to refocus our attention on what works to get better results. We need to be spending on the most effective and efficient ways to reduce inequity and disadvantage. We need to be playing Moneyball.  

Dave Taylor is an economist with Archerfish. You can follow him on Twitter @davetayl_r