Part 2 of Elyse Sainty’s letter from the frontline of Australia’s foray into social impact bonds, in which she outlines her ‘thoughts and prayers’ for their future. In part 1, Elyse shared her insights into SIB myths and legends.
In the first instalment of this two-part article I tackled seven ‘myths and legends’ about SIBs. In this instalment, I set out some views on the role that SIBs could play in the evolution of a system that focuses on what is important, uses evidence to shape responses, and delivers for the community – and how to make the path to that vision a little easier.
1. The gold in them thar hills
In the era of the Facebook-Cambridge Analytica scandal and banks selling their customers’ spending data, a frustration in developing outcomes-based contracts is how difficult it is to use data for good. Privacy controls and ethics processes are essential, but there is certainly scope for streamlining and standardising – and the newly created office of the National Data Commissioner looks like a step in the right direction.
The gold in the hills is that outcomes-based contracts force you, first, to articulate clearly which outcomes you actually want. They then encourage a focus on person-centred, multi-year measurement, rather than government department-centred annual expenditure. As an example, if an individual is exiting the out-of-home care system, how many days are they likely to spend in hospital over the coming decades? How many in gaol? On welfare? In public housing? And how likely is it that their own children will also enter state care? You need to understand the baseline trajectory to be able to set fair targets and put a value on change.
And then of course, outcomes-based contracts force you to measure what happened.
My sincere hope is that the specific demands of SIBs, and outcomes-based contracting more broadly, provide impetus toward greater rigour and more widespread use of ‘data for good’ across the broader service commissioning landscape. Using data helps you to:
- Identify groups that are likely to experience challenges in life, and quantify what those challenges will cost across multiple government departments – and governments.
- Make policy and funding choices that prioritise intervention in the areas of greatest need.
- Identify individuals within targeted sub-groups so that they can be proactively supported, with data-driven referral pathways to a program or intervention reducing gaps and overlaps in services.
- Track outcomes over time to find out whether the support made any difference, building a strong evidence base for what works, or doesn’t, in diverse circumstances.
None of that requires an outcomes-based contract, but the discipline imposed by contractual obligations can be a very useful thing.
2. Ready, fire, aim
The general pattern of SIB development thus far in Australia is something like the following:
1. A government identifies a broadly defined problem (recidivism, children in out-of-home care, disengaged young people).
2. Proposals are requested that target this problem.
3. Proposals are developed (in a short period of time), with a raft of ‘best guess’ assumptions made about government baseline costs, counterfactual outcomes and the level of impact the program will generate. Outcome metric(s) and a payment structure are recommended as part of the proposal.
4. A promising program is selected by the government to enter the joint development phase (JDP).
5. The specifics of outcome measurement and payment structures are negotiated during the JDP. Data to support a counterfactual estimate (what is the current/expected level of outcomes for the target population?) is sourced, and government savings estimates are refined.
6. Legal docs are drafted, haggled over and executed.
7. Investor capital is raised (if required).
8. Service gets underway.
The problem with this approach is that things tend to grind to a halt at step 5. Ethics approvals are sought to link datasets. It is discovered that good baseline data is not available for a proposed metric. A different metric is proposed by one of the parties. The estimate of baseline service utilisation made during proposal development turns out to be overly pessimistic (so the level of potential savings is overly optimistic). The number of people who fit the proposed eligibility criteria is lower than expected. Or some combination of the above.
All of which leads to a reworking of the financial model, pressure on all parties to compromise on their starting positions, and a fair amount of frustration and wheel spinning. This is where the fair assessment that ‘SIBs are complicated and expensive’ mainly arises.
Underpinning this ‘ready, fire, aim’ approach has been an understandable inclination to keep options open so that a range of potential interventions are put forward, and proponents are given flexibility to devise a measurement and payment structure that best suits that intervention. (The extreme example is perhaps New Zealand’s ill-fated first foray into SIBs, under which a ‘market-led’ approach was adopted that dispensed with the first step – identification of a target problem – altogether. This approach was found by a subsequent review to have “added time and complexity to the procurement process”.)
Having been down this path several times, my heartfelt plea to any government considering undertaking outcomes-based contracting is to do the cost baseline, metric selection, counterfactual estimate (for a specific target population) and outcome valuation work before releasing a Request for Proposal. This will sacrifice some flexibility but will result in better informed proposals and a significantly quicker (and cheaper) development process.
This approach is inherent in rate card structures (which stipulate a maximum payment-per-event), but it isn’t only rate cards that solve the problem (and I believe they create some other issues). Here is a simple, made-up example to illustrate the sort of information I would love to see in every Request for Proposal:
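For instance, the key pre-analysis inputs, and the outcome payment they imply, might be sketched as follows. All the numbers here are entirely invented for illustration; real baselines, cohort sizes and payment shares would come from the government’s own data work.

```python
# Hypothetical pre-analysis a government might publish in a Request for
# Proposal. Every figure below is invented, purely for illustration.

# Counterfactual (baseline) estimate for a specific target population
cohort_size = 200          # eligible individuals per year
baseline_rate = 0.40       # expected rate of the adverse outcome without the program
cost_per_event = 150_000   # estimated whole-of-government cost per adverse outcome

# Outcome valuation: the government shares a portion of each avoided
# event's cost as an outcome payment (here, an assumed 60%).
payment_share = 0.60
payment_per_avoided_event = cost_per_event * payment_share

def outcome_payment(observed_rate: float) -> float:
    """Total payment for reducing the adverse-outcome rate below baseline."""
    avoided_events = max(0.0, baseline_rate - observed_rate) * cohort_size
    return avoided_events * payment_per_avoided_event

# A program that cuts the rate from 40% to 30% avoids ~20 events
print(f"${outcome_payment(0.30):,.0f}")  # $1,800,000
```

With these four numbers published up front, every proponent can model their own economics before proposing, rather than discovering the baseline (and renegotiating the payment structure) during the joint development phase.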
Summing it up: ‘ready, aim, fire’ is a better approach. My firm belief is that the more this sort of ‘pre-analysis’ is conducted, the easier it will get. This information would also be of great use to organisations who don’t participate in the joint development phase, providing them with a yardstick against which to measure their own performance.
3. Sledgehammers and nuts
Because they are new, untested and often high profile, SIBs can be subject to scrutiny, approval processes, evaluation and controls that are materially more onerous than those applied to similar programs that are funded through more traditional procurement processes.
There are already signs that the rhythm of SIB development and management is starting to settle as experience is gained across the country (for example, the form of legal contracts is becoming more of a known quantity). This will certainly help to reduce transaction time and cost.
One of my ‘prayers’, though, is that the next few years bring an examination of where the approach can be further streamlined, so that we don’t use sledgehammers to crack nuts.
As an example, I have developed a strong view on the use of control groups in the context of outcomes-based contracts. In an ideal world they are clearly the best way to determine the counterfactual against which program performance is measured – but the world is frequently not ideal. Control groups can cost a lot to construct, significantly adding to transaction overheads. There may not be a large enough group to provide a fair comparator in small target populations. And in a complex and ever-changing policy environment it can be difficult to avoid the ‘confounding’ impact of other services. This doesn’t mean that control groups aren’t a vital tool in the research arena, but for SIBs they can be a case of the perfect being the enemy of the good.
There is a significant risk that the cost of the extra layers of control and evaluation that have been imposed on SIBs during the ‘learning phase’ are seen as a permanent – and unattractive – feature. It would be a great shame if that prevented further development in this space.
4. The question of scale
A lot of the enthusiasm for the development of the ‘SIB market’ has been driven by impact investors who – admirably, wonderfully – want to see their capital invested in instruments that do something positive for the world. There has been a fair amount of focus on creating pools of capital with something of an “if we provide it, they will come” assumption. However, in our experience (and provided that the risk/return positioning is reasonable), sourcing capital is perhaps the least challenging part of implementing a SIB. At this point, supply of capital materially outweighs demand in the SIB world.
This unsatiated investor appetite regularly leads to discussion about the need to increase scale. But there is an important question here: what does ‘scale’ mean? Is it:
- Larger (capital) SIBs?
- More SIBs?
- More outcomes-based contracts?
- Greater impact?
My personal answer to ‘what is scale?’ is, ultimately, ‘greater impact’. However, the path to get there likely involves more outcomes-based transactions, which will help to deepen the evidence base and test a range of procurement, measurement and contract-management approaches. And the path to more outcomes-based contracts likely involves more SIBs, as many service providers will continue to need at-risk capital to be able to play.
But I’m not sure that the path necessarily involves larger SIBs, which may disappoint some investors. A common misconception about transaction size is that the capital raised equals the size of the SIB. The more important number is the quantum of the expected outcome payments, which can be materially larger. Under the Newpin Social Benefit Bond, for example, the total expected government payments were around $50 million over seven years, while the investor capital required was only $7 million.
There are several factors at play that dampen the potential for large (capital) SIBs:
- Larger-scale, longer-term programs start to self-fund once the outcome payments commence (provided that things are going well – and if they aren’t then there will likely be an early termination). Only the portion of the first year or two of program costs that isn’t covered by the government under a fixed payment (or standing charge) may need to be funded.
- There are often limits on program scale imposed by the number of individuals in the target population, particularly at the ‘pointy end’ of individuals with very complex issues (those of most interest to governments as they are high-cost). Operationally, there are also practical constraints on how large a program can be established from a standing start.
- SIBs tend to occupy the narrow space between the untested innovation (where pilot programs may be philanthropically funded with an eye on scale-up potential) and the tried-and-true (where outcomes can be predicted with some confidence, and governments/service providers are more confident in carrying performance risk). In this in-between space, governments may be prepared to commit tens of millions of dollars to a particular program, but hundreds of millions of dollars is perhaps an uncomfortable stretch. In the ‘tried-and-true’ space, outcomes-based contracts could be mainstreamed with a smaller proportion of payments linked to performance – in which case there may well be no need for investor capital at all. As experience is built over time, providers would be able to predict their performance with a reasonable degree of confidence, and so would not need to ‘insure’ against poor outcomes.
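The self-funding mechanics in the first point above can be made concrete with a toy cash-flow model. The figures below are invented (loosely in the spirit of the Newpin numbers), assuming flat annual delivery costs and outcome payments that ramp up as results are measured; the investor capital required is only the peak cumulative shortfall, not the total payment flow.

```python
# Illustrative (invented) cash flows for a seven-year outcomes-based
# program, showing why investor capital can be far smaller than the
# total outcome payments flowing through the transaction.

annual_cost = 3_000_000  # program delivery cost per year

# Outcome payments ramp up once outcomes are measured (none in year 1)
annual_payments = [0, 1_000_000, 4_000_000, 7_000_000,
                   9_000_000, 12_000_000, 17_000_000]

balance = 0.0        # cumulative net cash position
peak_funding = 0.0   # largest shortfall = investor capital required

for payment in annual_payments:
    balance += payment - annual_cost
    peak_funding = max(peak_funding, -balance)

print(f"Total outcome payments: ${sum(annual_payments):,}")   # $50,000,000
print(f"Peak funding required:  ${peak_funding:,.0f}")        # $5,000,000
```

In this sketch the transaction channels $50 million of payments but never needs more than $5 million of at-risk capital at any point, because later payments recycle into later-year costs.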
5. Fresh air and sunshine
One of the benefits of having investors involved in a program is that accountability to third parties brings valuable information into the public domain. With SIBs, transparency is built into the process – or at least, it can be built into the process. It is understandable that bad news is not generally shouted from the rooftops, but in the area of program performance sometimes even good news can be hard to get your hands on. Service providers and governments alike – in fact I suspect most human beings – take a risk-averse approach to ‘telling it like it is’.
I am a firm believer in the benefits of fresh air and sunshine, and SVA has endeavoured to set high standards for transparency in reporting, reflecting our broader organisational commitment to building the evidence base and sharing insights.
Transparency isn’t always comfortable, but it is powerful.
6. Bringing things together
The role of intermediaries such as SVA in the development of SIBs is perhaps not widely understood.
I cannot comment on other projects, but the role we have played to date has been very broad, including: facilitation of program design (even program naming!); financial modelling and structuring; outcome metric selection and data analysis; operational design; proposal writing; negotiation; therapy; drafting legal documents; managing outcome certification and program evaluation; managing the money flows, including payments to service providers; and ongoing performance review and project governance. And of course, raising capital from investors and managing ongoing investor reporting and payments.
The financing and investor-related bits of that list have been a relatively small component of our work. We very much see ourselves as an advisor and intermediary in a broad sense, bringing things together with the objective of creating a transaction that is fair for all parties.
Over time, however, the list of things an intermediary does should get shorter – if my other ‘prayers’ are answered. With clearer parameters from government and a shorter, simpler development process, service providers will need less support, and reliance on intermediaries will diminish.
7. Start with the end in mind
Each of the SIB-funded programs we are involved with enables the provision of long-term intensive support to hundreds of vulnerable individuals. Each has the potential to change people’s lives, and the personal stories that shine amidst the data frequently bring a tear to my eye and a warm feeling to my heart.
But that isn’t enough.
Each of those bonds has also been resource-intensive (for all parties), with transaction overheads that are disproportionate to the benefits being generated, for the reasons outlined above. And, as a general statement, it is also unclear what impact they will have on the bigger picture – beyond the scope of the particular SIB – if any. In the early years of the SIB evolution, governments have tended to announce ‘pilots’, but the pilot objectives have been a little obscure.
As we move to the next phase, I think it becomes more important to start with the end in mind – to have an unambiguous view of where next. There are a range of reasons that a SIB (or an outcomes-based contract) may be developed, and I think the key ones are as follows:
- Getting ready to roll: a particular program is delivered under a rigorous measurement and evaluation regime with a view to rolling it out more broadly if it works. The scale-up phase may involve outcomes-based payments, or simplified payments based upon the level of performance delivered under the SIB. It may involve extending a contract with the current service provider, or replicating the program with delivery by multiple providers.
- Trialling lots of things: a number of programs addressing a common problem are delivered under a common measurement framework, with a view to deepening the evidence base around what works, perhaps in different geographies, or with different population sub-groups, and explicitly using that to inform future policy and investments.
- Creating capability: the process of developing a SIB is used to create and test capability that can be applied more broadly. For example, establishing data linkage protocols and processes, or an outcome measurement framework, or new procurement and contract management processes.
The main point is that no one should ever be asking ‘what happens now?’ at the conclusion of a program.
Wrapping it up
At the beginning of this article I posed the question, ‘is it worth it?’ Building on my seven ‘thoughts and prayers’, here is my vision of a future that I think is certainly worth it:
- Both within governments and more broadly across the social sector, data and evidence are used to shape priorities.
- Explicit measurement of outcomes relative to a baseline becomes normal – and straightforward.
- We undertake projects with a view to how they will shape the future.
- We learn from our collective successes and failures.
While that vision does not explicitly mention SIBs, or even outcomes-based contracts, I believe they have already played, and can continue to play, an important role as a catalyst for systemic change.
I look forward to the next seven years.
For over seven years Elyse Sainty has been ‘in the trenches’ leading SVA’s social impact bond (SIB) and outcomes-based contracting work. She has helped service providers develop more than a dozen proposals, been at the table during eight contract negotiations, managed five active SIBs, worked with eight different line agencies across four state governments, and secured SIB capital from 160 investors. In this two-part article she shares her reflections and insights.
This article was originally published in the SVA Quarterly and was republished with the author’s permission.