Since the 1970s, the Australian Government has acknowledged the critical need for policy evaluation through a succession of reviews and legislative changes. Unfortunately, evaluation in the Australian Public Service remains poor. As shown by CEDA’s recent report Disrupting Disadvantage 3: Finding what works, once new policies are adopted, there is often little or ineffective follow-up. This includes a lack of resourcing and proper evaluation guidelines for organisations contracted to deliver the policies and programs. This is particularly the case in the disadvantage space, writes CEDA Graduate Economist Sebastian Tofts-Len.
This sentiment has been acknowledged over the years by former senior public servants, the Productivity Commission and state and federal auditors-general.
And for those evaluations that are commissioned and released publicly, the quality is often poor, leaving us just as uninformed – or even worse, misinformed.
Data limitations commonly constrain evaluation quality, objectives and outcomes that are not clearly defined limit the usefulness of findings, and the methodology of most evaluations is weak.
Enter randomised controlled trials (RCTs), also known as field experiments.
To understand why RCTs can be so valuable, we must be very clear about the purpose of an evaluation – in particular, an outcome evaluation.
An effective outcome evaluation for any policy intervention will tell us what would have happened in the absence of that intervention.
RCTs randomly assign participants to two groups: the treatment group, who will be exposed to the policy, and a control group, who will be left without the policy.
By randomly assigning enough people to the two groups, we can be confident that the characteristics of both groups are statistically equivalent on average. Any difference in outcomes can then be attributed to the policy intervention rather than to some other factor.
The key here is ensuring the causal connection is clear, as opposed to correlation. RCTs isolate the effect of the policy intervention itself.
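To make the mechanics concrete, here is a minimal simulation sketch in Python. All numbers are illustrative assumptions, not data from any real trial: participants are randomly assigned to treatment or control, and the policy's effect is recovered as the difference in average outcomes between the two groups.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population of 10,000 participants with varying
# baseline characteristics (e.g. prior outcomes).
n = 10_000
baseline = rng.normal(50, 10, size=n)

# Random assignment: each participant has an equal chance of
# receiving the policy, so the groups are balanced on average.
treated = rng.random(n) < 0.5

# Assume the policy raises outcomes by 2 points on average.
# This "true effect" is what the trial is trying to recover.
true_effect = 2.0
outcome = baseline + true_effect * treated + rng.normal(0, 5, size=n)

# The difference in mean outcomes estimates the causal effect,
# because randomisation removed all other systematic differences.
estimate = outcome[treated].mean() - outcome[~treated].mean()
print(f"Estimated policy effect: {estimate:.2f}")  # close to 2.0
```

Because assignment is random, the estimate lands close to the assumed true effect; a comparison of groups that formed themselves offers no such guarantee.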
RCTs are viewed as the gold standard for policy evaluation but are rarely used in Australia. In 2009, current Assistant Minister for Competition, Charities and Treasury Andrew Leigh – back when he was a professor of economics at ANU – found that 0.5 per cent of government evaluations used a randomised design. I updated this figure recently and found it was 1.5 per cent. In other words, the government still barely utilises RCTs in policy evaluation.
The 2002 RCT of the Drug Court of NSW found the court had positive effects and subsequently informed policies expanding drug courts across other states. In 2018, the School Enrolment and Attendance Measure RCT supported scrapping the policy due to no evidence it was increasing student attendance.
These two trials are often cited as canonical examples of successful experimental evaluation in Australian public policy. The fact that these examples remain among the only ones cited to this day is a testament to how rare RCTs still are in Australia.
The United States makes far greater use of RCTs, which have had great success in informing policy design. The Perry Preschool program is one notable example.
We could learn a lot from one of the pioneering organisations in this space. The global Abdul Latif Jameel Poverty Action Lab (J-PAL), whose affiliates include economics Nobel laureates Abhijit Banerjee, Esther Duflo and Michael Kremer, conducts randomised evaluations of the impacts of policy interventions aimed at reducing poverty. As of March 2023, J-PAL maintains a database of more than 1000 randomised evaluations across 95 countries.
Of course, RCTs are not a panacea. They are not applicable to all policy interventions, such as monetary policy – you cannot design an RCT for changing interest rates. Some also point out they are politically difficult and not worth pursuing, because the control group in any trial is denied a potentially beneficial policy intervention.
However, RCTs would be extremely useful in tackling selection bias and other statistical issues that have plagued previous government evaluations.
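To see what selection bias looks like in practice, here is a second hypothetical sketch extending the one above. Suppose more motivated people opt into a program that, in truth, does nothing. A naive comparison of participants with non-participants then shows a large, spurious benefit, while random assignment correctly finds no effect. Again, every figure is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Motivation affects both program take-up and outcomes
# (a classic confounder).
motivation = rng.normal(0, 1, size=n)

# Self-selection: more motivated people are more likely to opt in.
opted_in = rng.random(n) < 1 / (1 + np.exp(-motivation))

# Suppose the program itself does nothing (true effect = 0), but
# motivation independently improves outcomes.
outcome = 50 + 5 * motivation + rng.normal(0, 5, size=n)

# A naive comparison wrongly suggests a large positive effect...
naive = outcome[opted_in].mean() - outcome[~opted_in].mean()

# ...whereas random assignment would correctly find none.
randomised = rng.random(n) < 0.5
unbiased = outcome[randomised].mean() - outcome[~randomised].mean()

print(f"Naive estimate (selection bias): {naive:.2f}")  # well above 0
print(f"Randomised estimate: {unbiased:.2f}")           # close to 0
```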
The alternative is for policymakers to remain in the dark on whether a program works. And if it is ineffective, valuable taxpayer dollars have been squandered. The quicker we subject our public policies to rigorous experimental evaluation, the quicker we can build a robust and reliable evidence base.
The controversial cashless debit card trial was a perfect opportunity to conduct a randomised controlled trial.
Opponents of the income management scheme cheered when the Albanese Government abolished it, while proponents were quick to suggest its abolition was to blame for the recent rise in drug- and alcohol-related violence at some of the trial sites.
But neither side can credibly make its case, because we simply do not know whether the trial achieved its intended benefit of reducing drug- and alcohol-related violence.
The Morrison Government relied heavily on the results of an evaluation to support the card’s expansion. But the Australian National Audit Office found the trial had not been properly evaluated, identifying major problems that rendered its results unreliable and inconclusive. Baseline data had not been collected before the trial commenced, and the evaluation lacked a rigorous methodology.
If the trial had been properly evaluated, we could now know whether it had actually worked. Instead, debate continues about its effectiveness.
It is pleasing to see the Albanese Government has committed to better evaluation by establishing an evaluation unit in this Federal Budget. A 2016 survey of more than 100 state and federal parliamentarians found more than two-thirds supported randomised trials to inform policy. Let’s hope this support finally translates into meaningful action.
Dr Leigh should be commended for continuing to shed light on the lack of RCTs in policy evaluation. As he has explained in the past, when a new pharmaceutical drug needs approval, it must be shown to have worked in a randomised trial.
But when our political leaders approve new policies that affect the lives of many, they do not need any rigorous evidence. Best guesses and ideology often trump results-oriented policy.
The sad result is that many of our public policies are likely doing more harm than good, some of them informed by flawed evaluations.
As taxpayers, we deserve better.