Lessons from Australia’s ‘Robodebt’ Debacle

AI systems are revolutionising how individuals interact with government and how public services are delivered. Such systems can enable new levels of efficiency and help agencies process a flood of information in a timely manner. But deploying these tools engages a range of rights, raises ethical questions and can have serious social consequences when not well managed. The online compliance intervention (OCI), better known as ‘Robodebt’, is a striking example.

The AI system known as ‘Robodebt’ used an algorithm to identify inconsistencies between the income individuals had declared to different agencies within the Australian government. Where a discrepancy was identified, an automated debt notice was generated and sent to the individual. It was not long before errors started to emerge.
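To make the mechanics concrete, here is a minimal sketch of the kind of cross-matching logic described above. It is illustrative only: the field names, threshold and figures are hypothetical, and the actual rules used by the OCI were more complex and not fully public.

```python
from typing import Optional

# Illustrative sketch only: hypothetical names and figures,
# not the actual Robodebt implementation.

def check_for_discrepancy(declared_to_welfare: float,
                          declared_to_tax_office: float,
                          tolerance: float = 0.0) -> Optional[float]:
    """Return the apparent overpayment if the two declared incomes diverge, else None."""
    difference = declared_to_tax_office - declared_to_welfare
    return difference if difference > tolerance else None


def raise_debt_notice(recipient_id: str, amount: float) -> dict:
    """Generate an automated debt notice.

    Note there is no human review step here; that omission is exactly
    the design weakness discussed in this article.
    """
    return {"recipient": recipient_id, "debt": round(amount, 2), "status": "notice_sent"}


# Hypothetical example: the two declared figures do not match,
# so a notice is generated and sent automatically.
overpayment = check_for_discrepancy(declared_to_welfare=38_000.0,
                                    declared_to_tax_office=45_000.0)
if overpayment is not None:
    print(raise_debt_notice("recipient-123", overpayment))
```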

A few months ago the Commonwealth acknowledged that many debts raised under the system had been unlawfully levied, affecting hundreds of thousands of people. Refunds totalling around AU$721 million will therefore be made to most, but not all, affected people.

It is estimated that the Australian government is spending an average of AU$50 million this year on the failed ‘Robodebt’ operation. Ironically, prior to the system’s deployment, the then Minister for Human Services, Alan Tudge, said the new tool would make a major contribution to the Government’s fraud and non-compliance savings targets. So what went wrong?

There is no doubt that an administration using modern technology and algorithms can process welfare claims more efficiently. Around AU$4.5 million in debts could be processed each day, compared with roughly AU$295,000 per day before ‘Robodebt’ was introduced, approximately a fifteen-fold increase. But with the deployment of such powerful tools comes the responsibility of ensuring that they do what they are meant to do.

One of the defining characteristics of ‘Robodebt’ was that the process shifted the onus of proof: it suddenly became the responsibility of clients to prove their innocence. That is not only unusual; it raises questions about the legality of such a system.

The Senate Standing Committee on Community Affairs set out its concerns in its report on the system:

The committee is concerned about the shift in the onus from the department to the individual recipient to verify whether or not a purported debt exists. The committee is particularly concerned that individuals do not have access to the same resources and coercive powers as the department to access historical employment income information.

Senate Standing Committee on Community Affairs, Design, scope, cost-benefit analysis, contracts awarded and implementation associated with the Better Management of the Social Welfare System initiative (21 June 2017), 107.

Reflecting on these findings shows that the system not only ran up against existing laws, but would also have breached core AI principles such as ‘Transparency and Explainability’ that are currently under discussion in Australia. Affected parties not only did not know that an algorithm was being used to make decisions affecting them; they also did not know what information or data the algorithm had used in reaching its conclusions.

While Gordon Legal is pursuing a class action over the unlawful scheme, it will be instructive to observe how the law distributes responsibility across the parties involved: should the developers be held responsible for delivering a system that lacked robustness, or will the government alone be liable for inadequate monitoring, data control and responsiveness? Government agencies undoubtedly have a responsibility to deploy technology cautiously and to identify and manage the challenges that arise along the way. But they are not the only ones bearing that responsibility. Those involved at the design stage are also responsible for ensuring that the system is robust and that the principles of deployment are well understood, such as how data quality can affect a system’s accuracy.

Another ethical principle that shows up again and again in frameworks around the world is human-centric design. Keeping humans in the loop is a crucial part of developing and deploying AI systems, and human checking for errors needs to begin at the earliest stage. When not correctly monitored, AI systems can be remarkably efficient at jumping to unfounded conclusions from inadequate data sets and training, with serious consequences.
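As a hedged illustration of what keeping a human in the loop could look like in practice, the sketch below routes every flagged discrepancy to a review queue for a human officer instead of sending a notice automatically. The names, threshold and queue structure are assumptions for illustration, not a description of any real system.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative human-in-the-loop gate; all names and values are hypothetical.

@dataclass
class ReviewQueue:
    """Cases that a human officer must examine before any debt notice is issued."""
    pending: List[dict] = field(default_factory=list)

    def submit(self, case: dict) -> None:
        self.pending.append(case)


def handle_discrepancy(recipient_id: str, apparent_debt: float,
                       queue: ReviewQueue) -> None:
    """Never auto-issue a notice: every apparent debt goes to a human reviewer."""
    queue.submit({
        "recipient": recipient_id,
        "apparent_debt": round(apparent_debt, 2),
        "action": "verify against employer and payroll records before deciding",
    })


queue = ReviewQueue()
handle_discrepancy("recipient-123", 7_000.0, queue)
print(len(queue.pending), "case(s) awaiting human review")
```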

Be that as it may, if we take AI principles seriously, a system carrying a 20% risk of inaccuracy lacks robustness and should never have been deployed in the first place where it could infringe on people’s basic rights.
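A back-of-the-envelope calculation shows why that figure matters at scale. The notice volume below is purely hypothetical; the only number taken from the discussion above is the 20% risk of inaccuracy.

```python
# Hypothetical volume; only the 20% inaccuracy rate comes from the discussion above.
notices_issued = 400_000
inaccuracy_rate = 0.20

wrongly_raised = int(notices_issued * inaccuracy_rate)
print(f"At a 20% error rate, roughly {wrongly_raised:,} notices could be wrong.")
# Prints: At a 20% error rate, roughly 80,000 notices could be wrong.
```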

We know predictive AI models and algorithms are increasingly being deployed in ways that risk targeting people and withholding assistance. It does not need to be that way: scenarios such as the ‘Robodebt’ fiasco could be avoided if we invest in a more careful process for deploying AI systems, one that concentrates on the accessibility, usability and transparency of the technology, including the quality of service delivery and procedural fairness. Many AI implementation problems can be mitigated through better planning and risk management at the outset.