Lessons from Australia’s ‘Robodebt’ Debacle

AI systems are revolutionising how individuals interact with government and how government services are delivered. Such systems can enable new levels of efficiency and help agencies deal with floods of information in a timely manner. But deploying these tools engages a number of rights, can compromise ethical principles, and can have social consequences for the population when not well managed. The Online Compliance Intervention (OCI), known as 'Robodebt', is a cautionary example.

The AI system known as 'Robodebt' used an algorithm to identify inconsistencies between the income individuals declared to different agencies within the Australian government. Where a discrepancy was identified, an automated notice of debt was generated and sent to the individual. Unfortunately, it was not long before errors began to pile up.
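To make the mechanics concrete, the sketch below illustrates the kind of data-matching logic publicly reported to sit at the heart of the scheme: an annual income figure held by one agency is averaged across fortnights and compared with the fortnightly income the person declared to another. The function names, the 26-fortnight averaging and the tolerance parameter are illustrative assumptions, not the actual government implementation.

```python
# Illustrative sketch only; names, the averaging step and the tolerance
# are assumptions for illustration, not the real Robodebt implementation.

FORTNIGHTS_PER_YEAR = 26

def flag_discrepancy(annual_income_ato: float,
                     declared_fortnightly: list[float],
                     tolerance: float = 0.0) -> float | None:
    """Return the total shortfall between averaged and declared income,
    or None if no discrepancy is flagged.

    The averaging step is where such a process goes wrong: a person who
    earned all of their income in part of the year looks, on average, as
    though they under-declared income in every fortnight.
    """
    averaged = annual_income_ato / FORTNIGHTS_PER_YEAR
    shortfall = sum(max(0.0, averaged - d) for d in declared_fortnightly)
    return shortfall if shortfall > tolerance else None

# AU$26,000 earned entirely in the first half of the year still averages to
# AU$1,000 per fortnight, so the fortnights with no earnings are flagged
# even though the person declared their income correctly.
print(flag_discrepancy(26_000, [2_000.0] * 13 + [0.0] * 13))  # 13000.0
```

In the real scheme, a flagged shortfall then fed into a recalculation of benefit entitlement and, ultimately, an automated debt notice; the point of the sketch is simply that income averaging produces spurious discrepancies whenever earnings are uneven across the year.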

It is estimated that the Australian government is spending, on average, $50 million on the failed 'Robodebt' operation this year. Ironically, prior to its deployment, the system was expected to make a major contribution to the government's fraud and non-compliance savings policy. So, what went wrong?

There is no doubt that an administration using modern technology and algorithms can process welfare claims more efficiently. Around AU$4.5 million in debts could be processed each day, compared with AU$295,000 per day before the 'Robodebt' system was introduced. But along with the deployment of such efficient tools comes the responsibility of ensuring that they do what they are meant to do.

One of the characteristics of 'Robodebt' was that the process shifted the onus of proof: it suddenly became the responsibility of clients to prove their innocence. That is not only unusual; it also raises questions about the legality of such a system.

The Senate Standing Committee on Community Affairs published its findings on the system:

The committee is concerned about the shift in the onus from the department to the individual recipient to verify whether or not a purported debt exists. The committee is particularly concerned that individuals do not have access to the same resources and coercive powers as the department to access historical employment income information.

Senate Standing Committee on Community Affairs, Design, scope, cost-benefit analysis, contracts awarded and implementation associated with the Better Management of the Social Welfare System initiative (21 June 2017), 107.

Reflecting on these findings shows that the system not only ran up against existing law, but also breached core AI principles, such as 'Transparency & Explainability', that are currently under discussion in Australia. The affected parties did not know that an algorithm was being used to make decisions that affected them, nor did they know what information or data the algorithm had relied on in reaching its conclusions.

Gordon Legal pursued a class action over the unlawful scheme. The lawsuit concluded with a settlement in which the Commonwealth agreed to pay $112 million in compensation to 400,000 eligible applicants. In addition, the government agreed to repay more than $751 million in invalidly collected debts and will continue to provide refunds to welfare clients for debts they did not owe. This outcome is highly consequential for the government's use of AI systems going forward. The Robodebt scheme is one of the leading examples of how much human and reputational damage bad design can cause, particularly when administered as part of national government policy.

In spite of the historic and important result of the class action, it will still be relevant to observe how the law distributes responsibility across the parties involved: should the developers be held responsible for signing off on a system that lacked robustness, or will the government alone be liable for inadequate monitoring, data control and responsiveness? Government agencies undoubtedly have a responsibility to deploy technology cautiously and to identify and manage the challenges that arise along the way. But they are not the only ones bearing that responsibility. Those involved at the design stage are also responsible for ensuring that the system is robust and that the principles of its deployment are well understood, such as how data quality can affect the system's accuracy.

Another ethical principle that shows up again and again in frameworks around the world is human-centric design. Keeping humans in the loop is a crucial part of developing and deploying AI systems, and human checking for errors needs to occur from the earliest stages. When not properly monitored, AI can be remarkably efficient at jumping to unfounded conclusions from inadequate data sets and training, with serious consequences.
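As a minimal sketch of what keeping a human in the loop could look like in practice, the snippet below routes every algorithmically flagged discrepancy to a case officer for review instead of issuing a notice automatically; the data structures and review queue are hypothetical, not drawn from the actual scheme.

```python
from dataclasses import dataclass, field

@dataclass
class Discrepancy:
    person_id: str
    alleged_amount: float
    evidence: dict          # the matched records that triggered the flag

@dataclass
class ReviewQueue:
    pending: list[Discrepancy] = field(default_factory=list)

    def submit(self, d: Discrepancy) -> None:
        # The system only proposes; a human decides.
        self.pending.append(d)

def issue_notice(d: Discrepancy, approved_by_officer: bool) -> str | None:
    """A notice is generated only after explicit human approval, in contrast
    to a fully automated path where every flag becomes a debt."""
    if approved_by_officer:
        return f"Notice of debt for {d.person_id}: ${d.alleged_amount:,.2f}"
    return None  # dismissed, or sent back for further information
```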

Be that as it may, if we hold ourselves to solid AI principles, an AI system with a 20% risk of inaccuracy lacks robustness. Scenarios such as the 'Robodebt' fiasco could be avoided by investing in a more careful deployment process for AI systems: one that attends to the accessibility, usability and transparency of the technology, as well as the quality of service delivery and procedural fairness. Many AI implementation problems can be mitigated through better planning and risk management at the outset.
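One concrete form that planning can take, sketched below under stated assumptions: before scaling an automated process to the whole caseload, measure how often it disagrees with manually verified decisions on a sample, and refuse to proceed if the error rate exceeds an agreed threshold. The 5% threshold and function names are purely illustrative.

```python
def error_rate(automated: list[bool], verified: list[bool]) -> float:
    """Fraction of automated decisions that disagree with manual verification
    of the same cases (a proxy for the system's risk of inaccuracy)."""
    if len(automated) != len(verified) or not automated:
        raise ValueError("need matched, non-empty samples")
    return sum(a != v for a, v in zip(automated, verified)) / len(automated)

MAX_ACCEPTABLE_ERROR = 0.05  # illustrative threshold, agreed before rollout

def safe_to_scale_up(automated: list[bool], verified: list[bool]) -> bool:
    """Gate full deployment on performance against human-verified cases;
    a system running at anything like a 20% error rate would fail this check."""
    return error_rate(automated, verified) <= MAX_ACCEPTABLE_ERROR
```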