Last week, the Australian Human Rights Commission (AHRC) published a technical paper addressing the problem of algorithmic bias.
In producing the paper, the AHRC collaborated with the Gradient Institute, the Consumer Policy Research Centre, CHOICE and CSIRO’s Data61. By working through multiple scenarios, the paper highlights how algorithmic bias can affect real-world societal equality. Moreover, the research demonstrates that harm arising from algorithmic bias may give rise to legal liability.
So what is algorithmic bias? The paper’s authors define the term as:
“predictions or outputs from an AI system, where those predictions or outputs exhibit erroneous or unjustified differential treatment between two groups.”
Algorithmic bias has the potential to cause significant harm where it discriminates based on age, disability, race, sex, or sexual orientation. In Australia, these are protected attributes, and discrimination on their basis may violate federal law. Even setting aside egalitarian goals, there is a clear incentive to minimise algorithmic bias simply to avoid legal liability.
The paper focuses primarily on how algorithmic bias engages human rights in the consumer context. AI systems and predictive modelling are used to assist decision-making particularly in the financial services, telecommunications, energy and human resources sectors. Through its simulations, however, the research shows that the risk of algorithmic bias extends to almost any commercial context where AI systems contribute to decision-making.
The overall message of the technical paper is that remedies are available for errors leading to discrimination; with the right fine-tuning, errors in AI systems can be corrected or minimised. On the paper’s analysis, algorithmic bias arises from the interaction of three factors: existing inequalities, inaccuracies or deficiencies in data, and defects in the AI system itself.
Perhaps most importantly, the paper sets out a “Toolkit” offering five ways to mitigate algorithmic bias. These are:
- Acquire more appropriate data. There is a responsibility on AI system creators to obtain additional data relating to individuals inaccurately represented or under-represented within the dataset.
- Pre-process the data. This tool allows developers to remove some information before the dataset is used in training. For example, a developer might mute an attribute such as gender or age (see the first sketch after this list).
- Increase model complexity. Oversimplified models may be easier to manage, but what developers gain in manageability they may lose in accuracy: simple models can lead algorithms to make generalisations that favour a majority group. It may therefore be beneficial to increase a model’s complexity so that it accounts for a broader range of factors.
- Modify the AI system. This involves accounting for errors within the algorithm itself. One effective modification is to apply positive bias within the algorithm to allow it to self-correct detrimental effects (see the second sketch after this list).
- Change the target. If errors persist, developers may need to identify new quantitative metrics against which the system’s assessments are made.
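To make the pre-processing tool concrete, the sketch below shows one way a developer might mute protected attributes before training. It is a minimal illustration under assumed conditions, not the paper’s method: the dataset, the column names (`gender`, `age`, `approved`) and the choice of model are all invented for the example.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical consumer-lending dataset; the columns are illustrative only.
data = pd.DataFrame({
    "income":   [52_000, 34_000, 78_000, 41_000, 63_000, 45_000],
    "gender":   ["F", "M", "F", "M", "F", "M"],
    "age":      [29, 47, 35, 52, 41, 38],
    "approved": [1, 0, 1, 0, 1, 1],
})

PROTECTED = ["gender", "age"]

# "Mute" the protected attributes so the model cannot learn from them directly.
features = data.drop(columns=PROTECTED + ["approved"])
labels = data["approved"]

model = LogisticRegression().fit(features, labels)
```

Removing protected attributes does not remove proxy variables that correlate with them, which is one reason the paper presents pre-processing as only one of several tools rather than a complete remedy.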
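Similarly, the “modify the AI system” tool can be pictured as first measuring the differential treatment between two groups and then deliberately counter-biasing the decision rule. The sketch below assumes a scored binary decision; the group labels, scores and thresholds are invented for illustration and do not reflect the paper’s implementation.

```python
import numpy as np

scores = np.array([0.81, 0.40, 0.65, 0.35, 0.72, 0.30])  # model outputs
group  = np.array(["A", "A", "A", "B", "B", "B"])         # group membership

def approval_rate(threshold, mask):
    """Share of the masked group whose score clears the threshold."""
    return float((scores[mask] >= threshold).mean())

base = 0.5
print("Group A approval rate:", approval_rate(base, group == "A"))
print("Group B approval rate:", approval_rate(base, group == "B"))

# One crude corrective modification: relax the threshold for the group that is
# treated less favourably, so the decision rule counteracts the measured bias.
thresholds = {"A": base, "B": 0.3}
decisions = scores >= np.array([thresholds[g] for g in group])
print("Adjusted decisions:", decisions)
```

In practice the size of any such adjustment would need careful justification, which is consistent with the paper’s emphasis on accountability for how AI systems reach their outputs.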
The paper concludes that far-reaching consideration must be given to the accountability of AI systems in order to ensure reliable consumer outcomes. Further, businesses must ensure they comply with existing anti-discrimination laws and practise the responsible use of AI and data.
You can access the full technical paper here, through the Australian Human Rights Commission.