The Need to Support Verifiable Claims in AI

The current mechanisms in place to govern Artificial Intelligence (‘AI’) are insufficient to ensure responsible and trustworthy AI; safeguards are falling behind the pace of development of AI systems themselves. Researchers, institutions and academics have taken steps to address this gap. One such step was a report titled ‘Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims’. The motivation of the report is to ensure that AI is beneficial, not detrimental, to humankind. As the first of eight posts on this report, this post outlines why it is important to verify claims about AI.

Verifiable claims are defined in the report as ‘falsifiable statements for which evidence and arguments can be brought to bear on the likelihood of those claims being true’. An example of where a verifiable claim about an AI system would be required is a claim that the system does not discriminate on the basis of skin colour or otherwise produce biased outcomes.

Factors such as bias based on gender, ethnicity, and sexual orientation are a concern for institutions interested in the potential of AI. This was the case with one of the GAFAM companies. Between 2014 and 2017, the company’s recruitment team used AI-enabled recruitment software to assist with reviewing resumes and to provide recommendations to human employees. Unfortunately, the software was ultimately found to favour male applicants, because it had been trained on previous resumes that came predominantly from male candidates. The software was consequently abandoned, but it remains an important real-world example of AI-enabled software producing results that are biased on the basis of gender.

This case shows why it is important to have mechanisms in place to verify claims about AI systems prior to their deployment. Mechanisms such as third-party auditing to ensure accountability, audit trails to identify bias before deployment, and collaboration between institutions implementing similar software can all assist with verifying AI claims.
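To make the idea of an audit trail concrete, the sketch below shows one way a pre-deployment bias audit might be run over a resume-screening model’s recommendations. It is only a minimal illustration under assumed conditions: the audit log, the group labels, and the four-fifths threshold are hypothetical examples, not mechanisms prescribed by the report.

```python
# Illustrative sketch (hypothetical data): compare a screening model's
# selection rates across gender groups and flag possible adverse impact
# before the system is deployed.

from collections import defaultdict

def selection_rates(records):
    """Return the fraction of applicants recommended, per group."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, recommended in records:
        totals[group] += 1
        if recommended:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical audit log: (gender, did the model recommend the applicant?)
    audit_log = [
        ("male", True), ("male", True), ("male", False), ("male", True),
        ("female", True), ("female", False), ("female", False), ("female", False),
    ]
    rates = selection_rates(audit_log)
    ratio = disparate_impact_ratio(rates)
    print("Selection rates:", rates)
    print("Disparate impact ratio: %.2f" % ratio)
    if ratio < 0.8:  # four-fifths rule of thumb, used here only as an example benchmark
        print("Warning: possible adverse impact; investigate before deployment.")
```

A check like this does not prove a system is fair, but keeping such results in an audit trail gives third parties evidence against which a claim of non-discrimination can later be verified.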

An additional situation in which AI has failed involves facial recognition. The technology was designed to address a jaywalking problem; however, it identified a face on a bus advertisement rather than an actual jaywalker. This sparked significant public outrage, which undermined the legitimacy of both the technology and its use in police enforcement. In this case, the technology had not been trialled sufficiently to ensure that its identifications were based on the features of a real, three-dimensional human face rather than a flat image. The technology has since been improved. However, issues like this are the result of deploying a technology which, although advanced, has not been fully verified.

Of course, errors are likely to be made throughout the process of developing AI technology. However, to prevent as many errors as possible, it is important for organisations to have checks and balances in place. In addition, by implementing mechanisms for supporting verifiable claims about AI, the public and institutions are more likely to trust the software and make use of it. This will build public confidence in its potential for good.

The next post will outline the first tier of the three-tiered set of supporting mechanisms proposed in the report. These are:

– Institutional Mechanisms

– Software Mechanisms

– Hardware Mechanisms