Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims

In response to growing public concern about the credibility and trustworthiness of Artificial Intelligence (‘AI’), a group of researchers has set out to show how AI can be made more lawful, ethical, and robust. In their 80-page report ‘Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims’, they present a list of recommendations intended to help build public trust in AI. Without glossing over the fact that current AI has flaws, the report addresses the need to support verifiable claims in AI, given its undeniable influence on businesses now and in the future.

Over the next 8 weeks, the AI Asia Pacific Institute will address the key points discussed in the report in a series of posts. Each post will provide insight into the solutions discussed, with the aim of bridging the gap between several key stakeholders – businesses, individuals, and governments.

The posts will be structured as follows:

a. The need to support verifiable claims in AI
b. Breaking down the mechanisms of AI
c. Institutional incident sharing – fostering communication between organisations for greater understanding of AI development.
d. Machine Learning – how can privacy be protected?
e. Bridging the gap between Industry and Academia
f. Transparency in AI
g. AI and the Individual
h. Where to from here?
i. Concluding Remarks

Each post will cover the main ideas from the report, supported by evidence from academics and real-world examples. It is hoped that this series will help share information regarding the development of AI, while consolidating the research done by the AI Asia Pacific Institute team.