Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims

In response to growing public concern about the credibility and trustworthiness of Artificial Intelligence ('AI'), a group of researchers has set out to show how AI development can be made more lawful, ethical, and robust. In their 80-page report 'Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims', they present a list of recommendations that can help build public trust in AI. Without obscuring the fact that current AI has flaws, the report addresses the need to support verifiable claims about AI systems, given their undoubted influence on businesses now and in the future.

Over the next few weeks, the AI Asia Pacific Institute will address key points from the report in a series of posts. Each post will provide insight into the solutions discussed, with the aim of bridging the gap between key stakeholders such as businesses, individuals, and governments.

The posts will be structured as follows:

  1. The need to support verifiable claims in AI
  2. Institutional Mechanisms
  3. Breaking Down the Mechanisms of AI: Software Mechanisms
  4. Breaking Down the Mechanisms of AI: Hardware Mechanisms
  5. Machine Learning: How Can Privacy be Protected?
  6. Bridging the Gap between Industry and Academia