Breaking Down the Mechanisms of AI – Software Mechanisms

The previous article summarised how institutions can take a ground-up approach to improving the credibility of AI. This post addresses the second mechanism for verifying AI claims, proposed in the report ‘Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims’.

The report highlights that expertise in software mechanisms is not yet widespread. This creates a trust barrier between AI developers and the businesses that want to deploy their software. An example of this barrier is where an AI developer wants to substantiate a claim that user data is kept private and confidential. Here, trust can be fostered through a formal evidentiary framework such as differential privacy: a system for sharing aggregate information about a dataset, describing the patterns of groups within it, while withholding information about the individuals it contains.
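To make the idea concrete, here is a minimal, illustrative sketch (not a production implementation) of a differentially private counting query: the true answer is perturbed with Laplace noise scaled to the query's sensitivity.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count: the true count plus Laplace noise.
    Adding or removing one record changes a count by at most 1
    (sensitivity 1), so the noise scale is 1 / epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller values of epsilon add more noise and give stronger privacy; the point of the framework is that an auditor can verify the claimed epsilon rather than needing to inspect the raw data.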

Software mechanisms (‘SMs’) ensure that the functions of an AI system can be reviewed and understood, and can assist in fostering trust and protecting privacy. Their capacity is significant: they can substantiate claims about the development of AI, complementing institutional mechanisms (discussed in the previous article) and hardware mechanisms (to be discussed). SMs entail a process of determining and revealing the functions of existing AI systems.

The three recommendations below are taken directly from the report and offer possible solutions to problems identified with these software mechanisms.

Audit Trails

Problem: AI systems lack traceable logs of steps taken in problem-definition, design, development, and operation, leading to a lack of accountability for subsequent claims about those systems’ properties and impacts.

A current barrier to implementing audit trails is that they are not yet a mature AI mechanism; it will take time and engineering work to move them from theory to practice. The onus is therefore on governments and standards bodies to set the expectation that safety-critical AI systems be fully auditable.

Recommendation: Standards setting bodies should work with academia and industry to develop audit trail requirements for safety-critical applications of AI systems.
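As a sketch of what such a requirement could look like in practice (an illustrative design, not a standard from the report), a tamper-evident audit trail can chain log entries together by hash, so that any later modification of a recorded step breaks the chain:

```python
import hashlib
import json

class AuditTrail:
    """Minimal tamper-evident audit log: each entry stores the hash of
    the previous entry, so editing any entry invalidates the chain."""

    def __init__(self):
        self.entries = []

    def record(self, stage, detail):
        """Append one step (e.g. design, training, deployment) to the log."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = {"stage": stage, "detail": detail, "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        payload["hash"] = digest
        self.entries.append(payload)

    def verify(self):
        """Recompute every hash; return False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("stage", "detail", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

An auditor holding such a log can check that the recorded problem-definition, design, and training steps have not been rewritten after the fact.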

Interpretability

Problem: It’s difficult to verify claims about “black-box” AI systems that make predictions without explanations or visibility into their inner workings. This problem is compounded by a lack of consensus on what interpretability means.

Research into the interpretability of AI systems is central to the success of trustworthy AI. Even though machine learning models learn their own decision functions, a system needs to be understood well enough to anticipate how it will behave.

More importantly, where human rights or welfare could be harmed, the report suggests that interpretability will be key to AI system audits: certain applications of AI will be judged on whether they give auditors sufficient intuition about the model’s behaviour.

Recommendation: Organizations developing AI and funding bodies should support research into the interpretability of AI systems, with a focus on supporting risk assessment and auditing.

Privacy-Preserving Machine Learning

Machine learning will be addressed more thoroughly in a future article.

Problem: A range of methods can potentially be used to verifiably safeguard the data and models involved in AI development. However, standards are lacking for evaluating new privacy-preserving machine learning techniques, and the ability to implement them currently lies outside a typical AI developer’s skill set.

The AI development community, and other relevant communities, have developed a range of methods and mechanisms to address these concerns, under the general heading of “privacy-preserving machine learning” (‘PPML’).

For individuals to trust claims about a machine learning system enough to participate in its development, they will understandably need supporting evidence about their privacy and who can access their data. PPML techniques aim to provide exactly that evidence, supporting the broader goal of trustworthy AI and privacy-secure software.

Recommendation: AI developers should develop, share, and use suites of tools for privacy-preserving machine learning that include measures of performance against common standards.
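One widely used PPML technique that such tool suites typically include is federated learning, in which models are trained locally on each participant's device and only parameter updates are shared, never the raw data. A minimal, illustrative sketch of the federated averaging (FedAvg) aggregation step:

```python
def federated_average(client_weights, client_sizes):
    """Combine model weights trained locally on each client's private
    data, weighted by each client's dataset size. Only these weight
    vectors leave the clients; the underlying records never do."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]
```

In a full system this step would be combined with the other mechanisms above, for example differentially private noise on the updates, so that the aggregated model itself does not leak information about any one participant.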