Breaking Down the Mechanisms of AI – Hardware Mechanisms

Computing hardware enables the training, testing, and use of AI systems. This hardware ranges from sensors, networks, and memory to, most crucially, processing power, and it is arguably the backbone of all AI. Currently, the mechanisms in place to verify policies relating to the security and efficacy of the hardware itself can be compromised, whether through error or malice. As with software mechanisms, hardware mechanisms (‘HMs’) should therefore be deployed to enforce policies and help ensure trustworthy AI.

The previous article summarised mechanisms that could improve the efficacy and trustworthiness of AI software. This post addresses the third category of mechanisms for verifying AI claims: hardware mechanisms. It outlines three problems identified in the report, along with their accompanying solutions.

HMs play a key role in substantiating claims about privacy and security, and they enable transparency about organisational resources. They can also influence who holds the resources necessary to verify different claims. Overall, the report focuses on secure hardware for machine learning, which could increase the verifiability of privacy and security claims – a major issue for AI since its inception. Further, reliable hardware produces high-precision compute measurements, improving the value and comparability of claims about computing power usage. Finally, compute support for academia improves the ability of those outside industry to evaluate claims about AI systems.

The existing mechanisms that perform these verification and security functions include formal verification processes, which raise the question of who has access to the evidence of verification; remote attestation, which provides third parties with proof of the authenticity of claims; and cloud computing, which, although widely used and well protected, could still be improved. The three recommendations below are taken directly from the report and offer possible solutions to problems identified within the above HMs.
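To give a flavour of the idea behind remote attestation, here is a minimal, self-contained Python sketch. It is not drawn from the report and is far simpler than any real scheme (such as Intel SGX quoting): a device signs a hash of the code it is running together with a verifier-supplied nonce, and the verifier checks both the signature and the expected measurement. All keys, names, and values here are hypothetical stand-ins.

```python
# Toy illustration of remote attestation: a device signs a "quote"
# (a hash of the code it runs plus a verifier-supplied nonce), and the
# verifier checks the signature and the expected measurement.
import hashlib
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# In a real system this key would be embedded in the hardware and its
# public half published by the manufacturer; here we simply generate one.
device_key = ed25519.Ed25519PrivateKey.generate()
device_public_key = device_key.public_key()

EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-model-serving-code").hexdigest()

def attest(nonce: bytes) -> tuple[str, bytes]:
    """Device side: measure the running code and sign (measurement, nonce)."""
    measurement = hashlib.sha256(b"approved-model-serving-code").hexdigest()
    signature = device_key.sign(measurement.encode() + nonce)
    return measurement, signature

def verify(nonce: bytes, measurement: str, signature: bytes) -> bool:
    """Verifier side: check the signature and compare to the expected hash."""
    try:
        device_public_key.verify(signature, measurement.encode() + nonce)
    except InvalidSignature:
        return False
    return measurement == EXPECTED_MEASUREMENT

nonce = os.urandom(16)  # freshness: prevents replay of old quotes
measurement, signature = attest(nonce)
print("attestation accepted:", verify(nonce, measurement, signature))
```

The nonce is what lets a third party trust that the proof was produced now, rather than replayed from an earlier, possibly different, configuration.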

Secure Hardware for Machine Learning

Problem: Hardware security features can provide strong assurances against theft of data and models, but secure enclaves (also known as Trusted Execution Environments) are only available on commodity (non-specialized) hardware. Machine learning tasks are increasingly executed on specialized hardware accelerators, for which the development of secure enclaves faces significant up-front costs and may not be the most appropriate hardware-based solution.

Cost-based issues are significant for upcoming AI developments, and it is understandable that a strong relationship between academia and industry will be required to support the development of hardware security.

Recommendation: Industry and academia should work together to develop hardware security features for AI accelerators or otherwise establish best practices for the use of secure hardware (including secure enclaves on commodity hardware) in machine learning contexts.
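As a rough illustration of the kind of assurance a secure enclave aims to provide – model parameters remaining unreadable outside a trusted boundary – the sketch below keeps hypothetical weights encrypted at rest and decrypts them only inside a function standing in for a trusted execution environment. This is a conceptual toy under assumed names, not a recommendation from the report and not how enclave SDKs actually work; a real TEE enforces the boundary in hardware.

```python
# Conceptual sketch: model weights stay encrypted outside a "trusted"
# boundary and are decrypted only inside it. Here the boundary is just a
# Python function; a real enclave enforces it in hardware.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # would live inside the enclave / a key-management service
fernet = Fernet(key)

plaintext_weights = b"\x00\x01\x02\x03"                 # stand-in for serialised parameters
encrypted_weights = fernet.encrypt(plaintext_weights)   # safe to store or ship to untrusted hosts

def run_inside_trusted_boundary(ciphertext: bytes) -> bytes:
    """Stand-in for code executing inside a secure enclave."""
    weights = fernet.decrypt(ciphertext)  # plaintext exists only inside the boundary
    return weights                        # ...the model would run here...

assert run_inside_trusted_boundary(encrypted_weights) == plaintext_weights
```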

High-Precision Compute Measurement

Problem: The absence of standards for measuring the use of computational resources reduces the value of voluntary reporting and makes it harder to verify claims about the resources used in the AI development process.

It is currently uncertain whether new tools will simplify voluntary reporting; until such advances are made, support from AI labs is essential.

Recommendation: One or more AI labs should estimate the computing power involved in a single project in great detail, and report on the potential for wider adoption of such methods.
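To illustrate what such an estimate could involve, the sketch below uses a common back-of-the-envelope approach (not a method prescribed by the report): multiply the number of accelerators by their peak throughput, the achieved utilisation, and the training time, then cross-check against an architecture-based rule of thumb. All figures are made up for illustration.

```python
# Back-of-the-envelope training compute estimate. All numbers are
# hypothetical placeholders, not measurements from any real project.

# Hardware-side estimate: chips x peak FLOP/s x utilisation x seconds trained.
num_accelerators = 64
peak_flops_per_chip = 312e12   # e.g. peak dense FP16 throughput of a modern GPU
utilisation = 0.35             # fraction of peak actually achieved
training_days = 10

hardware_estimate = (
    num_accelerators * peak_flops_per_chip * utilisation * training_days * 86_400
)

# Architecture-side cross-check: roughly 6 FLOPs per parameter per training
# token, a common rule of thumb for dense transformer training.
parameters = 1.3e9
training_tokens = 300e9
model_estimate = 6 * parameters * training_tokens

print(f"hardware-side estimate: {hardware_estimate:.2e} FLOP")
print(f"model-side estimate:    {model_estimate:.2e} FLOP")
```

High-precision measurement, as the report envisages it, would go well beyond such estimates, but even rough figures like these only become comparable across labs once the assumptions behind them are reported in a standard way.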

Compute Support for Academia

Problem: The gap in compute resources between industry and academia limits the ability of those outside of industry to scrutinize technical claims made by AI developers, particularly those related to compute-intensive systems.

It is clear that financial constraints are a significant barrier to AI development. Progress has not come to a standstill, but greater government support for academia is necessary.

Recommendation: Government funding bodies should substantially increase funding of computing power resources for researchers in academia, in order to improve the ability of those researchers to verify claims made by industry.

As discussed, this section of the report focuses on how mechanisms pertaining to hardware can be employed to improve the credibility of AI. The next post will discuss machine learning and privacy in more detail.
