Why Trustworthy AI

Trustworthy AI places trust at the center of AI development. Many different terms have been suggested in the industry, from ethical AI to responsible AI, so it is worth spending some time defining what we consider Trustworthy AI to be and why we choose to work with this definition.

We believe that, just as AI is powerfully driving the fourth industrial revolution, trust is the core foundation driving innovation. It has been said that trust is the currency of the future. If that is the case, then trust will be a cornerstone of the economy and a determining factor for organisations expecting to remain relevant in the years to come.

For us to enable trust in AI systems, they must demonstrate more than mere ethicality. We therefore follow the approach put forward by the European Commission. That is:

“Trustworthy AI has three components, which should be met throughout the system’s entire life cycle: (1) it should be lawful, complying with all applicable laws and regulations; (2) it should be ethical, ensuring adherence to ethical principles and values; and (3) it should be robust, both from a technical and social perspective since, even with good intentions, AI systems can cause unintentional harm.”

Our proposed model is based on the above proposition: an AI system is deemed trustworthy if it is lawful, ethical and robust.

To expand on what each of these components represents, we need to keep asking ourselves the following questions.

  1. Does the AI system respect existing local and international laws? Such laws might encompass data-sharing legislation, a Privacy Act (where one exists), and/or any international agreement to which the country is a signatory.
  2. Does the AI system respect existing ethical principles?
  3. Can the system demonstrate robustness? Although some frameworks treat this as another principle, we treat this criterion as a separate requirement: throughout their lifecycle, AI systems should reliably operate in accordance with their intended purpose. For example, a self-driving car that can only operate in fair weather conditions is not robust enough for deployment.

Each component is necessary for the achievement of Trustworthy AI, but none is sufficient on its own. This is an evolving field, so organisations must continue to mature their processes in accordance with industry developments.

It is worth highlighting why organisations need to prioritise this process and adopt a mindset geared towards the development of Trustworthy AI. We will call this the Trustworthy Advantage; the following are some relevant considerations:

Economic and Social Benefits

What can we gain economically by being more ethical? The Ethics Centre recently commissioned Deloitte Access Economics to develop a framework for quantifying the benefits of a more ethical society. The result is the report The Ethical Advantage: The Economic and Social Benefits of Ethics to Australia, the first of its kind to quantify the benefits of ethics.

Although the study focuses on Australia, one chart plots average GDP per capita (log) against average trust for a sample of 59 countries between 2010 and 2016. Countries with higher levels of trust also tended to have higher income levels, and approximately one-fifth of the cross-country variation in GDP per capita is related to differences in trust levels.

Many other recent studies point to similar results: more trust brings more economic growth and social benefit. The 2020 Edelman Trust Barometer Special Report reveals that brand trust (53 percent) is the second most important purchasing factor across most geographies, age groups, genders and income levels, trailing only price (64 percent). Brand trust was found to be more important than all other factors, such as reputation or performance.

Investment Considerations  

Investors’ expectations of AI governance are rising. The prospect of upcoming regulations and the significant business risks attached to AI are raising the bar for investments in the industry. So what specifically are investors looking for when choosing AI companies to invest in?

Janet Wong, CFA, from the Asia and global emerging markets stewardship team at EOS at Federated Hermes, recently joined us to share more about these expectations. At a minimum, investors expect companies to demonstrate the following:

  1. evidence of AI governance and oversight within the company, including clear responsibility on the board level to oversee AI related issues;
  2. evidence of a public commitment to trustworthy AI; and
  3. evidence of how the company is operationalising these ethical principles.

Read more here or listen to our full conversation with Janet Wong.

Compliance and Regulatory Considerations

The industry is constantly evolving, and many recent developments point to an increasing intention to shape AI development, raising the bar and building trust. What this will look like in the near future is a matter of debate. What we do know is that we can expect more initiatives as the industry continues to draw the attention of policymakers, legal professionals, and politicians.

This might take the form of broad regulation or sector-specific legislation. The latter was the approach Singapore followed with Veritas, an initiative created for financial institutions to promote the responsible adoption of Artificial Intelligence and Data Analytics (‘AIDA’).

Europe has often been at the forefront of these developments, and members of its Parliament recently voted in favour of proposals designed to regulate AI in terms of ethics, liability and intellectual property rights. If integrated into the European Commission’s legislative proposal, expected early next year, these recommendations would make the EU one of the first regions to create a structured framework for these more complicated areas of AI governance.

However regulatory developments unfold over the next few years, organisations that place trust at the top of their priorities will unleash new potential. Given the business risk attached to trust in AI, understanding AI applications is a vital step in managing stakeholder expectations and in applying good practice during their integration.

These efforts towards trust must be collaborative. For this reason, our work approaches AI in a multidisciplinary way. We can create the future we want – if we all work together.

If your organisation needs support while navigating its way towards Trustworthy AI, we are here to help.