Transforming Ethics in AI through Investment

While ethics in AI is having its moment in the spotlight, practice still lags behind these theoretical ideals. Ethical frameworks can lay a strong foundation, but on their own they are insufficient to motivate change in the industry. A change in the investment model can play a more constructive role in advancing ethical AI: investors are well placed to bridge the gap between academia and the marketplace, enabling the shift from theoretical ethical ideals to practice.

Amongst investors, the standards expected of AI ethical governance are rising. The prospect of upcoming regulations and the significant business risks attached to AI are raising the bar for investments in the industry. The argument that ethics builds trust, and that trust is financially rewarding in the long run, is not necessarily new; we are now reaching the moment where it is finally gaining traction. Recent research suggests that ethics is three times more important than competence in building trust in a company. Ethical AI is now seen as one of the important drivers of portfolio risk and return.

We don’t need to look far to find examples of organisations that have been negatively impacted by neglecting these values. The recent paper published by Hermes, Investors’ Expectations on Responsible Artificial Intelligence and Data Governance, points to a few of them. Most of us are familiar with the Cambridge Analytica case, for which Facebook was fined €10 million. Not long after that incident, Google was fined €50 million for failing to provide users with transparent and understandable information on its data-use policies, and for processing information for personalised advertisements accessible by third-party businesses. While technology companies have a history of falling short of trustworthy standards, the public is no longer so forgiving. Such breaches can now cause reputational damage that is almost impossible to repair. Research found that, following the Cambridge Analytica data-sharing scandal, more than one in four Americans deleted Facebook.

Understanding this, in recent public filings to the United States Securities and Exchange Commission (SEC), Microsoft Corporation (2020) alerted investors to risks from its growing artificial intelligence business.

And if purely reputational and financial concerns are not sufficient to drive ethical AI, there is a further consideration. As researcher Trooper Sanders puts it:

“strong ethical AI performance can be an indicator for a strong and well-managed enterprise generally, and weak performance a warning sign for more fundamental challenges that could hurt shareholders.”

In the above-mentioned paper, Hermes explains that inherent bias in AI can be identified in one of three areas – input data bias, process bias and outcomes bias – arising from how data is applied in specific contexts. The paper goes further, proposing a structured approach for investors to engage on AI and data governance, based on six core principles: trust, transparency, action, integrity, accountability and safety.
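To make the structure concrete, here is a minimal, purely hypothetical sketch of how an analyst might encode the three bias areas and six principles as a screening checklist. The class name, scoring scale and threshold are illustrative assumptions, not anything proposed in the Hermes paper:

```python
from dataclasses import dataclass, field

# Hypothetical encoding of the bias taxonomy and engagement principles
# described above; the 0-2 scoring scale is an assumption.
BIAS_AREAS = ("input data bias", "process bias", "outcomes bias")
PRINCIPLES = ("trust", "transparency", "action",
              "integrity", "accountability", "safety")

@dataclass
class AIGovernanceScreen:
    company: str
    # Analyst scores: 0 (no evidence of mitigation) to 2 (strong evidence).
    bias_scores: dict = field(
        default_factory=lambda: {a: 0 for a in BIAS_AREAS})
    principle_scores: dict = field(
        default_factory=lambda: {p: 0 for p in PRINCIPLES})

    def flags(self):
        """Return areas and principles scoring below the minimum of 1."""
        merged = {**self.bias_scores, **self.principle_scores}
        return [name for name, score in merged.items() if score < 1]

screen = AIGovernanceScreen(company="ExampleCo")
screen.bias_scores["input data bias"] = 2
screen.principle_scores["transparency"] = 1
print(screen.flags())  # remaining areas to raise in engagement with the company
```

In a sketch like this, the unaddressed areas returned by `flags()` would become the agenda for investor engagement with the portfolio company.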

In a recent research paper, James Brusseau also argues for a model of ethical investing in AI-intensive companies, one that is intellectually robust, manageable for analytics, useful for portfolio managers, and credible for investors.

Whether for investors promoting social change or for executives seeking to limit reputational and financial risk, evaluating AI systems in this way is now critical. Understanding how AI is applied is a vital step in managing stakeholder expectations and in putting ethical principles into practice when integrating AI. To conduct such assessments properly, investors will need to develop deeper layers of inquiry into the risks and implications of these AI systems.