Can Business Use of AI be Ethical?

Businesses prosper if they achieve two things: profitability and persistence. Profitability requires a successful business model and a functioning organization. Persistence requires a positive perception of the business in its chosen market.

For consumer-oriented businesses, this means their products, services and branding must be engaging, and must remain relevant by aligning with consumer trends and beliefs. In a digital age, it means being, and being seen to be, ethical, responsible, responsive and, increasingly, sustainable.

The focus of this article is on how Artificial Intelligence (AI) in general, and Machine Learning (ML) in particular, may help businesses to persist, and how they can best do so by ensuring their use of AI is ethical. That is, that it follows principles generally accepted in our society as beneficial or ‘good’.

How Important Is AI?

We all know our future will be determined by how we interact with AI and that, to varying degrees, our lives will be driven or controlled by it. This has been envisioned in science fiction for years, and is fast becoming fact.

The European Commission (EC) “Co-ordinated Plan on the Development and Use of AI Made in Europe – 2018” reports that

“AI will be the main driver of economic and productivity growth and will contribute to the sustainability and viability of the industrial base in Europe,” 

going on to compare it to the advent of the steam engine and electricity.

What are AI and ML?

There are no agreed definitions, but in general terms:

ML uses training algorithms and historic data to predict the probable outcome of a situation. The training algorithms are in most respects statistical analysis techniques of varying degrees of sophistication and complexity.

A practical example is facial recognition: shown many faces, an ML system learns to recognize and select a particular individual from among others.
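To make that idea concrete, here is a minimal sketch in Python, assuming scikit-learn is available. The feature values, labels and new case are invented for illustration; a real system would use far larger training sets and more sophisticated models.

```python
# A minimal sketch of the ML idea described above: fit a statistical model
# to historic, labelled data and use it to predict the probable outcome of
# a new case. All values here are illustrative only.
from sklearn.linear_model import LogisticRegression

# Historic data: each row is a past case, each column a measured feature.
X_train = [[0.2, 1.1], [0.4, 0.9], [3.1, 2.8], [2.9, 3.3]]
y_train = [0, 0, 1, 1]          # Known outcomes for those past cases.

model = LogisticRegression()
model.fit(X_train, y_train)     # "Training" = estimating model parameters.

# Predict the probable outcome of a new, unseen case.
print(model.predict_proba([[2.5, 2.7]]))  # e.g. outcome 1 is the more likely
```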

AI uses ML plus some knowledge of context or rules (machine reasoning) and other inputs (robotics, etc.) to make reasoned decisions. That is, it applies a layer of intelligence to the analysis of the data.

In the example above, shown a variety of pictures, an AI system can determine which images are of human faces.
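As a toy illustration of that layering (my own sketch, not drawn from the article’s sources), the ML component below produces a probability and a small explicit rule layer turns it into a context-aware decision. The function names, threshold values and placeholder probability are assumptions.

```python
# Toy illustration: an ML component returns a probability, and a "reasoning"
# layer of explicit rules turns it into a decision that respects context.
def ml_face_match_probability(image) -> float:
    """Stand-in for a trained ML model; returns a match probability."""
    return 0.93  # placeholder value for illustration

def decide_entry(image, door_is_high_security: bool) -> str:
    p = ml_face_match_probability(image)
    # Rule layer: context changes how much confidence is required.
    threshold = 0.99 if door_is_high_security else 0.90
    if p >= threshold:
        return "open door"
    return "refer to human guard"   # human oversight as a fall-back

print(decide_entry(image=None, door_is_high_security=True))
```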

How would we know an AI system is ethical?

The easiest way to recognize an ethical AI system is that its goal relates to, or promotes, human prosperity, health and well-being. For example, the answers to questions such as these would provide an indicator of good or bad intent:

  1. Do your AI systems conform to published standards?
  2. Do your AI systems conform to regulatory guidelines?
  3. Are they visible and are details of how they work published?
  4. Are they robust and safe?
  5. Can I see my data, or opt out of your algorithms?
  6. Who else can see them?
  7. Who views the output and to what use is it put?

Is AI ethics a subject of study?

There have been, and continue to be, many studies of AI ethics. These usually result in the publication of guidelines or voluntary standards. The EC’s “Draft Ethics Guidelines for Trustworthy AI” is typical.

Such documents represent the beginnings of ethical standards and regulatory guidelines. The EC document is comprehensive but theoretical rather than practical; it does at least recognize the need for ‘domain-specific’ guidelines.

What do we know so far?

The answer to this is: not much. The AI guidelines produced so far typically define the following:

PURPOSE

AI systems should be designed solely to benefit humans, and should reflect our multicultural values.

PRINCIPLES

Very broad principles, such as ‘doing good’ or ‘doing no harm’, which are intended to be indicative. That is, voluntary rather than binding.

ASSESSMENT

Why, and to what degree, accountability and governance are important, and how they should be implemented.

For example, the EC document proposes that all AI systems should do good (Beneficence), do no harm (Non-maleficence), preserve human agency (Autonomy), be fair (Justice), and operate transparently (Explicability). The principles underpinning these are then defined as:

  • Accountability: for any discrimination or mistakes made
  • Data Governance: removal of bias in training sets
  • Design for all: accessibility of benefits
  • Human oversight: human review
  • Non-Discrimination: removal of bias
  • Respect for Human Autonomy: avoiding abuse by government, business
  • Respect for Privacy: restrict access to data by operators
  • Robustness: auditable, repeatable, resilient, fall-back
  • Safety: no human or environmental harm
  • Transparency: able to audit algorithms

The above are theoretical rather than practical. Their practical guidance is limited to definitions of the following:

TECHNICAL METHODS to ENSURE COMPLIANCE

  • Privacy by design and Security by design
  • Procedural constraints (ie defined boundaries)
  • Testing and validating all inputs
  • Traceability and explicability
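As an illustration of how two of these methods might look in code, the sketch below validates every input against defined boundaries (a procedural constraint) and logs each decision so it can later be traced and explained. The field names, permitted range and placeholder score are assumptions for illustration, not taken from any published guideline.

```python
# Sketch: procedural constraints (input validation) plus traceability
# (an audit log of every decision the model makes).
import logging

logging.basicConfig(filename="decisions.log", level=logging.INFO)

def validate_input(record: dict) -> dict:
    # Procedural constraint: reject values outside the defined boundaries.
    if not 18 <= record.get("age", -1) <= 120:
        raise ValueError("age outside permitted range")
    return record

def score_and_trace(record: dict, model_version: str = "v1.0") -> float:
    record = validate_input(record)
    score = 0.5  # placeholder for the model's real prediction
    # Traceability: record what was decided, by which model, on what input.
    logging.info("model=%s input=%s score=%.2f", model_version, record, score)
    return score

score_and_trace({"age": 42})
```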

NON-TECHNICAL METHODS to ENSURE COMPLIANCE

  • Regulation
  • Standardization
  • Governance
  • Codes of conduct
  • Education and awareness
  • Stakeholder and social dialogue
  • Diversity of team members

Again, these are very general. It is probably best to see this document and its equivalents elsewhere as a beginning.

How do we move from theory to practice?

The logical next step is to consider and study each application as it arises. I have selected four examples below to show the kinds of issues that arise. For each issue, debate and definition of what is and is not acceptable are required, with guidelines being produced and, in my opinion at least, legal reinforcement inevitably following to ensure they are observed.

Example 1: Recruitment

Application: Selecting candidates based on ML analysis of resumés. The system is trained on previous applicants’ resumés, both successful and unsuccessful, plus employee records showing duration of employment, promotions, disciplinary record, etc.
Issue: The system reproduces existing biases – gender, race, class, sexuality.
Resolution: A program/algorithm is needed to correct the bias (a simple check of this kind is sketched below). Fortunately, such corrections exist and are implementable, but they will probably require legal reinforcement.
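A hedged sketch of one way the bias issue can be detected: compare the model’s selection rate across groups, a simple demographic parity check. The groups and decisions are invented for illustration; correcting the bias (for example by reweighting the training data) is a further step not shown here.

```python
# Compare selection rates per group to surface possible bias in a
# recruitment model's decisions (a basic demographic parity check).
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

decisions = [("A", True), ("A", True), ("A", False),
             ("B", False), ("B", False), ("B", True)]
print(selection_rates(decisions))  # roughly {'A': 0.67, 'B': 0.33}: a gap worth correcting
```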

Example 2: Customer Churn

Application: Using account behavior to predict which subscribers are most likely to cancel, thus improving retention.
Issue: Privacy – who has access to the data, and is there a right to delete it or opt out?
Resolution: Likely needs legal sanction.
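For illustration, a minimal churn model of the kind described above, assuming scikit-learn is available. The behavioural features (logins per month, support tickets) and the toy data are assumptions; a production system would use far richer account history, which is exactly why the privacy question matters.

```python
# Toy churn predictor: account behaviour in, cancellation probability out.
from sklearn.ensemble import RandomForestClassifier

# Historic subscribers: [logins_per_month, support_tickets], churned?
X = [[20, 0], [18, 1], [2, 4], [1, 3], [15, 0], [3, 5]]
y = [0, 0, 1, 1, 0, 1]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Probability that a quiet, complaint-heavy account will cancel.
print(model.predict_proba([[2, 3]])[0][1])
```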

Example 3: Autonomous vehicles

Application: Self-driving cars, lorries, etc.
Issue: Who is at fault if the software causes harm, e.g. a death?
Resolution: Fully auditable development, testing and operational recordings are required. Parallels with the aircraft industry abound – witness the recent problems with the Boeing 737 Max. Likely needs legal sanction.
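One element of that resolution, the auditable operational record, might look something like the sketch below: every control decision is appended to a log together with a timestamp, the software version and the sensor readings that drove it. The field names and values are illustrative assumptions.

```python
# Append-only audit trail of control decisions for later investigation.
import json, time

def record_decision(logfile, software_version, sensor_snapshot, action):
    entry = {
        "timestamp": time.time(),
        "software_version": software_version,
        "sensors": sensor_snapshot,
        "action": action,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(entry) + "\n")   # one JSON line per decision

record_decision("drive.log", "2.3.1",
                {"speed_kmh": 48, "obstacle_distance_m": 12.5}, "brake")
```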

Example 4: Facebook, YouTube

Application: Social networks.
Issue: A range of issues around privacy, algorithmic bias and manipulation of results.
Resolution: Visibility of data and algorithms, a right to opt out, and an alternative to data-for-monetization in the form of a paid subscription. Given that Facebook’s reaction so far has been increased lobbying expenditure and PR activity rather than recognizing and fixing the problem, this will likely require legal sanction.

Will Businesses Deploy AI Ethically?

Most business leaders I have met are very keen to behave responsibly, and to be seen to do so. Those serving consumers, in particular, know that they must increasingly be seen to respond to societal demands. This is reflected in their growing concern with sustainability and inclusive practices.

The bad news is that the technology sector itself is an exception, especially the largest technology companies. The indications to date are that technology companies are failing to adapt to a changing socio-political landscape, preferring to innovate and ‘see what happens’.

In addition to legal sanction, research is ongoing into whether AI systems can be trained to behave ethically, no matter who owns them.

Stephen Hill