Measuring Autonomy: Issues in the Legal Personality of AI

Scholars at Moscow State Law University have published an article considering AI from the viewpoint of law. Although written in the authors' native Russian, the article is accessible with the assistance of AI language tools (such as DocTranslator), allowing researchers and public policy thinkers to draw on global perspectives on AI regulation.

The joint work of I.V. Ponkin and A.I. Redkina highlights a number of the legal issues created by AI and notes the absence of global regulation in this field. It postulates that the transnational nature and use of AI necessitate sustained global cooperation and standardisation with respect to regulatory frameworks. To this end, the article focuses on emerging legal issues in AI governance.

“The normative legal consolidation of the autonomous status of artificial intelligence can and will necessarily entail [its allocation] as a special form of personality and, accordingly, about its rights.”

The authors argue that it would be erroneous to extend the inalienable rights of people to AI. Rather, the legal status of AI systems must be determined by reference to their functionality, features of implementation, and measure of autonomy. Indeed, it would make little sense to impose the same requirements on “smart” household appliances as on financial transaction systems. The article draws attention to an approach gaining traction in New Zealand whereby legal frameworks impose different approaches and principles by industry — for example, imposing higher minimum standards for AI in the defence industry than in education.

Ponkin and Redkina also note the issue of responsibility for AI. Specifically, “who should take responsibility and compensate for the damage caused by the actions of artificial intelligence?” (Čerka, Grigienė & Sirbikytė, 2015). Again, this creates new issues in different industries:

“[F]or example, in the case of using artificial intelligence in medicine, the question arises as to what extent doctors can delegate tasks related to medical diagnostics to intelligent systems without exposing themselves to the risks of increased responsibility if the system makes a mistake.”

The academics conclude that the autonomy of AI will come to define its legal status, and they identify the elements of autonomy that effective legal frameworks must address.

The article is available in Russian here.