Classification of artificial intelligence technologies for determining civil liability

The paper supports the conclusion that there is no unified approach to civil liability for the actions of artificial intelligence. The authors propose to distinguish fault-based liability, liability regardless of fault, and liability founded on a risk-based approach. To determine the type of liability for the actions of AI, the authors outline a classification of AI technologies on four grounds: autonomy, self-learning, functionality, and the availability of data recorders. In the authors' view, the most promising approach to the legal regulation of liability for the functioning of AI technologies is the risk-based approach. The conclusions presented in the paper testify to the relevance of the research topic and to the emerging scientific understanding of civil liability for the actions of AI in Russia and abroad.


Introduction
With the development of information technology, and of artificial intelligence (AI) technologies and robotics in particular, improving the existing civil law regulation has become an urgent task, as stated in the Decree of the President of the Russian Federation [1]. The interdisciplinary perspective adopted in this paper reflects the composition of the authors' team, which includes both lawyers and specialists in the computer implementation of AI technologies. In modern Russian legal doctrine there is no unanimity with respect to the legal nature of AI: some researchers, following European legal thought, believe it is necessary to form a specific concept of an e-person in relation to artificial intelligence [2]; others, identifying AI with a robot, assume that it can be either a legal object or a legal person [3]; and still others suggest that the status of a legal person should not be assigned to artificial intelligence at all [4].
The results of recent research by European scholars, presented in the EU Expert Group's report "Liability for Artificial Intelligence and Other Emerging Digital Technologies" of 27 November 2019 [6], show that it is impossible to create a unified liability regime for diverse technologies because their characteristics differ.
The above testifies to the fact that, given the current lack of unified approaches to the legal nature of AI and to liability for its actions both in the Russian Federation and in the European Union, the tasks of adapting the regulation of social relations in the sphere of artificial intelligence, as set by the President of the Russian Federation, are urgent and require resolution.

Methods
The authors propose to confine this paper to determining an approach to liability for the actions of artificial intelligence. To this end, the authors classify AI technologies on the following grounds: 1. Ability to independently define a range of tasks and find solutions without human involvement.
(i) "Weak" ("passive"). Capable of performing pre-defined types of tasks and limited to those tasks. (ii) "Strong" ("active"). Capable of matching or exceeding human intelligence and of applying its abilities to any specified task, similarly to the human brain.
2. Ability to function without human involvement.
(i) Low autonomy, ability to execute the specified programs and procedures only at the user's command. (ii) High autonomy, ability to execute the specified programs and procedures without the user's active involvement.
It should be mentioned that the authors rather roughly divide the artificial intelligence technologies into two categories: with low autonomy and with high autonomy, whereas some more categories are singled out in practice. For example, SAE International singles out six levels of autonomy in automotive industry [7].
3. Ability to acquire new functions without human involvement.
(i) Self-learning. Has the capacity to acquire new skills and abilities without the developer's direct involvement, extracting knowledge and production rules from information not provided by the developer during initial training or updates, and then applying them. (ii) Non-self-learning ("untrained"). Has no capacity to acquire new skills and abilities without direct human involvement.
4. Functionality (the number of functions performed).
(i) Monofunctional. AI is used to perform one specific function.
(ii) Multifunctional. AI is used to perform several specific functions.

5. Availability of data recorders.
(i) Software and hardware data recorders for storing the information about AI functioning are available. (ii) Neither software nor hardware data recorders for storing the information about AI functioning are available.
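As a minimal illustrative sketch, the five classification grounds above can be encoded as a simple data structure; all type and field names below are assumptions introduced purely for illustration, not terms from the legal analysis:

```python
from dataclasses import dataclass
from enum import Enum

class Strength(Enum):
    WEAK = "weak/passive"      # limited to pre-defined types of tasks
    STRONG = "strong/active"   # human-level abilities, independent goal-setting

class Autonomy(Enum):
    LOW = "low"    # executes programs only at the user's command
    HIGH = "high"  # executes programs without the user's active involvement

@dataclass(frozen=True)
class AIProfile:
    """One AI technology described along the five classification grounds."""
    strength: Strength
    autonomy: Autonomy
    self_learning: bool      # acquires new skills without the developer
    multifunctional: bool    # performs several functions rather than one
    has_data_recorder: bool  # software/hardware log of the AI's functioning

# Hypothetical example: a weak but highly autonomous, self-learning,
# monofunctional system equipped with a data recorder.
example = AIProfile(Strength.WEAK, Autonomy.HIGH,
                    self_learning=True,
                    multifunctional=False,
                    has_data_recorder=True)
print(example.autonomy.value)
```

Such a profile merely records the characteristics of a given technology; which liability model applies to a given combination of characteristics is a matter for the legal analysis in the following sections.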
AI technologies themselves and machine learning methods may be considered a sixth criterion, though this does not mean the criterion is sixth in order of significance. These include, first, neural simulation and agent-based modeling technologies, including swarm intelligence, and, second, optimization techniques such as evolutionary and genetic search.
Of course, another significant criterion for classifying AI technologies is the field of application: medicine, security, energy, production, entertainment, etc. However, the sphere of application of AI technologies as a ground for defining the liability model is deliberately set aside in this paper.
It is important to understand that not all of the above criteria for classifying AI technologies are suitable for determining the model of the developer's civil liability for the actions of AI. For example, at present all known AI technologies should be classified as passive. This is why, for the time being, the "activity" criterion cannot serve as a ground for defining the liability model. If a "strong", or "active", artificial intelligence capable of independent goal-setting, reflection and awareness of the consequences of its actions is created, the question of recognizing such an AI as an independent legal person will become relevant. However, as stated above, the report of the European Parliament's Committee on Legal Affairs and other sources [8][9][10] emphasize that there is currently no need to recognize AI as a legal person.

Results
The authors propose to consider three basic approaches to regulating liability for the actions of AI: fault-based liability, liability regardless of fault, and liability founded on risk-based legal regulation. In the latter case, liability is assigned to the person who should have fulfilled the duty of minimizing risks and preventing harmful consequences. In the authors' view, the significant criteria for determining a liability model are autonomy, self-learning, functionality, and the availability of data recorders. The approaches to regulating civil liability for the actions of AI proposed by the authors are outlined in the table below.

Conclusion
Although the identified characteristics of AI technologies correspond to strict liability, i.e. liability regardless of fault, the authors believe that the most promising approach to regulating liability for the actions of AI is the risk-based approach, implemented, for example, through developing industry standards, self-regulation, etc. This conclusion rests on the fact that, as AI technologies develop, it will become increasingly difficult, and in some cases impossible, to predict and control their actions. Strategies to manage the risks associated with the use of strong AI will have to be developed, and the institution of civil liability has an important role in addressing this problem. With the progress of science and technology, new AI technologies and, accordingly, new grounds for their classification will certainly appear. It is important to note that, in order to determine the type of civil liability, in addition to the criteria outlined in this paper, it is necessary to take into account the legally protected interests (life, health, reputation, etc.); the spheres and fields in which AI is used (medicine, security, entertainment, etc.); the methods and technologies applied (artificial neural networks, expert systems, multi-agent systems, etc.); and the parties involved (developer, integrator, operator, service organization), etc.