Becoming Human: Legal Personality for AI

Hamza Imran
9 min read · Jan 10, 2020

Imagine the year 2023: for the first time, an autonomous vehicle navigating the city lanes hits and kills a pedestrian. A lawsuit is sure to follow. But which laws would govern and establish liability? Would the manufacturer of the self-driving car be liable, or the autonomous car itself, which made decisions on its own? It remains a mystery. This scenario raises one of the most significant questions as we move into the future of AI. Artificial intelligence (AI), specifically through machine learning techniques, enables a robot to learn from its own experience. With advances in machine learning, there exists a possibility that robots might develop novel solutions that ultimately displace human decision-making.

Moreover, with artificial intelligence possessing the ability to learn and reach solutions on its own through unsupervised machine learning, it will be impossible to ascertain in advance whether such a class of robots will damage property or cause injury. The foreseeability of such an occurrence would be negligible, and hence it would be impossible to determine liability for damages arising from the actions of an AI. These autonomous learning machines would therefore create circumstances in which the manufacturer could not be held morally or legally liable for decisions the machine took on its own, because the programmer was unable to foresee the occurrence of such harm. Previously, the human operator or product manufacturer would be under investigation in the event of any injury, and their conduct would be the key focus; with the emergence of artificial intelligence, however, the conduct of the manufacturer becomes irrelevant.

The era of artificially intelligent robots is upon us.

In light of such advancement, there is a need for a proactive rather than a reactive approach to AI liability. This paper explores some of the key issues pertaining to artificial intelligence and legal liability. Firstly, it briefly explains what artificial intelligence means for the purposes of this paper. Secondly, it sets out the current legal structures that could accommodate the legal liability of artificial intelligence; in doing so, it considers whether AI should be classified as a product or a service. Thirdly, it identifies the inherent flaws that arise from bringing AI-based technology under the current principles of negligence and strict liability. Fourthly, it provides a detailed analysis of the possibility of legal personality for artificial intelligence. Lastly, it explores possible legal solutions to address AI liability.

The term “artificial intelligence” requires attention first, as its definition is widely debated by researchers. Some argue that anything with the capability to impersonate human intelligence using machine learning techniques should be termed “artificial intelligence”; others think that artificial intelligence is a program that can imitate the way in which humans think. Moreover, some researchers believe that artificial intelligence programs are merely complex information systems, with ‘true’ artificial intelligence being reserved for meta-level decision-making otherwise regarded as ‘wisdom’.[1] For the purposes of this paper, however, any computer system that possesses the capability to recognize an event and to make autonomous decisions through unsupervised machine learning is regarded as an artificially intelligent system. In other words, a system that has agency and full autonomy to make decisions without human intervention is to be called an artificially intelligent system.
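
To make this working definition concrete, the sketch below shows unsupervised learning in miniature: a toy one-dimensional k-means routine run on made-up sensor readings (the data, function name, and parameters are illustrative assumptions, not any real product’s algorithm). The programmer never labels the groups; the system infers them from the data on its own.

```python
import random

# Minimal unsupervised-learning sketch: the program is never told what the
# groups are; it discovers them from the data (toy 1-D k-means, illustrative).
def kmeans_1d(points, k=2, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)               # start from random guesses
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                          # assign each reading to the
            i = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)                 # nearest current center
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Hypothetical sensor readings with two hidden regimes nobody labelled:
readings = [0.9, 1.1, 1.0, 5.2, 4.8, 5.0, 1.2, 4.9]
print(kmeans_1d(readings))  # the system infers groups near 1.0 and 5.0 itself
```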

In the event that software or a product is defective, or an individual is injured as a consequence of using the product, the resulting legal proceedings usually allege the tort of negligence. Product liability claims are largely based on the theories of negligence or strict liability. Some scholars argue that traditional tort law could be applied to AI, just as tort law adjusted to the crashworthiness of automobiles. If an agreement states that vehicle manufacturers are responsible for defects in their AI, the manufacturer could be held responsible; in the absence of such an agreement, however, the courts would have to determine where the fault arose. The court would have to examine a central issue: did the consumer have autonomy over a product assisted by artificial intelligence, or did the artificial intelligence have ultimate autonomy over the product’s operation?

When it comes to liability under tort law, it is essential to establish whether AI falls under a product or a service.[2] Strict liability applies where a defect in a product’s design, manufacture, or warnings results in personal injury or property damage, whereas negligence applies to services, for instance, data analysis used to determine maintenance. There are three types of product defects: (1) design defects; (2) manufacturing defects; and (3) defects in marketing. Liability for AI would be established if it is considered a product and exhibits any of the above-mentioned defects. There are certain criteria for determining whether artificial intelligence is a product or a service. Some courts differentiate between the two by positing that the software itself is a product, but any information the software generates on its own is not. Similarly, under the Uniform Commercial Code, mass-produced software would be considered a product, while software specially designed for a particular customer is a service. In applying the tort of negligence, some scholars argue that the negligence standard should also apply to AI because AI is “stepping into the shoes” of humans.[3] Perhaps, in doing so, the ‘reasonable man’ standard could be adapted into a ‘reasonable computer’ standard. The idea may seem absurd, but it could be a way forward for ascertaining legal liability. A court applying the ‘reasonable computer’ standard might ask standard questions, for instance: should a standard autonomous vehicle be able to avoid hitting a pedestrian?

Under the tort of negligence, three simple elements need to be proved: firstly, that there existed a duty of care; secondly, that the duty of care was breached; and thirdly, that there was direct causation between the breach and the injury to the plaintiff. In many jurisdictions, the burden is on the aggrieved party to prove the defectiveness of the product that caused the injury. However, it becomes difficult to identify defects in autonomous robots, which may function without any mechanical defect yet still cause property damage because of their machine learning capabilities.[4] In some jurisdictions, the burden of proof is on the defendant to show that he took reasonable care. To determine negligence, the manufacturer is judged against the reasonable man standard: the reasonable man takes reasonable measures to prevent any foreseeable damage, and if the manufacturer falls short of that standard, he is held negligent.[5] When we apply the same principle to artificial intelligence, it becomes impossible to blame the manufacturer, because the harm could never have been foreseen. By the very nature of their design, artificially intelligent robots are products with intrinsically unforeseeable risks. Avant-garde, or unsupervised, machine learning is the ability of robots to learn and explore new ways of interacting in the absence of the designer’s instructions. The programmer could therefore always escape liability with the defense that, at the time the product was manufactured, the risks were unforeseeable. It is analogous to how a mother could never foresee, before her child is born, whether that child will one day become a murderer.
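
The unforeseeability point can be illustrated with a minimal sketch of a learning agent (a toy Q-learning loop on a five-cell track; the environment, rewards, and seeds are illustrative assumptions, not drawn from any real vehicle or product). The designer writes only the rule for learning; the policy that emerges depends on the agent’s own, partly random, experience, which is precisely what the manufacturer cannot foresee.

```python
import random

# Toy sketch of unforeseeable learned behaviour: a Q-learning agent on a
# five-cell track (all values here are illustrative assumptions).
ACTIONS = (-1, +1)                          # move left or move right
GOAL, HAZARD, START, CELLS = 4, 0, 2, 5     # reward at cell 4, penalty at 0

def train(seed, episodes=30, alpha=0.5, gamma=0.9, eps=0.3):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(CELLS) for a in ACTIONS}
    for _ in range(episodes):
        s = START
        while s not in (GOAL, HAZARD):
            # The designer specifies only this update rule; the behaviour
            # emerges from the agent's own (partly random) experience.
            a = rng.choice(ACTIONS) if rng.random() < eps else \
                max(ACTIONS, key=lambda x: q[(s, x)])
            s2 = min(max(s + a, 0), CELLS - 1)
            r = 1.0 if s2 == GOAL else -1.0 if s2 == HAZARD else 0.0
            best_next = max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    # The learned policy was never written down by the programmer:
    return {s: max(ACTIONS, key=lambda x: q[(s, x)]) for s in range(CELLS)}

# With limited experience, identical code run twice can settle on different
# behaviour; the designer chose how to learn, not what would be learned.
print(train(seed=1))
print(train(seed=2))
```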

AI products manufactured by Google.

This gap brings us to the next point of the argument. The concept of general intelligence and its intrinsic unpredictability is most relevant to autonomous, thoughtful, and intentional acts, which are the very definition of being human. It is true that artificially intelligent robots are something our current legal system has never encountered before: they are neither property nor persons. However, a similar difficulty existed in English law in determining the status of ‘quasi-persons’. Take the example of the Romans: foreigners were not protected by Roman law and were not permitted full ownership of anything. Similarly, children and women under Roman law had a lesser legal status. The Romans also held slaves, whose legal position bears remarkable similarities to what one could argue will become the governing law for artificially intelligent robots. Slaves were considered lesser beings; however, the jurisprudence behind the law of slavery was astonishing. In contemporary terms, the status of the slave was equivalent to that of an agent: the slave functioned and carried out business on behalf of his master. Yet the Romans acknowledged that slaves did not always follow the expectations of the master, and that it would therefore be unjust to hold the master responsible for every misconduct of the slave.[6] Slaves were all human beings, and the Romans understood that they could be held responsible for their own actions as well. Hence, although the law did not confer rights on slaves, it was willing to impose duties on them; rights and duties are, after all, two sides of the same coin. The Roman law of slavery thus provides helpful criteria through which we can approach artificial personhood, that is, granting legal personality to AI.

In the twenty-first century, there exists a “rule of law” in our legal system that guarantees equal protection for all. However, that is not entirely the case: there is a difference between ‘equal protection’ and ‘equal content’ of the law. The former is procedural, the latter substantive. English law does neglect the latter, though with good reason, as English law has its own equivalents of quasi-persons or ‘legal slaves’. For instance, the law once presumed that “a child not less than 10 but under 14 does not possess the capability to commit a crime”. That presumption has since been abolished, but its rationale remains: we cannot hold an individual with diminished moral responsibility liable in the same way we hold a normal human being liable. The concept of holding people liable for their actions stems from the hallowed idea of autonomy and personhood. Similarly, individuals who are insane, have a mental disorder, or are temporarily impaired have diminished responsibility. It is a defense which, if pleaded, could exonerate the defendant, on the rationale that such individuals do not possess the mental capacity required to commit a crime. This makes such individuals ‘quasi-persons’: they are human beings, yet they do not bear the same legal responsibilities as a ‘normal human being’. If the idea of legal personality is already so malleable in the current legal system, it can be argued that artificially intelligent robots could be given personhood as ‘quasi-persons’. Artificial intelligence uses neural networks to arrive at solutions much as humans do, and the terms ‘normal human’ and ‘legal personality’ are themselves fictions of our own creation. Granting artificial intelligence legal personality could serve as a valuable firewall between current legal persons and the harm that artificial intelligence could cause.[7]

A way forward is to give AI personhood, which would allow machines to be viewed as independent persons under the law. This would also resolve the problem of vicarious liability since, as mentioned earlier, the manufacturer could always be absolved of liability through the defense of unforeseeability. If a negligence suit could be brought against the AI itself, the AI system would be required to carry insurance; in such an instance, the AI system would act as a quasi-juridical person. Alternatively, there exists the theory of common enterprise liability, under which, without giving personhood to AI, all the people responsible for the use and implementation of the AI system would be held jointly liable. The author concludes that granting artificial personhood to artificially intelligent robots would establish a new regime of liability for AI.

[1] Woodrow Barfield, ‘Liability for Autonomous and Artificially Intelligent Robots’ (2018) 9 Paladyn, Journal of Behavioral Robotics.

[2] Seongjo Ahn, ‘Artificial Intelligence and Criminal Liability’ (2017) 20 Korean Journal of Legal Philosophy.

[3] Barfield (n 1).

[4] Hannah Sullivan and Scott Schweikart, ‘Are Current Tort Liability Doctrines Adequate for Addressing Injury Caused by AI?’ (Journal of Ethics | American Medical Association, 2019) <https://journalofethics.ama-assn.org/article/are-current-tort-liability-doctrines-adequate-addressing-injury-caused-ai/2019-02> accessed 17 November 2019.

[5] Chris Temple, ‘AI-Driven Decision-Making May Expose Organizations to Significant Liability Risk’ (Corporate Compliance Insights, 2019) <https://www.corporatecomplianceinsights.com/ai-liability-risk/> accessed 17 November 2019.

[6] Edgar S Shumway, ‘Freedom and Slavery in Roman Law’ (1901) 49 The American Law Register (1898–1907).

[7] Jacob Turner, Robot Rules (Springer International Publishing 2019).
