Legal Business

Sponsored briefing: Artificial intelligence challenged by law and regulations: an odd legislative blank

Afrique Advisors discuss both the challenges and opportunities that AI brings to the legal system in Morocco

Artificial intelligence (AI) and robotics are rapidly growing fields that have the potential to transform various industries and sectors, including law. They have become the most outstanding technological trends of our century.

The ongoing process of digital transformation is being accomplished in part with the use of AI, an interdisciplinary technology that aims to use large data sets (Big Data), suitable computing power, and specific analytical and decision-making procedures in order to enable computers to accomplish tasks that approximate human abilities and even exceed them in certain aspects.

We are now 70 years beyond their inception [1], and the progress has been tremendous: we are no longer dealing with simple programs that can interact with humans or help treat minor ailments. We are now approaching so-called “strong AI”, capable of autonomous thinking, adaptation and decision-making in the same way a human being would.

The rapid growth in the use and development of these technologies is thus bringing new challenges that society sometimes struggles to cope with. This situation calls for a different legal treatment of these technologies, as robots and AI are likely to interact with humans in an ever-wider range of areas. Morocco, like many other countries, is grappling with the legal implications of AI and the need to regulate its use.

Undoubtedly, statutory law cannot escape the evolution that AI and robotics have brought about, and they will certainly have significant consequences for its classical notions (liability, property rights, intellectual ownership, data protection, etc.). Law-makers now endeavour to understand these new systems in their relationship with the human being [2]; however, that relationship is sometimes ambiguous, as AI systems increasingly seek autonomy from the human being in order to shape their own identity in a symbiotic manner.

This situation underlines the need to consider the legal status of artificial intelligence, an emerging issue that should concern public policy.

Indeed, the broadening of artificial intelligence’s capabilities, and of the purposes for which it might be used, brings not only opportunity but also potential danger. The following is a short assessment of the regulatory and legal challenges posed by AI.

I- AI and robotics: A blurred legal status

Given the current progress of AI and robotics technologies, dominated by “machine learning” and “deep learning” techniques, their capacity to learn autonomously from their own experience, and their interactions with the environment in unique and unpredictable ways, one may ask whether the principles governing the law of persons and the law of property are sufficient to ascertain the status of AI within the summa divisio of the law.

Generally speaking, the summa divisio of the law reflects a binary vision. On one side are persons: the subjects of law, who have legal personality. Persons include natural persons (human beings) and legal persons (states, corporations, international organisations, NGOs, etc.). On the other side is property, which has no legal personality and may be appropriated by the persons entitled to it. Anything that is not a person is legally property. However, this division does not fit robotics and artificial intelligence neatly.

As the result of a programming activity that transcribes coded information, AI and robotics are, above all, creations of the mind. As such, they are by definition intangible assets. Hence, the recognition by the World Intellectual Property Organization (WIPO) that patents relating to AI may be filed reveals their intellectual property nature.

According to the WIPO Technology Trends Report (February 2019), since “the 1960s, inventors and scientists have filed patent applications for nearly 340,000 inventions pertaining to artificial intelligence”. Such statistics seem to confirm the legal status of these intelligent entities as the object, rather than the subject, of the law [3].

However, this position has been called into question following a lengthy battle initiated by Stephen Thaler [4] before various national patent offices around the world.

In 2018, Thaler filed two European patent applications [5], both of which had the particularity of designating as inventor an AI algorithm named by its creator “DABUS” [6].

The European Patent Office (EPO) refused to recognise an intelligent machine as an inventor on the grounds that it lacks legal personality [7], a position also upheld by the Intellectual Property Office of the United Kingdom and the Patent and Trademark Office of the United States. Nevertheless, the South African Patent Office and the Australian Federal Court [8] decided to grant this AI the status of inventor, thereby adopting a completely different position and turning established standards upside down.

This worldwide debate is a perfect illustration of the fact that AI is no longer just an end in itself, but rather a tool for creation – and sometimes the creator itself – capable of learning from the data it is fed and developing into an autonomous decision-maker beyond any human involvement. Indeed, creations generated by intelligent entities have become a widespread reality, and it has become difficult to distinguish between human creations and those produced by an artificial intelligence.

In the same vein, a well-known painting, “The Next Rembrandt”, was created by an AI that extracted the secrets of the Dutch painter from his existing works. Experts have stated that, had they seen the AI-created painting in a museum, they would have thought it was painted by Rembrandt himself [9].

Another example of AI being treated as equal to persons is the attribution of citizenship rights. In 2017, Saudi Arabia announced that the robot Sophia, who identifies herself as a woman, had been granted Saudi citizenship. In the same year, Japan granted a residence card to the chatbot Shibuya Mirai under a special regulation [10].

All these examples illustrate the evolution of AI from owned property into a subject that acts within the summa divisio – a reality that science and scientists acknowledge, but from which the legal realm remains quite distant.

II- Emerging AI: A new subject of tort liability

The preceding discussion shows that AI can make a crucial contribution to enhancing human capabilities, whether through the creations it generates or by carrying out functions that were previously the exclusive preserve of humans. The other side of the coin, however, is that these intelligent entities can also be involved in causing accidents or damage. For instance, one of Google’s self-driving cars has already caused an accident [11], and damage has also resulted from AI-assisted medical diagnosis (IBM’s Watson) [12].

Hence the need to consider the tort liability framework for damage caused by an AI or a robot, since their conduct may have implications from both contractual and extra-contractual liability perspectives.

In practice, these technologies involve many actors, such as the programmer, the data provider, the platform owner and the user. However, because users stand at the front line of the process, they are often the first to be held liable.

One could wonder whether such positioning is legitimate, particularly considering the development of certain autonomous and cognitive functionalities (such as the capacity to learn from experience or to make near-independent decisions), which make these robots more likely to be considered actors that interact with their environment and can significantly alter it [13].

In such a situation, the issue of legal liability for a damaging action by a robot is a key concern.

Scientists generally agree on classifying AI into two categories: soft AI, which merely imitates the pre-established behaviour that a human would have adopted in a given situation, and strong AI, which is endowed with a high degree of autonomy in its decision-making and which – thanks to the progress of cognitive science – resembles human behaviour in its most particular features.

As a matter of fact, intelligent entities based on soft AI technology do not raise any particular problem, insofar as they are considered merely tools that perform tasks or carry out operations according to the instructions of their programmer or user, and therefore correspond to the definition of “things” under positive law.

Consequently, the application of the liability for “things in possession”, embodied in Article 88 of the Moroccan Civil Code [14], which provides that “everyone must be liable for damage caused by things in their possession”, remains a suitable approach.

However, the notion of legal guardianship, based on the theory of risk management, raises further issues, since Moroccan law distinguishes between legal guardianship, which belongs to the owner of the thing, and ordinary material guardianship, which belongs to the person who has the power of direction and control at the time of the damage. In such a context, the notions of guardianship and risk management must undoubtedly be interpreted differently.

Regarding technologies based on so-called strong artificial intelligence, the issue becomes much more complicated, given their emerging autonomy and the immateriality and unpredictability of their actions, since they can cause damage without any control or influence by a human. Indeed, the solutions offered by the theory of risk management and the guardianship of things appear unable to establish any faulty contribution by a human.

It follows that the increasing autonomy of robots brings us back to the legal nature of these machines, which varies depending on their type. The more autonomous an intelligent machine is, the less it can be considered a “thing” under human control for whose damage a guardian must bear responsibility under the theory of the guardianship of things as conceptualised in Moroccan law.

The current statutory liability rules therefore seem no longer sufficient in this regard, and new policies and regulations are required to clarify both the legal nature of these entities and the liability regime of the various actors for the actions or omissions of a robot that cannot be attributed to a human factor.

In fact, these two issues of positive law – the legal status of intelligent entities and the liability regime applicable to the damage or injury they cause – are in all likelihood interrelated, insofar as each has an impact on the other and, indeed, on other legal fields, in particular intellectual property rights and the protection of personal data.

For now, there is no doubt that established law applies by default rather than by choice. Nevertheless, it must be supplemented by new and specific responses from the legislature, whether by creating appropriate regulations or by adapting and modulating existing provisions.

It should ultimately be recalled that, in the relationship between law and technology, it is technology that leads the process, as the eminent author La Pradelle once put it: “It is not the philosophers with their theories, nor the jurists with their definitions, but rather the engineers through their inventions and discoveries that establish the law and, above all, the progress of the law”.

The main challenge for legislators is therefore to devise an effective regulatory approach that combines the prevention of potential risks with the preservation of innovation and its progress.

Overall, AI presents both challenges and opportunities for the legal system in Morocco. While the lack of specific regulations may hinder the development of AI, it also provides an opportunity for the country to shape its legal framework in a way that encourages the responsible and ethical use of the technology. It is therefore crucial that policymakers in Morocco take a proactive approach to developing a legal framework that addresses the unique challenges and opportunities presented by AI.

Authors


Rabab Ezzahiri
Attorney at Law, Casablanca Bar Association and PhD Candidate


Maroua Alouaoui
Associate


  1. Chris Smith, “The history of artificial intelligence”, University of Washington, December 2006.
  2. Wolfgang Hoffmann-Riem, “Artificial Intelligence as a Challenge for Law and Regulation”, ResearchGate, January 2020.
  3. Ryan Abbott, “I Think, Therefore I Invent: Creative Computers and the Future of Patent Law”, 57 B.C. L. Rev. 1079 (2016).
  4. Philippe Schmitt, “Brevet DABUS et Intelligence artificielle : le 25 novembre 2019 n’est pas le jour de la singularité créative”, Village de la Justice, November 2019. Available at: https://www.village-justice.com/articles/brevet-dabus-intelligence-artificiellenovembre-2019-est-pasjour-singularite,33059.html (Last accessed on April 13, 2023).
  5. EP 18 275 163 and EP 18 275 174.
  6. Matthieu Objois and Lucas Robin, “Inventeurs IA : l’office européen des brevets remet les pendules à l’heure dans la décision DABUS”, Village de la Justice. Available at: https://www.village-justice.com/articles/inventeurs-office-europeen-des-brevetsremet-les-pendules-heure-dans-decision,33546.html (Last accessed on April 13, 2023).
  7. EPO decision rejecting two patent applications naming a machine as inventor on 28 January 2020. Available at: https://www.epo.org/news-events/news/2020/20200128.html (Last accessed on April 13, 2023).
  8. Federal Court of Australia, Thaler v Commissioner of Patents [2021] FCA 879.
  9. Andres Guadamuz, “L’intelligence artificielle et le droit d’auteur”, WIPO Magazine, October 2017. Available at: wipo.int (Last accessed on April 13, 2023).
  10. A. Atabekov and O. Yastrebov, “Legal Status of Artificial Intelligence Across Countries: Legislation on the Move”, European Research Studies Journal, Volume XXI, Issue 4, 2018.
  11. Phil LeBeau, “Google’s Self-Driving Car Caused an Accident, So What Now?”, CNBC, 29 February 2016. Available at: https://www.cnbc.com/2016/02/29/googles-self-driving-car-caused-an-accident-so-what-now.html (Last accessed on April 13, 2023).
  12. Alain Bensoussan and Jérémy Bensoussan, “IA, Robots et Droit”, Lexing – Technologies avancées & Droit : Théorie et Pratique, 2019, p. 139.
  13. Margaret A. Boden, “Computer Models of Creativity”, Association for the Advancement of Artificial Intelligence, 2009, p. 23.
  14. Moroccan Dahir forming the Code of Obligations and Contracts, 12 August 1913.
