MACHINES ARE TAKING OVER — ARE WE READY?

Citation: (2021) 33 SAcLJ 10024

Katrin Nyman METCALF, BA, LLM, PhD (Uppsala University); Adjunct Professor of Communications Law, Tallinn University of Technology.

Tanel KERIKMÄE, BA (Tartu), LLM, LL.Lic (Helsinki University), PhD (Tallinn University); Professor of European Legal Policy and Law & Technology, Tallinn University of Technology.

"Information Technology is a science at the service of man, it must not infringe on human identity, the private life of the individual, human rights, individual or collective public freedoms."1

Published: 1 December 2021
I. Introduction

1 Most of us have seen films in which robots take over the world, or at least dominate social relations.2 It is easy to have these images in mind when considering whether there should be legal rules to govern the activities of any form of machine, including robots that are programmed to carry out a complex series of actions automatically. While some machines are quite straightforward and simply execute the commands they were given, others, including those that deploy machine learning, may act beyond the scope of their original commands. This is a popular conception of artificial intelligence ("AI"). The authors would like to draw attention to the fact that, despite a large amount of research and writing on AI, a globally recognised definition of the research object remains a work in progress. In an interdisciplinary context, involving different branches of the social sciences (eg, law) as well as information technology, it is especially difficult to agree on a common definition. Instead, we find various discipline-based perceptions of what exactly it is that is being analysed. In some contexts the term "machine learning" is preferred as more exact, but the legal debate tends to use AI, albeit without a clear definition.

2 The European Union ("EU") Commission's communication on AI states: "Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions — with some degree of autonomy — to achieve specific goals."3 The EU High Level Working Group on AI has further developed this definition.4 A 2020 White Paper of the Commission5 states that a key issue in creating a specific regulatory framework for AI is determining its scope of application and arriving at a clear definition; the definition quoted above is to be seen as a starting point. The White Paper also states: "Simply put, AI is a collection of technologies that combine data, algorithms and computing power."6 It stresses that in any new legal instrument to be developed, the definition of AI will need to be sufficiently flexible to accommodate technical progress while being precise enough to provide the necessary legal certainty. The EU "Ethics Guidelines for Trustworthy AI"7 focuses on the ethical requirements of AI systems without clearly defining the object itself. A recent attempt in a different forum to create global standards on AI technologies is the United Nations Educational, Scientific and Cultural Organization/International Research Centre on Artificial Intelligence draft text of a recommendation on the ethics of AI. In this context too, the question of a definition is still debated.8

3 This article focuses on different aspects of machine learning, mainly those that fall under the imperfectly defined term of AI. The rapid increase in applications and deployments of machine learning makes the topic of regulating such activities appear both attractive and urgent. However, it is perhaps less exciting to learn that "thinking machines" have existed in various contexts for some time and fit quite well into existing legal rules: we have autopilots in airplanes, driverless trains and many kinds of industrial robots. Incremental change, with small steps toward more automation (as with cars, for example), is more common than a giant leap from completely "human-made" decisions to full automation. Where change is gradual, legal regulation normally manages to keep more or less in step with technical developments through regular updates of rules or new interpretations of them. Yet even if the question can be de-dramatised in this way, the pace of technological change is accelerating, and the aspects of our daily lives in which automation, including AI, plays a role are becoming ever more diverse. There are constantly new questions regarding how the law should deal with this new reality.9

4 In this article, the question of the legal approach to the use of AI is divided into two main parts: the specific questions about liability and the protection of people in their roles as consumers, patients, travellers and so on; and the fundamental, ethical question of whether AI should be allowed to do everything that it is technically capable of doing. These aspects are clearly linked. The second, fundamental, question is primarily one of policy rather than of law. Lawyers can and should nevertheless make an important contribution to the debate by pointing out if and how fundamental legal values, such as the protection of the rule of law and human rights, may be affected by AI. Furthermore, if policy decisions are adopted that set limits on what AI can or should do, appropriate legal instruments will be needed to implement the chosen direction properly. Lawyers will interpret and apply these, making sure that the restrictions are necessary and proportionate. Still, the authors would like to highlight that the basic and very important decision on whether AI should be limited is not a question of applying law, nor of which law provides the answers as to how and why it should be done. It is a philosophical and ethical question.

5 This article deals with the practical, the fundamental and the policy-oriented questions of whether, why and how AI should be limited by law. What does current law say about how AI fits into our societies? Are there reasons not to let machines do everything they can do? If so, what are these reasons and what are they based on? How can limitations be made and implemented, if they can be effectively made at all? To give a fuller picture of the topic, some examples of "everyday" rule-making for AI will be mentioned, drawing on Estonia to make the topic practical rather than just theoretical. Estonia is among the world leaders in e-governance and has thus studied the use of technology in society for several decades. Estonian e-governance does not rely primarily on AI, but in a system of governance that utilises information and communications technology ("ICT"), increasing the amount of automated decision-making may be a logical next step. Some practical legal changes in Estonia, which in many instances resemble those in other countries, will be considered before the authors proceed to the more fundamental discussion of the role of AI in society.

II. Mapping the need for change
A. Basic areas and questions of law

6 To accommodate the expanding use of AI, several different legislative steps are necessary.10 These include:

(a) analysing the legal system in order to identify and, if required, remove norms that do not fit with the use of AI (or other technology) and thus per se prevent the use of technology and hinder innovation;

(b) based on the above-mentioned analysis, ensuring that the law in force adequately reflects the use of AI and that the use of AI is defined with sufficient clarity as far as responsibility is concerned; and

(c) drafting new laws or other rules and — if necessary — restricting the development and use of AI.

7 The multitude and variety of ways in which technology can be used means that very many different areas of law should be evaluated. These include acts regulating legal procedures (administrative or criminal), sector-specific laws for different areas (transport, health care, different industries, etc), consumer protection law, criminal law, contract law, and so on. Technology should not be the main determining factor for what kind of regulation is needed and suitable. As for the first point above, on removing hindering norms, it is evidently necessary to evaluate the reason for those norms and whether they are still needed. If not, they may have to be amended, or entirely new norms with a similar aim may need to be passed. Technological advances should not be prevented, but if the norms provide important legal protection, that protection should nevertheless remain. One example could be the requirement of notarised signatures in certain contexts, which underlines the importance of a transaction and guarantees not only that the right person has signed a document but also that he or she has understood it. Perhaps in such a case an automated decision is not suitable, even if this blocks innovation towards using only digital signatures or even, in the future, no "man-made" signatures at all. In other cases, a requirement of a specific form that hinders a digital or automatic process may exist only because of tradition and serve no specific purpose, in which case the provision can be abolished. Examples here include rules on the form of documents, such as paper of a particular colour or the use of certain colours of ink.

8 In general, the required careful overview and analysis of many different areas of law should lead to the conclusion that there is no need for much specialised legislation or regulation for "digital" matters or AI.11 The most important considerations should be what the rules are made for and what situation is being regulated: the rules should be as technology-neutral as possible. Technology should be seen primarily as a tool. This tool may mean that some special rules are needed, but it should not change the fundamental principles of the legal system.

9 Many of the special questions raised by new technologies are of a horizontal nature, meaning that they should not be regulated for specific sectors or issues but fit better in legislation that applies across many different areas. Legislation on digital identities that can be used for different legal transactions is one example. Another general question highlighted by the advancement of AI is whether there is an unavoidable need to require the direct participation of a natural...
