Law & Robots
The research project “Law & Robots” covers various problems resulting from the employment of intelligent agents in all areas of life, a development currently driving the so-called digital revolution.
In March 2014, when the magazine The Economist stated that the long-anticipated “rise of the robots” might finally take effect, owing to economic and technological development as well as increased investment and imagination, the same could not be said about legal research on the topic. Legal scholars initially responded rather slowly to the challenges posed by the new generation of intelligent or autonomous agents. However, the significant legal risks attached to the production and use of robots, and the possibility that unresolved problems will hamper further advancement, are triggering more and more legal research in the area.
With the emergence of a vast variety of robots – ranging from software agents on the internet to robot cars – the public became aware of further problems: intelligent agents automatically generate and save data, enabling them to learn by identifying and deciphering patterns, but also simultaneously screening the activity of any user and/or ancillary persons. It is as yet unclear who ultimately has the authority to determine the use of these data. Moreover, the embedding of smart technology in all areas of life carries the risk of unauthorized persons operating our environment to their advantage – in an internet of things, hacking poses a much bigger problem than in a traditional environment. And more legal problems are starting to emerge as smart machines increasingly interact with humans. Many have already triggered important debates about basic concepts of law.
Overall, the robot’s ability to gather huge amounts of data, process them and search for patterns to react to, and subsequently to learn by analyzing the response together with other information, opens doors to innovation. But it also raises basic questions with regard to liability, privacy and safety, and fundamentally requires a legal positioning of robots, which are increasingly perceived in relation to humans even though their mode of operation is purely mechanical. Even in legal debate, some scholars advocate a place for robots somewhere between man and machine. Such reasoning appears preposterous at first sight. It indicates, however, the depth of the problems, even when looking only at the question of liability. The complex structure needed to make a robot function has triggered a lively debate about legal responsibility should a smart device – such as a robot car – cause damage, for example by running into a group of children. Who is to be held liable? The machine (as an entity with assets), the man behind the machine (the producer or user), or nobody (as we all have to carry the risk of innovation)? The lack of a clear answer as to who is to be held accountable when a robot causes a wrong hampers innovation. This became obvious when the car industry presented semi-automated cars but held off on autonomous ones. As important as problems of liability are, they are only one of the legal problems arising with the rise of the robots. Others include: the robot’s inevitable capacity to automatically accumulate data on people’s (and other robots’) actions, and what this means for our understanding of privacy; the vulnerability of robots (and, simultaneously, of the associated humans) to hackers; and the social changes likely to come in the near future when man interacts with efficient machines that lack any need for affection or self-determination and act without an ethical code.
All these issues give rise to new questions, which are partly connected to research already conducted at the Faculty of Law and partly touch on new topics.
Topics for Legal Research
At the moment, five sub-topics appear to be of special interest for legal research:
First, problems of accountability and liability emerging from the employment of robots. Most prominently, autonomous vehicles have triggered a debate on civil and criminal responsibility should robot cars cause damage. But the same issues arise with regard to intelligent environments (like smart streets) or the use of robots in hospitals, for instance to perform surgery. The question is whether responsibility should be allocated only to the people behind the machine or whether the machine itself could become a subject of liability. Liability is a crucial concept for any legal system, and instructive answers can be found in both past and present. In Roman law, for instance, a master was liable for the acts of his slaves depending on a complex mix of parameters. In today’s tort law, one could think about establishing a «robot asset» that covers possible damages. In criminal law, one could draw parallels to corporate liability, which also aims at responsibility beyond individual conduct. With regard to these problems, the Faculty of Law plans to initiate interdisciplinary research with either the Faculty of Medicine or private business.
Secondly, the employment of robots raises issues of property rights in various ways. On the one hand, the deployment of robots in an urban environment or in nature may affect the well-being of others. This is evident when drones – used, for instance, for commercial deliveries – fly over residential buildings or resorts. On the other hand, the data generated through the use of robots raise the question: Who owns that information? Who has the right to determine the use of the data automatically (or deliberately) generated by intelligent agents once activated? This question arises with the employment of all intelligent agents, which necessarily generate data concerning their users and other ancillary persons, be they robot cars, intelligent streets, smart glasses, medical devices, etc. The debate often centers on data protection issues but also touches on property rights, personality rights (legal questions concerning human dignity) and many other areas of law: the processing of so-called “Big Data” produces information valued by different players, e.g. advertising companies aiming at personalized advertising, police and law enforcement authorities compiling profiles, or insurance companies trying to sort out liabilities after a car accident – or to reconstruct a person’s movements in a certain situation. In this field, the Faculty of Law wishes to involve data protection authorities and internet specialists.
Thirdly, the use of intelligent agents – especially when they are embedded in a larger environment such as robot cars, smart houses or, in the future, the internet of things – raises concerns about cyber safety and, again, criminal liability, particularly where an environment is prone to infringements because of ill-protected embedded smart devices. With the advancement of semi-automated cars, for instance, computer scientists have started to propose cyber safety ratings; and with regard to the development of the internet of things, they have proposed treating humans as data clusters for which a proper level of security must be provided. One of the crucial questions is again responsibility: Is it the duty of the producer to provide a security framework easily handled by users, or is it the user’s task to protect the environment for which he or she is legally responsible? In this area, the Faculty of Law plans to cooperate with the Faculty of Economics as well as with the Department of Computer Sciences.
Fourthly, the question arises as to whether robots could be used in legal practice. Intelligent agents and their machine-learning abilities are currently employed for monitoring cash flows with the goal of detecting money laundering, or for document review in U.S. law offices (e.g. Diligence Engine and Ebrevia). Robots can be used in other fields relevant for practising law in Europe: in any field in which the recognition of certain patterns in data – whether in smaller clusters or in Big Data – is of importance, robots may help to detect legally relevant facts. Reading robots, for instance, could identify patterns of corruption that lead to further investigations. In this field, the Faculty of Law plans to cooperate with the Basel Institute on Governance as well as with computer scientists.
And, last but not least, a discussion has arisen regarding the social and legal consequences of human-robot interaction, as smart technology inherently lacks the human features that underpin autonomous rule-finding – such as emotions, intuition and empathy – and thus also lacks an ethical code beyond programmed rules. A striking example has been the use of armed drones in warfare, which has provoked a controversial debate about accountability for decision-making in armed conflicts. The use of robot cars likewise raises the as yet unsolved “trolley problem” of whom to save when an imminent accident will be fatal either for the driver of the car or for bystanders. Furthermore, robots in human disguise pose additional ethical and legal issues, as the experimental use of therapeutic robots catering to the emotional needs of patients suffering from Alzheimer’s disease or other forms of dementia has shown. This has generated a discussion about the ethical dimension of using robots in situations where a human traditionally interacts with another human. In addition, it also raises legal questions, especially regarding human rights and human dignity. To address these issues, the Faculty of Law plans to cooperate with experts from various fields, including medical and bioethics.