Law & Robots
Research projects in the emerging field of “Law & Robots” address the various problems resulting from the deployment of intelligent agents in all areas of life.
Legal scholars initially responded rather slowly to the challenges of digitalization and the new generation of “intelligent agents”. However, the significant legal risks attached to the production and use of robots – and the risk that unresolved questions will hamper further advancement – are triggering more and more legal research in the area.
With the emergence of a vast variety of robots – ranging from software agents on the internet to robotic cars – the public has become aware of further problems. Intelligent agents automatically generate and store data, enabling them to learn by identifying and deciphering patterns, but also allowing them to screen the activity of any user and of ancillary persons. It is still unclear who ultimately has the authority to determine how such data may be used. Moreover, the embedding of smart technology in all areas of life carries the risk of unauthorized persons manipulating our environment to their advantage – in an Internet of Things, hacking poses a much bigger problem than in a traditional environment. And further legal problems are emerging as smart devices interact increasingly with humans.
Researchers at Basel’s law faculty have analysed such problems under the umbrella of NFP 75 “Big Data”. For further information see:
Overall, the robot’s ability to gather huge amounts of data, process it, search for patterns to which it reacts, and subsequently learn by analysing the response together with other information opens doors to innovation. But it also raises basic questions with regard to liability, privacy and safety, and fundamentally requires a legal positioning of robots, which are increasingly perceived in relation to humans even though their mode of operation is purely mechanical. Even in legal debate, some scholars advocate a place for robots somewhere between man and machine. Such reasoning appears preposterous at first sight; it indicates, however, the depth of the problems, even when looking only at the question of liability.

The complex structure needed to make a robot function has triggered a lively debate about legal responsibility should a smart device – such as a robotic car – cause damage, for example by running into a group of children. Who is to be held liable? The machine (as an entity with assets), the human behind the machine (the producer or user), or nobody (because we all have to carry the risk of innovation)? The lack of a clear answer as to who is accountable when a robot causes a wrong hampers innovation. This became obvious when the car industry presented semi-automated cars but held off on fully autonomous ones.

As important as problems of liability are, they are only one of the legal issues arising with the rise of the robots. Others include: the robot’s inevitable capacity to automatically accumulate data on people’s (and other robots’) actions and what this means for our understanding of privacy; the vulnerability of robots (and, simultaneously, of the humans associated with them) to hackers; and the social changes likely to come in the near future when humans interact with efficient machines that lack any need for endearment or self-determination and act without an ethical code.
All these issues give rise to new questions, which are partly connected to research already conducted at the Faculty of Law and partly touch on new topics.
Topics for Legal Research
Scholars at the law faculty research the future legal challenges that arise when robots interact with humans. Part of the research programme is an SNF-funded project exploring “Human-Robot Interaction: A Digital Shift in Law and its Narratives? Legal Blame, Criminal Law, and Procedure”. Robots are on the rise: increasingly, we cooperate with standalone machines that help us with our chores, such as automated vacuum cleaners or lawn mowers, as well as with programs integrated into everyday objects, like driver-assistance systems in modern cars. These robots share responsibility for a task and, at the same time, gather and process information in order to act autonomously, based on the processing of large amounts of data and on machine-learning techniques, including Artificial Intelligence (AI).
A PhD project looks at
Smart safety devices in modern cars as well as intelligent tools in surgery provide striking examples of human-robot collaboration that foreshadow a number of specific effects on penal law. Within the domain of substantive criminal law, the demarcation between a negligent act and an intentional crime could change entirely if mens rea could be inferred from a person’s response to (ro)bot advice. For instance, if a drowsiness-detection system alerts a driver to take a break but the driver continues and eventually causes an accident, courts may be inclined to infer negligence or even intent from the driver’s disregard of the advice. Criminal procedure, however, might not grant human drivers an adequate defense.
Another PhD project analyses
In criminal proceedings, human testimony – most notably a defendant’s submissions – will carry less weight, while machine-based evidence will gain traction. This includes evaluative data from machine systems that collect information via sensors and can make their own assessment of a situation based on this information. The operations of such systems are highly complex, which leads to various problems that need to be clarified if evaluative data is to be used as evidence in a criminal trial – especially since the defense must be allowed to thoroughly examine such “robot evidence”. For example, if a drowsiness-detection system assesses a driver as sleepy, how can the defense challenge this machine evidence when it is presented against the driver in court? Finally, verdicts will have to explain how machine-generated data was weighed against human statements to justify an acquittal or conviction.
For further information see: