Robot ethics is the application of ethics to robotics. It deals with the development, manufacture and use of robots. As robots become more prevalent in human life and are recognized not as mere tools but as agents, companions and avatars, the question arises of how to assess the ethical challenges they pose to humans. The first international symposium on robot ethics was held in 2004, where the term “roboethics” was coined. Even before that, the topic had been treated in detail in science fiction, for example in the form of the laws of robotics.
The term “roboethics” denotes an approach that seeks to determine the appropriate behavior human beings should adopt in their relations with robots and other artificial agents. It is to be distinguished from a “robot ethic”, whose purpose is to teach good and evil to robots or, in other words, to develop moral rules for artificial agents. The partial autonomy of robots is said to justify this distinction between an ethics of artificial intelligence and an ethics of algorithms. Before interacting with the world, AIs must be morally programmed in anticipation of any new situations they may face. Robots can be evaluated ethically in different ways:
- Roboethics is comparable to the ethics of any other mechanical science.
- Robots are assumed to have an intrinsic ethical dimension because, as symbolic products of humans, they can expand and improve humans' capacity to act ethically.
- Robots will not only possess consciousness but will transcend human dimensions in morality and intelligence.
Main Fields of Inquiry
As roboethics is human-centered, it must respect the widely accepted principles of human rights:
- Human dignity and human rights.
- Equality, justice and equity.
- Respect for cultural diversity and pluralism.
- Non-discrimination and non-stigmatization.
- Autonomy and individual responsibility.
- Informed consent.
- Protection of personal information.
- Social responsibility.
Law and Ethics of Artificial Intelligence
Parallel to the study of robotics from an ethical perspective, a jurisprudential view of the specific issues involved in the use of robots is developing. The main focus is on civil and criminal liability, as well as the shifts in the attribution of liability made necessary by an increased degree of autonomy. The use of robots to perform state tasks in areas particularly sensitive to fundamental rights, for example in prisons, also raises special public-law issues, such as data protection, that go beyond the requirements for product safety and approval criteria. In the future, the use of robots will confront society with a multitude of previously unsolved legal problems. This discussion is most likely to be conducted in the context of autonomous cars, which may also have to make life-and-death decisions.
In parallel with robot ethics, there are also approaches to ethical principles for the development, use, research, design and dissemination of artificial intelligence. These can relate to the current use of algorithms, for example in economic processes, or to the development of an advanced artificial “superintelligence”, including systems without any robotic embodiment or “bots”.