Abstract
The aim of this paper is to suggest a framework for categorizing social robots with respect to four dimensions relevant to an ethical, legal and social evaluation. We argue that categorizing them in this way allows us to circumvent problematic evaluations of social robots that are often based on overly broad and abstract considerations. Rather than asking, for example, whether social robots are ethically good or bad in general, we propose that different configurations of the suggested dimensions (and combinations thereof) entail different paradigmatic challenges with respect to ethical, legal and social issues (ELSI). We therefore encourage practitioners to consider these paradigmatic challenges when designing social robots and to find creative design solutions.
1 Introduction
Soon, social robots will increasingly find their way into our households. In contrast to their counterparts – industrial robots – they will not be hidden behind protective fences but will interact and cooperate with their users. Such developments will come with many opportunities. In the near future, social robots might, for example, support the elderly in living independently. However, this also implies that these technologies will greatly impact our everyday lives. Therefore, they must be designed so as not to cause any physical or mental harm and instead to promote our values and contribute to the well-being of individuals and society in general. Thus, it is increasingly important to investigate the ethical, legal and social implications (ELSI) of social robots.
The aim of this paper is to suggest a framework for categorizing social robots with respect to four dimensions relevant to an ethical, legal and social evaluation. We argue that categorizing them in this way allows us to circumvent problematic evaluations of social robots that are often based on overly broad and abstract considerations. Rather than asking, for example, whether social robots are ethically good or bad in general, we propose that different configurations of the suggested dimensions (and combinations thereof) entail different paradigmatic challenges with respect to ethical, legal and social issues. We therefore encourage practitioners to consider these paradigmatic challenges when designing social robots and to find creative design solutions.
The paper is divided into the following sections. In the first section, we introduce the general idea of categorizing social robots around dimensions that are relevant in an evaluative context with respect to ethical, legal and social implications. Thereafter, we explain each dimension and explore how different configurations within each dimension can be considered different ELSI relevant “character traits.” Finally, we explore paradigmatic challenges that are associated with specific configurations in each dimension with respect to ethical, legal and social issues and aim to encourage practitioners to find creative solutions in their concrete practices of designing social robots.
2 Categories to Distinguish Different Types of Social Robots
According to Breazeal, social robots are defined by being “designed to interact with people in a socio-emotional way during interpersonal interaction” [4].
Various suggestions regarding how to categorize social robots can be found in the literature on human–robot interaction (HRI). The categorization we present below aims at bringing researchers within the ELSI field together with professionals who work in engineering and robot design. We believe that our categorization facilitates such an exchange, because it relates ethical, legal and social issues to designable features of the specific robot.
In a recent report, the Office of Technology Assessment at the German Bundestag, for example, distinguishes three categories: (1) social robots, (2) aid robots for nursing and (3) robots that help to maintain mobility [17]. Yet, these categories can sometimes be too narrow. The robot Pepper, for example, can be used as a social robot as well as an aid robot for nursing [33], [46]. The report further differentiates between socially interactive and socially assistive robots. While socially interactive robots have been developed for interaction between humans and robots, socially assistive robots support communication and serve to promote human–human interaction [11], [22].
Figure: Examples of social robots. Pepper (SoftBank Robotics), NAO (SoftBank Robotics) and MiRo (Consequential Robotics). Image sources: https://commons.wikimedia.org/wiki/File:Pepper_the_Robot.jpg, https://commons.wikimedia.org/wiki/File:NAO_Robot_.jpg, http://consequentialrobotics.com/.
Other categorizations focus on the form of interaction between robot and human [32] or on the specific tasks for which the robot is designed [15]. Our aim is not to reject these types of categorizations but rather to add a framework to categorize different robots to help in evaluating their ethical, legal and social implications. Furthermore, we do not claim that this method of categorization can exhaustively identify every ELSI related challenge regarding social robots but rather that our manner of conceptualizing social robots emphasizes ELSI aspects as part of the robot design process.
To identify ELSI relevant dimensions, we have analyzed sources and materials from eight different research projects on social robotics [5]. By closely examining the outcomes of ELSI workshops and analyzing the ELSI challenges that the projects themselves have identified, we were able to group the various problems and challenges into four dimensions. The method used here is inspired by the general methodology of grounded theory [43]. We first characterized the ELSI challenges with which the projects must deal and extracted abstract categories for similar problems. We then conceptualized these categories as gradable dimensions and used this framework to reexamine each project and characterize it according to its specific values in each dimension.
In the following, the dimensions are elucidated, and different paradigmatic challenges with respect to ethical, participatory and legal topics are discussed. We do not discuss ELSI challenges specific to the projects upon which our material is based but instead consider general challenges and problems associated with certain design features.
Figure: Method used for categorization.
2.1 Categories as Gradable Dimensions
We consider the following dimensions of social robots to be akin to character traits. This means that each dimension can be attributed an individual degree. Like a person who might be highly extroverted but also possesses further character traits (such as openness, etc.), we conceptualize social robots as possessing different values (or degrees) for different dimensions. This means that a social robot is, for example, not just an emotional robot per se but has a certain degree of “emotionality” (ranging from 0 to 100). Likewise, social robots are neither wholly autonomous nor completely non-autonomous but rather have a certain degree of autonomy [26].
In the following sections, we introduce four distinct dimensions (autonomy, emotionality, sociality and impact on competences) to categorize different types of social robots with respect to ELSI relevant features. By considering paradigmatic challenges that can be associated with specific values in each dimension (for example, “high autonomy”), we wish to encourage designers to adopt ELSI considerations as an important part of the design process.
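To make this framework concrete, the following sketch represents a robot as a profile of graded values and maps high values to the paradigmatic challenges discussed in the subsequent sections. It is a purely illustrative assumption on our part: the class name, fields and the threshold of 70 are not drawn from any of the analyzed projects.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class RobotProfile:
    """Hypothetical ELSI profile: each dimension is a degree from 0 to 100."""
    autonomy: int               # 0 = mere tool, 100 = acts on its own plans
    active_emotionality: int    # ability to display emotions and trigger empathy
    passive_emotionality: int   # ability to read the user's emotions
    sociality: int              # degree of simulated social agency
    competence_impact: int      # extent to which competences are enhanced, compensated or replaced

    def paradigmatic_challenges(self) -> list[str]:
        """Map high values (here: above an arbitrary threshold of 70) to challenges discussed below."""
        challenges = []
        if self.autonomy > 70:
            challenges.append("needs the capacity to handle morally sensitive situations")
        if self.active_emotionality > 70:
            challenges.append("risk of emotional manipulation of users")
        if self.passive_emotionality > 70:
            challenges.append("privacy of emotional states (impact assessment, privacy by default)")
        if self.sociality > 70:
            challenges.append("may reshape patterns of human-human interaction")
        if self.competence_impact > 70:
            challenges.append("risk of dependence and unlearning of competences")
        return challenges

# Example: a hypothetical, highly autonomous and emotionally expressive care robot
care_robot = RobotProfile(autonomy=80, active_emotionality=75,
                          passive_emotionality=60, sociality=70,
                          competence_impact=40)
print(care_robot.paradigmatic_challenges())
```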
2.1.1 Degree of Autonomy
The degree of autonomy describes who is in control with respect to interactions between robot and user. Whereas a degree of autonomy of zero would represent a robot that is a mere tool and does not have the capacity to act without direct input from the user for each movement, a degree of 100 would represent a robot that makes its own decisions, follows its own plans and can begin the interaction by itself. Autonomy as a gradable dimension was introduced into the discussion of robot ethics by Floridi and Sanders [12] and developed further by Misselhorn [27]. This basal form of autonomy consists of (1) being able to interact with the environment, (2) having a minimal capacity to adapt and (3) being able to change internal states without external stimulus (this concept of autonomy should therefore not be confused with the much more demanding Kantian notion of autonomy).
Figure: Dimensions relevant to ethical, social and legal implications.
An example of low autonomy would be a typical cleaning robot. Here, the user specifies the days of the week, time frames and locations in which the vacuum cleaner should operate. A somewhat higher level of autonomy would be exhibited, for example, by a care robot that observes the user, creates movement and user profiles and uses these to interact with the user.
A high degree of autonomy could therefore go hand in hand with the creation of a user profile by the robot, whereas a robot that is capable of learning by itself and can be used universally would exemplify the highest degree of autonomy [6].
Ethical Issues
Autonomy is a central category when it comes to discussing the ethical implications of robots. In general, one could slightly alter the well-known phrase to “with great autonomy comes great responsibility,” since the more autonomous a robot is, the more likely it is to have to make morally relevant decisions, such as choosing between helping a user complete an action that causes harm and disobeying the demands of the user [47]. Designing a robot with a high degree of autonomy but with no means to track morally sensitive situations and no capacity for moral reasoning or for following moral rules can create highly problematic challenges [29]. Morally significant situations generally arise much earlier than the debate on autonomous driving or war robots sometimes suggests. In “weaker” scenarios, the robot need not make a life or death decision but rather – for example – decide how to act by weighing various moral values [1]. And even if the choices the system must make are not morally prohibited, they still often reflect a preliminary decision regarding ethically relevant values. A technical care system with a lower threshold for alerting relatives may, for example, be safer but, on the other hand, more annoying due to possible false positives and may, in extreme cases, also lead to a problematic “normalization” of its user’s lifestyle [23].
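As a minimal illustration of how such a value-laden preliminary decision can hide in a single design parameter, consider the following sketch; the threshold values and the risk score are hypothetical and serve only to illustrate the trade-off described above.

```python
# Hypothetical sketch: an alert threshold in a technical care system is not a
# neutral parameter but a preliminary decision about ethically relevant values.
ALERT_THRESHOLD = 0.3  # lower value -> fewer missed incidents, but more false alarms

def should_alert_relatives(estimated_risk: float, threshold: float = ALERT_THRESHOLD) -> bool:
    # estimated_risk is assumed to come from some activity or fall-detection model (not shown)
    return estimated_risk >= threshold

# Choosing 0.3 instead of 0.7 trades the user's undisturbed (and possibly "non-normalized")
# lifestyle against safety and the relatives' reassurance - an ELSI decision, not a purely technical one.
print(should_alert_relatives(0.45))  # True with the low threshold; with threshold=0.7 it would be False
```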
Participatory Issues
A robot’s degree of autonomy has implications for how it should be developed. Autonomy of a robotic system can have a positive effect on the productivity of a robot and can lead to the perception of it as a useful tool. It can also encourage the belief that the robot is smart and possesses an AI and therefore some sort of personality, but at the same time, it can be frightening for users, since its functionality can be difficult to understand [40], [52]. Autonomy of a robotic system means that its users need not take care of the machine at all times; at the same time, it means that they lose control over it [41], [44]. This loss of control can result in a reduced understanding of what the robot is doing and a less accurate forecast of what it will do next. The robot becomes less transparent to the user.
This lack of transparency and understanding could lead to diminished trust in the robot and ultimately reduce its effective usage [3], [37]. To avert such a development, it is important to include the users in the development of the robot and to tailor its abilities in accordance with the abilities that users expect. Also, the limitations of the robot’s functionalities should be made obvious, to avoid frustration and an exaggerated degree of trust in the robot.
Furthermore, it makes a difference whether the robot is used in public or non-public spaces. Public spaces are full of things and situations that are not under the control of the people in them, since these spaces must be shared with others who might have their own agendas and bring their own things to them. An autonomous robot might be more accepted in public spaces, since we are all used to being surrounded by situations and things that do not belong to us and because it is possible to retreat from a public space to a private space or to another public space.
The situation is different in private spaces, such as users’ homes. There, it is not possible for users to avoid the confrontation. They must interact with the robot and do not always know what happens with the data that is produced during the interactions, which might be quite sensitive data, such as video or microphone recordings. Therefore, it is crucial that what the robot is doing and planning to do is transparent.
To understand what users expect from an autonomous robot and to be able to design it in a manner that makes it transparent for them, it is crucial to gain a deep understanding of the stakeholders surrounding such systems. For this purpose, we propose the Living Labs method [10], which makes it possible to observe the usage of robots in real-life environments over an extended period. This method allows developers to gain deep insights regarding the usage of such systems and helps them understand what challenges such systems face.
Legal Issues (Privacy)
With a high level of autonomy, it can be presumed that the likelihood of infringements of personality rights increases (e. g. if the robot acts fully autonomously, it makes decisions that are difficult or impossible for the user to predict, which leads to a lack of transparency). With self-learning social robots, the traceability of decisions is a possible challenge for practice. The General Data Protection Regulation (GDPR) only provides general regulations (in particular, Art. 22 GDPR) for the use of Big Data and artificial intelligence (AI), which in turn can be substantiated by national laws and the rules of conduct of professional associations (Art. 40 GDPR) [48]. To counteract these transparency problems, initial approaches already exist regarding how AI can be made comprehensible [49].
Concerning autonomous robots making their own decisions in the public sphere, the question is who is responsible for the data processing [28], [45], not only regarding potential liability issues associated with possible damage caused by the robot, but also regarding data protection responsibility. For example, if a care robot goes shopping for the user in a pharmacy, data processing may occur on the way to and in the pharmacy (another question is whether the robot is able/should be empowered to enter into a contract for its user at all). Those affected by data processing (passers-by on their way to, customers in or employees of the pharmacy) must know who to contact to obtain (further) information on the processing of personal data and to assert their rights in this regard. It should be noted that missing or incorrect information (in combination with a provision that depends on the will of the data subject, such as consent or the execution of the contract) could lead to unlawful processing [2], [42]. Who is responsible for data processing depends on who can decide on the means and purposes of the processing. An autonomous robot “decides” for itself, but only within the framework defined for it in advance.
In contrast to use in public spaces, the use of an autonomous robot in a private environment does not fall within the scope of the GDPR, under certain conditions (Art. 2 para. 2 lit. c GDPR – household exemption). One of those conditions is that the manufacturer/seller/programmer of the robot has neither access to the data nor any influence on the purpose and means of data processing. The user himself is responsible on his private property; he decides what, when and how the robot should do/process. If, in addition, no economic or professional connection with data processing exists, the GDPR would not be applicable (e. g. no data subject rights would have to be considered) [9].
2.1.2 Degree of Emotionality
We divide the degree of emotionality into two subcategories: “active emotionality” and “passive emotionality.” A robot with active emotionality can show emotions and trigger empathy. The degree of passive emotionality represents the extent to which the robot can read the emotions of its users.
Ethical Issues
A high degree of active emotionality means that technical systems can become objects of our empathy and that we may come to care about them. On the one hand, this need not be morally problematic; on the other, it does not automatically lead to an ethically improved technical system. Whereas interaction and communication with “emotionalized” robots can be more natural and fluent [31], they can also lead to a problematic manipulation of users. This might, for example, be the case with a robot that makes its users do things that they do not want to do by showing negative emotions (crying, whining) until the users finally concede. Furthermore, the ability of an artificial system to read our emotional states raises ethical questions of its own. For example, the extensive reading of our emotional state (also based on data that we would not initially associate with it) raises the question of whether we always want to be transparent to these systems.
Participatory Issues
The question of whether a robot should act emotionally towards humans (active emotionalization) or should recognize the emotions of the humans surrounding it (passive emotionalization) is highly sensitive and must be answered by its users. In the case of recognizing someone’s emotions, sensitive information about the user can be revealed, information which the user might not want the robot to mention or share with others. In the case of a robot acting emotionally towards a user, some risk exists that it might be manipulative. For example, in the context of care homes and people with dementia, an emotional approach could be perceived as real and unique by the users, whereas in reality it is a program played for every person who is in contact with the robot, an “industrialization” of personal care. While such a functionality could evoke the positive feeling of being recognized, the question remains whether this is ethically good and desired by the caregivers and the family of the person. A similar situation could arise in any household. If the robot attempts to simulate emotions, a perceived relationship between human and robot could be stimulated, one which is based on program code. Again, this could be perceived positively by the user, yet it should be the user’s decision to what extent the robot should simulate emotions.
To overcome this challenge, it might be useful to let the user engage in a kind of negotiation with the program. For this purpose, an end-user development (EUD) program could be helpful [20]. This program would allow users to adjust the robot’s emotions, for example, so that the robot only shows the emotion of happiness or only shows specific emotional reactions to specific users who have authorized such behavior. In the context of passive emotionalization (reading emotions), this program could, for example, define what happens with the gathered data and whether the emotions should be read only in certain time slots, for example, during office hours or breakfast. Furthermore, it could be possible to restrict the reading of emotions to those emotions that other humans can also read and to forbid the reading of things that remain hidden to the eyes of humans, such as a person’s pulse or their typing speed. Through an EUD program, users would gain more control over the robot and be able to define the design features of the robot individually.
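A minimal sketch of what such EUD settings could look like is given below; the class and field names are hypothetical and merely illustrate the kinds of choices (allowed emotions, authorized users, time slots, human-readable signals only) mentioned above.

```python
from dataclasses import dataclass, field

@dataclass
class EmotionSettings:
    """Hypothetical end-user development (EUD) settings for a robot's emotionality."""
    allowed_displayed_emotions: set = field(default_factory=lambda: {"happiness"})
    authorized_users: set = field(default_factory=set)  # users who have authorized emotional reactions
    emotion_reading_hours: tuple = (8, 10)              # e.g. read emotions only during breakfast
    human_readable_signals_only: bool = True            # forbid pulse, typing speed, etc.
    share_emotion_data: bool = False                    # what happens with the gathered data

    def may_read_emotions(self, user_id: str, hour: int) -> bool:
        """Check whether passive emotionality is permitted for this user at this hour."""
        start, end = self.emotion_reading_hours
        return user_id in self.authorized_users and start <= hour < end

# Example: one user authorizes emotion reading for herself during breakfast only
settings = EmotionSettings(authorized_users={"alice"})
print(settings.may_read_emotions("alice", 9))  # True
print(settings.may_read_emotions("bob", 9))    # False
```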
Legal Issues (Privacy)
One example of a robot with a high level of emotionality could be a robot that helps children with autism to interpret and learn emotions. Emotional manipulation to induce behavior could be an infringement of personality rights. Regarding the right of personality, it must be considered that, in the field of data protection, a data protection impact assessment must be conducted for such a robot, since “new technology” is used (Art. 35 para. 1 GDPR). This impact assessment identifies possible risks of data processing and tries to minimize the determined risks: The risk exists that the sensitive data of the “transparent patient” may be passed on to third parties (doctors, insurance companies, etc.) without authorization and that the patient may lose control of his health data [8], for example, if a doctor were to send pictures/sound recordings to colleagues for advice. If data were to be unintentionally disclosed, this could lead, for example, to a loss of confidentiality, health manipulation, physical and mental damage or commercial exploitation. The (health) facility may suffer a loss of reputation or financial damages. To prevent such negative outcomes, adequate technical and organizational measures (TOMs) need to be implemented. Furthermore, certain basic settings that specify maximum data protection (privacy by design/default) should be provided. An example could be that the release of data for transfer to third parties must be actively confirmed in each case [16]. Privacy by design/default could be realized, for example, by having the AI conduct only human-like emotional analyses by default. Only if the user activates the advanced function does the AI utilize the full spectrum of what is technically possible for analyzing emotions (e. g. analysis of data from facial expressions, posture, spelling or mouse movements).
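The following sketch illustrates such a privacy-by-default configuration; the channel names and their default assignment are assumptions made only for illustration, not a specification of an actual system.

```python
# Hypothetical privacy-by-default configuration: by default, only signals that a human
# observer could also perceive are analyzed; everything else requires an explicit opt-in.
DEFAULT_ANALYSIS_CHANNELS = {
    "facial_expression": True,
    "voice_tone": True,
    "posture": False,
    "spelling": False,
    "mouse_movements": False,
    "pulse": False,
}

def active_channels(user_opt_ins: dict) -> list:
    """Merge the restrictive defaults with explicit user opt-ins (opt-ins can only add channels)."""
    merged = dict(DEFAULT_ANALYSIS_CHANNELS)
    for channel, enabled in user_opt_ins.items():
        if channel in merged and enabled:
            merged[channel] = True
    return [channel for channel, on in merged.items() if on]

print(active_channels({}))                # defaults only
print(active_channels({"pulse": True}))   # after an explicit opt-in by the user
```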
2.1.3 Degree of Sociality
Whether and to what degree the robot can simulate social agency is represented by the dimension of “sociality.” The paradigmatic background for understanding a very high value of sociality is human–human interaction, which is sometimes defined as the reciprocal interaction of fully autonomous persons. However, other – less reciprocal – forms of social interaction exist, such as the relationships we have with our pets. Social robots with a high value of sociality not only interact with humans but do so in a specific manner [39]. They should – for example – simulate being a friend or partner that understands and cares about its user.
Ethical Issues
As mentioned above, human–human interaction is often regarded as the paradigmatic background for creating social human–robot interaction. In cases in which the robot should not merely be a tool or service, the aim is often to design and develop the robot as a companion, friend or roommate. However, research regarding participatory design shows [7], [46] that the robot could be much more effective as a facilitator or mediator of human–human interaction than as a replacement for social interaction. Keeping this current state of the art in mind, we should be careful not to create robots with only a very diminished capacity for social interaction and sell them as substitutes for real human–human interaction.
Furthermore, every social interaction is part of creating our individual dispositions and scripts of how we interact with other humans [24], [25]. From this perspective, the question arises as to what effects (diminished) forms of social human–robot interaction have on our dispositions to interact with our fellow humans. Having a friendly robot companion at home that simulates various forms of social interaction but never objects to or questions the orders of its user may have negative effects on how users are disposed to interact with others. One must also keep in mind that certain moral restrictions arise from empathizing with robots, even if one knows that they do not really have emotions and only simulate them [30].
Participatory Issues
The role of robots in the social lives of humans is difficult to determine. Some people accept the robot as a social agent, while others reject this concept. Acceptance as a social agent is highly dependent on the predetermined usage and the actual user. The intended role of the robot impacts how it should be developed. If the robot is designed as a functional tool with specific tasks, it becomes less important to develop the robot applications together with its users, since its task would be to fulfill a specific duty, such as assembling car parts or mowing grass. However, if the robot is designed to have a social role, it becomes crucial to understand the users and their needs and limits.
To do so, it is important to consider the role that users would prefer their robot to occupy. Furthermore, it is important to understand how a robot should act in specific social situations. Determining this can be quite difficult, as some potential users have possibly never been confronted with a social robot. Yet, it is often just these groups that are the preferred targets of user research. Recently, for example, much research has been conducted regarding elderly people in interactions with social robots (e. g. [19] or [36]), as this group presumably needs socio-technical innovation but often has very limited encounters with robotic systems in everyday life.
It is crucial to work with users and develop these technical devices together with them to enhance their autonomy [29] and self-respect. To achieve a deep knowledge and understanding of how users interact with the robot, we propose working with design case studies [50], [51]. This method is divided into four phases. (1) In the grounded design phase, it is important to get to know the users and their living spaces in depth. (2) In the design phase, a certain use case is designed based on the knowledge gained and developed into a prototype. (3) In the evaluation phase, the developed prototype is tested in the envisaged environment. During this time, developers observe the situation, as well as the user’s interaction with the robot, and interview the user to reassess whether the intended functionality has been matched. (4) In the iteration phase, the prototype is further developed based on the knowledge gained in the third phase. Once the adjustments are complete, the prototype is again tested in the envisaged environment and evaluated. This process of iteration is continued until the robot is sufficiently developed to work in a meaningful way, according to its intended social role towards its users.
Legal Issues (Privacy)
An example of a robot with a high level of sociality could be a care robot in a nursing home that tracks the interests and movement patterns of elderly people to determine the best time for each activity with the patient. Possible legal problems in relation to such a robot could concern its approval under the Medical Devices Act, as well as the violation of privacy or human dignity through the constant surveillance of the patient/user to generate user profiles.
With regard to data protection, it must be considered that a specific provision of the GDPR applies to the creation of user profiles to ensure lawful processing. This provision could be found either in Article 6, for general data, or Article 9, for sensitive data. However, it must be determined what constitutes sensitive or, respectively, health data. According to Recital 35 GDPR, health data “should include all data pertaining to the health status of a data subject which reveal information relating to the past, current or future physical or mental health status of the data subject.” Still, the question remains whether the combination of general data, such as the height, weight and sex of a person, is sufficient to represent such a sensitive data point. At least the body mass index could be determined with these parameters, which could therefore be interpreted as health data. It is therefore more appropriate to assume a broad interpretation of the term “health data”, since not much (combined) data is required to generate information on the health status of a person [18], [38]. It can therefore be assumed that sensitive data (health data) must be collected in the context of creating a user profile. The processing of health data is generally prohibited (Art. 9 para. 1 GDPR). If the robot is to process such data nevertheless, an exception to this prohibition is required. Such an exception could be found, for example, within the framework of Art. 9 para. 2 lit. h GDPR. Accordingly, the prohibition does not apply if the processing is necessary for, for example, preventive health care, medical diagnostics or health care. In addition, processing must be conducted by or under the responsibility of qualified personnel, who must also be subject to professional secrecy (Art. 9 para. 3 GDPR). These conditions must be considered when determining which data should be collected for the user profile.
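As a simple illustration of the point about combined general data (the numbers are made up for illustration only):

```python
# Two apparently "general" data points, height and weight, combine into a body mass
# index, i.e. information about a person's health status in the sense of Recital 35 GDPR.
def body_mass_index(weight_kg: float, height_m: float) -> float:
    return weight_kg / (height_m ** 2)

print(round(body_mass_index(82.0, 1.78), 1))  # 25.9 - already health-related information
```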
Alternatively, obtaining consent could also make the processing of health data lawful. To this end, the data subject must explicitly consent to the processing. Within the framework of consent, further requirements must also be fulfilled. For consent to be considered voluntary, for example, no imbalance should exist between the responsible person and the data subject. Therefore, the question of responsibility again arises here. Is the user himself responsible? Could any voluntariness at all exist in a nursing home–patient relationship [18]? It should also be noted that sector-specific regulations take precedence over the general regulations of the GDPR and the Federal Data Protection Act. Some factors are specific to the area of health data – it can be assumed, for example, that consent for the transfer of patient data can only be given voluntarily if this transfer is also permitted by/possible within the law [14]. In addition, the purpose of the processing must be stated so that informed consent can be given. The question thus arises as to how precisely the purposes of data processing must be described in advance [18]. Health data is regularly used for a variety of purposes (some of which have not yet been established at the early stages of Big Data analyses). In addition to treatment, support, care and emergency assistance, the purposes could also relate to health management, profitability control, quality assurance, pharmaceutical research or completely extraneous purposes, such as commercial advertising or official actions [48]. It should be noted that all applicable purposes (as well as the legal basis upon which the processing is based, according to Art. 13/14 GDPR) must be communicated to the data subject prior to the actual processing. A subsequent change of purpose is only possible under narrow conditions, which is why it is better to indicate too many possible/conceivable purposes, rather than too few, at the beginning of data processing. However, consent should also be formulated as narrowly and comprehensibly as possible to avoid overburdening the data subject.
2.1.4 Impact on (Social) Competences
Social robots interact with us, sometimes act for us and sometimes help us interact with the world. The fourth dimension that turned out to be ELSI relevant involves the question of what impact the interaction has on the competences of users. This impact can be differentiated with respect to the following questions: (1) To what extent is the robot designed to be an enhancement or extension of competences (to enable us to do new things or old things better)? (2) To what extent should it compensate for lost competences (to again do things that one was once able to do)? (3) To what extent should the robot replace competences (to do things that one is still able to do)? The value for each dimension is subject- or user-dependent, since the “replacement,” “enhancement” or “compensation” of competences is directed towards the specific competences of a specific user.
Ethical Issues
Robots can help us to regain competences we have lost, enhance our competences or replace the need to exercise our competences. In every case, some caution is warranted regarding the dependence on a technical system that is created by introducing and using social robots. In the worst-case scenario, users unlearn important competences even though they would otherwise still be able to perform the necessary actions. From an ethical and social perspective, it should be clarified which dependencies are created, to ensure that these cannot be exploited.
Another ethically relevant aspect involves identifying competences before attempting to enhance or replace them. One lesson that can be drawn from the discussion on elderly care robots, which should be clear by now, is that the competences of caretakers lie not only in performing certain (mechanical/bodily) actions (bringing food) but also in engaging in social and emotional interactions (see, for example, [13], [34]). Neglecting the latter would entail focusing on only one part of the whole situation, and therefore one might be drawn to the conclusion that replacement without essential loss is possible. Seeing the various competences involved enables developers to set their aim straight: robots might be able to replace or help with heavy and toilsome work to make room for specific human competences.
Participatory Issues
Competences are crucial for every person. When it comes to the development of robotic systems, the question always arises as to which tasks the robot should fulfill and whether these tasks are performed by the robot alone or in cooperation with users [21], [35]. Quite regularly, these developments follow the paradigm of what is possible from a technical point of view, and often, this logic does not consider that people enjoy certain tasks or that they will feel diminished if these tasks are fulfilled by robots and they are thus condemned to do only that which the robot cannot. Therefore, it is important to understand the needs and preferences of the users and to gain a deep knowledge of how such robotic systems are implemented in the intended context.
For example, a robotic system that helps users with their grocery shopping, by finding specific items and collecting them from the shelves, could lead to the loss of the competence to go grocery shopping without assistance or to find products in a supermarket. It must be observed and, in a second step, negotiated whether this competence is something that users would like to keep or whether they can accept the risk of losing it.
To do so, it is necessary for stakeholders to participate in product development. Making stakeholders co-creators will drastically deepen the understanding of them and help clarify which competences users would like to maintain and which are less important.
3 Conclusion
As we hope to have made clear, we do not claim that social robots are good or bad in general but rather that specific configurations lead to specific challenges with respect to ethical, legal and participatory issues. The suggested framework emphasizes that a range of design decisions with respect to autonomy, emotionality, sociality and impact on competences lead to ELSI challenges that must be addressed.
With regard to data protection, the provision of correct information to data subjects, methods for explaining AI decisions, TOMs and compliance with privacy by design and default requirements are particularly important. Also, all purposes of data-intensive processing must be determined as early as possible and be justified by appropriate permissions.
With respect to participatory aspects, it is important to understand the preferences of the user, as well as the intended usage. To do so, it is not only important to know the user; the user should also be integrated into the development, sometimes even being given the opportunity to change the robot following an EUD approach.
Regarding ethics, paradigmatic challenges or questions can be identified for each dimension: (1) High autonomy demands the capacity to make moral decisions. (2) A high degree of emotionality might manipulate users. (3) Which patterns of interaction are reinforced by interacting with a robot that possesses a high degree of sociality? (4) Does the robot promote the unlearning of important competences?
We conclude by emphasizing that practitioners can find creative design solutions for these challenges, as long as they are able to identify them in the first place. It is important that engineers and designers understand that they also make ethically relevant decisions when configuring the robots’ characteristics within these dimensions. This awareness should lead to a more reflexive design process, which also involves ethicists. In this way, it is possible to regard the ELSI dimensions not just as constraints but as a driver of better design.
About the authors
Tobias Störzinger has been a researcher at the chair of Prof. Catrin Misselhorn (University of Göttingen) for the research project GINA since April 2019. He studied Philosophy, Politics and History at the University of Stuttgart. From 2014 to 2018 he was a graduate student at the Graduate School of Excellence Manufacturing Engineering (GSaME), where he started his Ph.D. project “Metacognition in Distributed Intelligent Systems”.
Felix Carros studied at the University of Siegen and graduated with a Master of Science in Entrepreneurship and SME. In his final thesis he dealt with the exploration of health platforms in the context of an aging society. During his studies he worked for IBM, focusing primarily on the insurance industry and industry-specific start-ups, InsurTechs and FinTechs. Since February 2018 he has been employed at the Institute for Information Systems and New Media as a research assistant. There, his research interest lies in the application of social-interactive robotics. He is investigating how it can be used in the context of banking, religion and care. Furthermore, he accompanies other research projects in the field of robotics and advises on user integration in the development of robotic systems.
Anne Wierling studied at the University of Applied Sciences in Osnabrück and graduated with a Master of Law in 2018. In her final thesis she dealt with the implementation of the GDPR in SMEs. Since January 2019 she has been a researcher at the chair of Prof. Dr. Becker (since 2020 at the chair of Prof. Dr. Krebs) at the University of Siegen for the research project GINA. Her research interest lies in data protection combined with AI in the health area.
Catrin Misselhorn has been full professor of philosophy at the University of Göttingen since April 2019. From 2012 until 2019 she held the Chair for Philosophy of Science and Technology and was Director of the Institute of Philosophy at the University of Stuttgart. Before that, she was a visiting professor at the Humboldt University in Berlin, the University of Zurich and the University of Tübingen. She earned her doctorate at the University of Tübingen in 2003 and her habilitation in 2010. From 2007 to 2008 she was a Feodor Lynen Fellow at the Center of Affective Sciences in Geneva and at the Collège de France and the Institut Jean Nicod for Cognitive Sciences in Paris. Her research interests focus on philosophical problems of AI, robot and machine ethics as well as ethical aspects of human-machine interaction. She has led a number of third-party funded projects, particularly for the ethical evaluation of assistive systems in different fields, e. g. in nursing, in the workplace and in education (funded by DFG, BMBF, BMWi and Thyssenstiftung).
Dr. Rainer Wieching holds an MSc & PhD in Exercise Physiology. During the last 15 years of his professional career, he has headed a health care SME, being responsible for technical, medical & scientific aspects in global pharma marketing & medical education, especially concerning prescription drugs (cardiovascular, oncology), evidence based medicine (clinical trials, guidelines), and medical technology (ultrasound). He has successfully coordinated European R&D projects, participated in several IT/health related national research projects in Germany (BMBF) and has profound experiences in global health care market related aspects (pharma & medical technology).
References
[1] Michael Anderson and Susan L. Anderson. 2010. Robot be good. Scientific American 303, 4, 72–77. DOI: 10.1038/scientificamerican1010-72.
[2] Matthias Bäcker. 2018. In Jürgen Kühling and Benedikt Buchner. Datenschutz-Grundverordnung/BDSG. Kommentar (2. Auflage). C. H. Beck, München; AA Franck. In Peter Gola. 2018. Datenschutz-Grundverordnung (2. Auflage). Art. 13, Rn. 37.
[3] Deborah R. Billings, Kristin E. Schaefer, Jessie Y. C. Chen, and Peter A. Hancock. 2012. Human-robot interaction. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction. ACM, New York, NY, 109. DOI: 10.1145/2157689.2157709.
[4] Cynthia L. Breazeal. 2004. Designing sociable robots. MIT Press.
[5] Bundesministerium für Bildung und Forschung. Roboter für Assistenzfunktionen: Interaktionsstrategien. Retrieved from https://www.technik-zum-menschen-bringen.de/foerderung/bekanntmachungen/roboter-fuer-assistenzfunktionen-interaktionsstrategien.
[6] Steffen Burk, Martin Hennig, Benjamin Heurich, Tatiana Klepikova, Miriam Piegsa, Manuela Sixt, and Kai E. Trost. Privatheit in der digitalen Gesellschaft. Dissertation. 116.
[7] Felix Carros. 2019. Roboter in der Pflege, ein Schreckgespenst? In Mensch und Computer 2019 – Workshopband.
[8] Carsten Dochow. 2019. Telemedizin und Datenschutz. MedR 37, 8, 636–648. DOI: 10.1007/s00350-019-5295-7.
[9] Thomas Zerdick. 2018. In Eugen Ehmann and Martin Selmayr, Eds. DS-GVO. Datenschutz-Grundverordnung: Kommentar Art. 2 II lit. c, Rn. 10. (2. Auflage). Beck’sche Kurz-Kommentare. C. H. Beck; LexisNexis, München, Wien.
[10] Mats Eriksson, Veli-Pekka Niitamo, and Seija Kulkki. 2005. State-of-the-art in utilizing Living Labs approach to user-centric ICT innovation – a European approach. Center for Distance-spanning Technology, Lulea University of Technology, Lulea, Sweden.
[11] D. Feil-Seifer and M. J. Mataric. 2005. Socially Assistive Robotics. In 2005 IEEE 9th International Conference on Rehabilitation Robotics. IEEE, 465–468. DOI: 10.1109/ICORR.2005.1501143.
[12] Luciano Floridi and J. W. Sanders. 2004. On the Morality of Artificial Agents. Minds and Machines 14, 3, 349–379. DOI: 10.1023/b:mind.0000035461.63578.9d.
[13] Ann Gallagher, Dagfinn Nåden, and Dag Karterud. 2016. Robots in elder care: Some ethical questions. Nursing Ethics 23(4), 369–371. DOI: 10.1177/0969733016647297.
[14] Dirk Heckmann and Anne Paschke. 2017. Datenschutzrechtliche Aspekte von Big Data-Analysen im Gesundheitswesen. In Björn Bergh, Ed. Big Data und E-Health, DatenDebatten, Band 2. Erich Schmidt Verlag, Berlin, 69–84.
[15] Kathrin Janowski, Hannes Ritschel, Birgit Lugrin, and Elisabeth André. 2018. Sozial interagierende Roboter in der Pflege. In Oliver Bendel, Ed. Pflegeroboter. Springer Gabler, Wiesbaden, 63–87. DOI: 10.1007/978-3-658-22698-5_4.
[16] Christian Katzenmeier. 2019. Big Data, E-Health, M-Health, KI und Robotik in der Medizin. MedR 37, 4, 259–271. DOI: 10.1007/s00350-019-5180-4.
[17] Christoph Kehl. 2018. Robotik und assistive Neurotechnologien in der Pflege – gesellschaftliche Herausforderungen. Büro für Technikfolgen-Abschätzung beim Deutschen Bundestag, Berlin.
[18] Jürgen Kühling. 2019. Datenschutz im Gesundheitswesen. MedR 37, 8, 611–622. DOI: 10.1007/s00350-019-5291-y.
[19] Jasmin Lehmann, Felix Carros, David Unbehaun, Rainer Wieching, and Jens Lüssem. 2019. Einsatzfelder der sozialen Robotik in der Pflege. In Nicolas Krämer and Christian Stoffers, Eds. Digitale Transformation im Krankenhaus. Thesen, Potenziale, Anwendungen. Mediengruppe Oberfranken, Kulmbach, 88–113.
[20] Henry Lieberman, Fabio Paternò, Markus Klann, and Volker Wulf. 2006. End-user development: An emerging paradigm. In End User Development. Springer, 1–8.
[21] Gesa Lindemann. 2016. Social interaction with robots: three questions. AI & Society 31, 4, 573–575. DOI: 10.1007/s00146-015-0633-4.
[22] Wing-Yue G. Louie, Derek McColl, and Goldie Nejat. 2014. Acceptance and Attitudes Toward a Human-like Socially Assistive Robot by Older Adults. Assistive Technology: The Official Journal of RESNA 26(3), 140–150. DOI: 10.1080/10400435.2013.869703.
[23] Arne Manzeschke. 2015. MEESTAR – ein Modell angewandter Ethik im Bereich assistiver Technologien. In Technisierung des Alters – Beitrag zu einem guten Leben, 263–283.
[24] Victoria McGeer. 2007. The regulative dimension of folk psychology. In Folk Psychology Re-Assessed. Springer, 137–156.
[25] Victoria McGeer. 2015. Mind-making practices: the social infrastructure of self-knowing agency and responsibility. Philosophical Explorations 18(2), 259–281. DOI: 10.1080/13869795.2015.1032331.
[26] Catrin Misselhorn. 2013. Robots as moral agents. In Frank Rövekamp and Friederike Bosse, Eds. Ethics in Science and Society: German and Japanese Views. Iudicium, München, 30–42.
[27] Catrin Misselhorn. 2018. Maschinen mit Moral? Grundfragen der Maschinenethik. Reclams Universal-Bibliothek, 19583. Reclam, Stuttgart.
[28] Catrin Misselhorn. 2019. Digitale Rechtssubjekte, Handlungsfähigkeit und Verantwortung aus philosophischer Sicht. Verfassungsblog: On Matters Constitutional.
[29] Catrin Misselhorn. 2020. Artificial systems with moral capacities? A research design and its implementation in a geriatric care system. Artificial Intelligence 278, 103179. DOI: 10.1016/j.artint.2019.103179.
[30] Catrin Misselhorn. 2020. Is empathy with robots morally relevant? In C. Misselhorn et al., Eds. Emotional Machines – Perspectives from Affective Computing and Emotional Human-Machine Interaction. Springer, Wiesbaden, to appear.
[31] Sven Nyholm and Lily E. Frank. 2019. It Loves Me, It Loves Me Not. Techné: Research in Philosophy and Technology 23(3), 402–424. DOI: 10.5840/techne2019122110.
[32] Linda Onnasch, Xenia Maier, and Thomas Jürgensohn. 2016. Mensch-Roboter-Interaktion – Eine Taxonomie für alle Anwendungsfälle. Bundesanstalt für Arbeitsschutz und Arbeitsmedizin, Dortmund.
[33] Amit K. Pandey and Rodolphe Gelin. 2018. A mass-produced sociable humanoid robot: Pepper: the first machine of its kind. IEEE Robotics & Automation Magazine 25, 3, 40–48.
[34] Jennifer A. Parks. 2010. Lifting the Burden of Women’s Care Work: Should Robots Replace the “Human Touch”? Hypatia 25(1), 100–120. DOI: 10.1111/j.1527-2001.2009.01086.x.
[35] Nina Riether, Frank Hegel, Britta Wrede, and Gernot Horstmann. 2012. Social facilitation with social robots? In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction. ACM, New York, NY, 41. DOI: 10.1145/2157689.2157697.
[36] Giuseppe Riva and Eleonora Riva. 2019. CARESSES: The World’s First Culturally Sensitive Robots for Elderly Care. Cyberpsychology, Behavior, and Social Networking 22(6), 430.
[37] Tracy L. Sanders, Tarita Wixon, K. E. Schafer, Jessie Y. C. Chen, and P. A. Hancock. 2014. The influence of modality and transparency on trust in human-robot interaction. In 2014 IEEE International Inter-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA). IEEE, 156–159. DOI: 10.1109/CogSIMA.2014.6816556.
[38] Stefan Schleipfer. 2015. Datenschutzkonformer Umgang mit Nutzungsprofilen: Sind IP-Adressen, Cookies und Fingerprints die entscheidenden Details beim Webtracking? Zeitschrift für Datenschutz 9, 399.
[39] Johanna Seibt. 2017. Towards an Ontology of Simulated Social Interaction: Varieties of the “As If” for Robots and Humans. In Raul Hakli and Johanna Seibt, Eds. Sociality and Normativity for Robots. Springer International Publishing, Cham, 11–39. DOI: 10.1007/978-3-319-53133-5_2.
[40] T. Shibata, K. Ohkawa, and K. Tanie. 1996. Spontaneous behavior of robots for cooperation. Emotionally intelligent robot system. In Proceedings 1996 IEEE International Conference on Robotics and Automation. IEEE, Piscataway, 2426–2431. DOI: 10.1109/ROBOT.1996.506527.
[41] T. Smithers. 1997. Autonomy in robots and other agents. Brain and Cognition 34, 1, 88–106. DOI: 10.1006/brcg.1997.0908.
[42] Samuel Strauss. 2018. Dashcam und Datenschutz. Eine kritische Gegenüberstellung von alter und neuer Rechtslage. NZV – Neue Zeitschrift für Verkehrsrecht 31, 12.
[43] Jörg Strübing. 2004. Grounded Theory. VS Verlag für Sozialwissenschaften, Wiesbaden.
[44] Kristen Stubbs, Pamela J. Hinds, and David Wettergreen. 2007. Autonomy and Common Ground in Human-Robot Interaction: A Field Study. IEEE Intelligent Systems 22, 2, 42–50. DOI: 10.1109/MIS.2007.21.
[45] Gunther Teubner. 2019. Digitale Rechtssubjekte?: Haftung für das Handeln autonomer Softwareagenten. Verfassungsblog: On Matters Constitutional.
[46] David Unbehaun, Konstantin Aal, Felix Carros, Rainer Wieching, and Volker Wulf, Eds. 2019. Creative and Cognitive Activities in Social Assistive Robots and Older Adults: Results from an Exploratory Field Study with Pepper. European Society for Socially Embedded Technologies (EUSSET).
[47] Wendell Wallach. 2010. Moral Machines. Teaching Robots Right from Wrong. Oxford University Press, Oxford, New York.
[48] Thilo Weichert. 2019. Praktische Anwendungsprobleme im Gesundheitsdatenschutz. MedR 37, 8, 622–625. DOI: 10.1007/s00350-019-5292-x.
[49] Katharina Weitz, Dominik Schiller, Ruben Schlagowski, Tobias Huber, and Elisabeth André. 2019. Do you trust me? In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents – IVA ’19. ACM Press, New York, USA, 7–9. DOI: 10.1145/3308532.3329441.
[50] Volker Wulf, Claudia Müller, Volkmar Pipek, David Randall, Markus Rohde, and Gunnar Stevens. 2015. Practice-Based Computing: Empirically Grounded Conceptualizations Derived from Design Case Studies. In Volker Wulf, Kjeld Schmidt and David Randall, Eds. Designing Socially Embedded Technologies in the Real-World. Computer Supported Cooperative Work. Springer, London, 111–150. DOI: 10.1007/978-1-4471-6720-4_7.
[51] Volker Wulf, Markus Rohde, Volkmar Pipek, and Gunnar Stevens. 2011. Engaging with practices. In CSCW ’11. Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work. Association for Computing Machinery, New York, NY, 505. DOI: 10.1145/1958824.1958902.
[52] Jakub Złotowski, Kumar Yogeeswaran, and Christoph Bartneck. 2017. Can we control it? Autonomous robots threaten human identity, uniqueness, safety, and resources. International Journal of Human-Computer Studies 100, 48–54. DOI: 10.1016/j.ijhcs.2016.12.008.
© 2020 Walter de Gruyter GmbH, Berlin/Boston