Publicly available. Published by Oldenbourg Wissenschaftsverlag, July 19, 2022.

Ethical, Legal & Participatory Concerns in the Development of Human-Robot Interaction

Lessons from Eight Research Projects with Social Robots in Real-World Scenarios

Felix Carros, Tobias Störzinger, Anne Wierling, Adrian Preussner and Peter Tolmie

From the journal i-com

Abstract

Research on Human-Robot Interaction is increasing as systems become widely available and reach a level of maturity that enables smooth interactions. Yet many research projects operate with a silo mentality with regard to the participatory, ethical, and legal dimensions of social robotics. Knowledge about specific challenges is not universally shared and often has to be transferred from non-robotic contexts. We present findings along three dimensions: participatory design, ethics for social robots, and legal aspects. For three years we accompanied eight research projects deploying social robots in real-world scenarios. During that time we spoke with, observed, and, where possible, assisted the project teams, which gave us specific insights into their work. Regarding participatory design, we observed that a trust relationship with users is essential for gaining truthful insights and that a mixed-methods approach is promising. Regarding ethical aspects, we saw that ethical questions should be addressed early on. In the legal dimension, we noticed that GDPR compliance is a challenge that often requires the help of experts. This work reflects on the observation of these eight projects and collects lessons learned so that future projects can build on previous work.

1 Introduction

As the scientific, technological, and social development of social robots has progressed, questions about the ethical, legal and social implications of their deployment have gained attention [5], [12], [15]. Research regarding social robots is now an integral part of the human-computer interaction (HCI) landscape and social robots are increasingly being deployed in real-world settings.

The GINA[1] project addressed these challenges and formed part of the funded framework “Robots for Assistance”, which was supported by the German Federal Ministry of Education and Research. GINA aimed to oversee and support exchange between eight different research projects, as well as to use participatory methods to generalize and deepen the findings related to the design and development of ethical, user-friendly, and robust Human-Robot Interaction (HRI).

Participatory design [14], [18] has become one of the standard approaches to project-based interaction research and is considered an essential part of uncovering the sociocultural environment of users, so that design concepts properly fit their needs. At its core is the idea that technology should be developed together with the people who will use it. However, user acceptance and satisfaction in no way guarantee that technologies will cause no moral harm or be ethically sound. It has therefore been argued that separate ethical investigation should be undertaken as a central element in technology development [9], [12], [19]. At the same time, participatory design and ethical investigation will count for little if an innovative social robot is not allowed to be used due to a lack of legal conformity, with any breach potentially resulting in hefty fines. This makes the GDPR[2] an essential consideration, not only for projects that intend to launch a ready-to-use product on the market, but also for those whose primary focus is research. Accordingly, legal assessment represents a critical component when developing robots within research projects.

Table 1

Overview of Projects with their objectives and approach.

Pseudonym | Project Objectives and Approach | Domain
P1 | With the help of the robot Pepper, a new form of therapy for the development of socioemotional communication skills in autistic children was developed and tested. The project made use of affective computing to facilitate automatic recognition of emotional markers in a child’s voice, face and heart rate to underpin therapy scenarios where interaction between the robot and child could be individually adapted. | Therapy
P2 | The use of autonomous robotic systems designed to assist patients in various situations was investigated in rehabilitation clinics. Possible scenarios included transporting luggage or accompanying patients to appointments. A stationary robotic arm was also developed that was intended to support staff when handling medical equipment. The overall aim was to build understanding and trust of robotics among patients and staff, so that it could better support all stakeholders in their everyday tasks and activities. | Clinic
P3 | In this project, a mobile robot was modified into a shopping assistant, so that it could support users in searching for different products or in navigating through a supermarket. The robotic shopping assistant also had the capacity to shop autonomously. The overall goal was to implement and investigate different human-robot interactions for different user expectations. | Shopping
P4 | To relieve staff in nursing homes and support the residents in their everyday lives, a two-armed robot was developed and studied. In addition to serving and clearing tables, the robot was also intended to be able to offer drinks and snacks, as well as support users in shopping. | Care home
P5 | The goal of this project was to develop a social robot that could create positive experiences through lifelike interaction, so that the well-being of users could be improved. The robot was conceptualized as a roommate that could recognize the state of mind of a user and react accordingly and empathetically, so that an active dialogue between the robot and human could take place. | Care home
P6 | Here, a mobile robotic arm was developed that could safely and intuitively hand over potentially dangerous objects, such as filled cups or knives, to people with impaired vision. The robot was studied in a hospital context. | Medical
P7 | The goal of this project was to develop generic interaction patterns for resident-robot interaction in the context of elderly care. These interaction patterns sought to be applicable to three different types of robots, so that positive social experiences could be generated according to a user’s needs profile through different types of activation. | Care home
P8 | In this project, interaction strategies for service robots in private and public spaces were researched. For example, a cleaning robot cleaning the floors in public spaces was designed to interact with passersby as needed and adapt its path accordingly. Possible interactions for a humanoid household robot were also investigated, with the goal being to support users in everyday tasks by serving as a cooperative and adaptive assistant. | Public spaces

Each of the research projects overseen by GINA had its own specific challenges. These may, at first sight, appear unique. However, as will become clear, if one takes a cross-cutting perspective it is possible to identify several commonalities between the projects, which can be applied, in turn, to other research and development projects in this domain. We argue that the findings summarized and generalized in this paper can serve as useful lessons for a wide variety of applications in relation to ethics, law and the pursuit of participatory design. The relevant projects are summarized under pseudonyms in Table 1, together with a description of the methodological approach each of them adopted. All the projects had a duration of 3 to 3.5 years and have now been completed.

From the above table it can be seen that the research projects were highly diverse with regard to their approaches and objectives. The methodological focus of this paper is on a qualitative reflection on these research projects and the lessons they might offer regarding ethics, law, and participation in the design process. As part of this exercise, an exchange workshop was organized in which the participating partners were able to share and discuss how their principal findings and experiences related to these concerns. The participants were all experienced research staff employed in academia and actively involved in their respective research projects. Due to the COVID-19 pandemic, the workshop took place online via a collaborative platform.

In addition to the exchange workshop, semi-structured interviews were conducted with stakeholders from the various project partners. As with the workshop, the focus was on the areas of ethics, participation, and law. In the following, we present and discuss how findings from the research projects speak to three dimensions: ethical, legal, and participatory, each of which is covered in a separate section. The aim is to derive lessons from the projects: we first look at different methods of participatory design and how they influenced the projects, then turn to ethical aspects and how the projects implemented concepts such as MEESTAR, and finally consider legal aspects and how the GDPR influenced the development of the social robots. We conclude by summarizing, in tabular form, the lessons we learned while observing and discussing with the projects.

2 Participatory Aspects

Participatory design covers a wide range of topics and is applicable to many domains and subcategories. Its methodological aspects include user involvement, the relationship between participants and researchers, and the sustainability of research undertakings. We summarize below the relevant results and lessons learned relating to the application of participatory design in the projects.

2.1 User Involvement

Involving users is a core concern in participatory design and was uniformly pursued in all the projects. One particular challenge concerned the appropriate time to involve users in the participatory process. Two strategies emerged in this regard. First, some projects saw an advantage in involving users in design decisions early on, as these decisions could then be more easily adapted and improved upon, thereby avoiding fundamental changes later on that would incur enormous effort. However, not all projects saw it this way. A second group of projects argued that the distance between imagination and reality meant that early visions of prototypes lacked maturity and that asking users to anticipate them could lead to unrealistic expectations. This marked out a difference between projects that developed a social robot from scratch and projects that bought a ready-made system and reconfigured it to their needs without changing the robot’s fundamental design. For the latter, early user involvement was seen as useful for developing use cases, but not for developing more concrete elements. In addition, some argued that the involvement of users gave credibility to the development. Here, it was assumed that involving users right from the beginning would ensure that the development process was user-driven, with the users becoming equal partners in the research and its outcomes. This was felt to increase the likelihood of developed systems being accepted, not only by the users directly involved but by other users as well, because this level of user involvement could itself be treated as a way of marketing the system’s viability. However, this argument needs independent quantitative verification.

2.2 Methodological Aspects

Participatory design is characterized by the use of mixed methods, chosen according to the specific objectives to be attained [14]. Mixed methods [10], [16] implies using a combination of qualitative and quantitative studies. The choice of which approach to adopt first varied across the projects, though almost all used both somewhere along the way. The philosophical orientation of some projects was towards exploring the field quite broadly at first through quantitative research, with the needs for robot development being derived from large quantitative surveys. Others argued that the research field should first be uncovered and understood through qualitative methods. Both approaches are legitimate; they simply adhere to different research paradigms. The choice of method primarily depends upon the topic of inquiry [6], [17]. If the focus is on the appearance of a robot, it may be best to first design the robot and then conduct a representative quantitative study to find out whether the design is acceptable. This was overtly the position of at least one project. Early use of qualitative methods when trying to establish a robot’s appearance can be problematic because the design is something new and intangible, and the space of possibilities is limited by people’s imaginations and may remain unclear. When it comes to the design of application scenarios, where the focus is on the tasks the robot should fulfill, things are different. Many studies of robotic systems have demonstrated that developing applications with users through qualitative approaches can produce positive outcomes (see, for instance, [1], [2], [3], [4], [13]). One of the downsides of qualitative studies is that only a small group is consulted, which may be representative of only a small population. On the other hand, one of the downsides of quantitative studies is that respondents may not have understood the questionnaire correctly, so the framing of the questionnaire can be decisive for the results. Recurrent consultation of respondents within short time periods is also very costly. These arguments about the distinctions between qualitative and quantitative research are, of course, not new, but they were replayed in the observed projects and had an impact on the results obtained.

Another qualitative approach, adopted by two of the projects, was the use of Wizard-of-Oz techniques [7], [8]. The original goal here was to overcome technical limitations by making it appear to users that they were dealing with an autonomous robotic system, while in the background a human being was controlling the system and what the robot said or did. The argument put forward in this case was that the method allowed the researchers to take a robot’s perspective and thereby grasp both its limitations and its possibilities. This particularly helped when implementing adaptations to a robot, by making their impact easier to understand, as the person controlling the robot saw immediate changes in users’ reactions and interactions. It also allowed the projects to better understand the role of the robot and adapt it accordingly.
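To make the division of labour in such a setup concrete, the following is a minimal sketch of a Wizard-of-Oz relay: a hidden operator types commands, a robot interface executes them, and every action is logged with a timestamp for later analysis. The Robot class and the command vocabulary are hypothetical placeholders for illustration only; they are not drawn from any of the observed projects.

```python
import json
import time


class Robot:
    """Hypothetical stand-in for a real robot API (speech and motion calls)."""

    def say(self, text: str) -> None:
        print(f"[robot speaks] {text}")

    def move(self, direction: str) -> None:
        print(f"[robot moves] {direction}")


def wizard_of_oz_session(robot: Robot, log_path: str = "woz_log.jsonl") -> None:
    """Relay typed operator commands to the robot and log every action.

    The participant only sees the robot acting; the operator types
    'say <text>', 'move <direction>' or 'quit' at the hidden console.
    """
    with open(log_path, "a", encoding="utf-8") as log:
        while True:
            command = input("wizard> ").strip()
            if command == "quit":
                break
            action, _, argument = command.partition(" ")
            if action == "say":
                robot.say(argument)
            elif action == "move":
                robot.move(argument)
            else:
                print("unknown command")
                continue
            # Time-stamped log entries let researchers replay the session
            # and relate robot behaviour to the user's observed reactions.
            log.write(json.dumps({"t": time.time(), "action": action,
                                  "argument": argument}) + "\n")


if __name__ == "__main__":
    wizard_of_oz_session(Robot())
```

Keeping such an action log alongside observational notes is what allows a specific wizard-issued behaviour to be linked afterwards to the user reaction it provoked.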

2.3 Trust Relationships

Trust is an essential part of participatory design because the researchers depend on the honesty of the users. This makes the work of establishing a trustful relationship critical. The projects stated that earning the trust of the participants was one of the greatest challenges. Paradoxically, the results in projects with a qualitative focus were sometimes overly positive because the subjects did not dare to voice criticism out of politeness. This limited the scope for iterative improvements in the design.

The relationship that evolved between the participants and the researchers over the course of these projects made it more likely that the participants would express positive opinions. They did not want to risk offending the researchers by criticizing their work, so they tended to tell them what they thought the researchers wanted to hear, rather than what they really thought of the design. This was not always the case, but it could be observed on different occasions when several parties were interviewed about the same interaction between a robot and a human. Care residents tended to state positive views of the interaction during interviews, while care workers put forward different views about what the residents liked and disliked. This was especially the case when the researchers were not able to choose test subjects themselves at random, but were instead obliged to let host institutions, such as care homes, select the participants. This tended to drive participants to reflect only on the positive aspects of their experiences, while the negative ones were set aside. In qualitative research, the building of trust between researchers and their subjects is important. It ensures that the subjects are able to become accomplices in the development of robotic systems and that they are willing to accept this role.

A full and honest relationship provides a foundation for making it clear that criticism is desirable, carries no social consequences, and will assist in improving the design. At the same time, however, the role of such accomplices should not be overstretched by asking too much of them. It is therefore important that interviews with older users, for example, be kept as short as possible, use straightforward language and, where possible, be conducted not only online but also in person and in places that are familiar to them. Online formats have proven useful, but they can be difficult to handle for users who do not have much affinity with technology. It is also harder to build a trustful relationship online, which can lead to skewed results (as explained above).

In studies conducted with vulnerable groups, such as older people, variability in participation must also be accepted as par for the course. In care settings, for instance, residents and care staff may temporarily or entirely drop out of a study for health or private reasons, in which case new arrangements will need to be made and fresh work devoted to creating trustful relations with the new participants. In qualitative research this can result in new user groups putting forward completely different ideas and requirements. As qualitative research always works with small groups, it is not a given that a newly recruited group will arrive at the same ideas and reactions towards the social robot.

2.4 Sustainable Development

The sustainability of projects relates to the continuation of deployed developments and to how functioning developments are left after the initial research phase (in this context, the term ‘developments’ refers to the research objects, i.e., the robots and the software developed for them during the research project). One of the problems with research in real-world environments is that users become accustomed to the developments and integrate them into their everyday practices. When the research project comes to an end, the developments often collapse because the necessary personnel and resources are no longer available. This is especially the case when a large part of the research is carried out by universities, where many research projects are only funded for a limited period of time. For a participatory design approach this is not a good outcome, since the social robot was developed together with the participants, who have become accustomed to it. It also stands in the way of future work, as participants might take it negatively that everything ended as soon as the funding of the research project finished.

Within the projects we were observing, we noted two principal approaches to sustainability. For some, the project consortia had been formed in such a way as to ensure that the development was taken over by a commercial company at the end of the research project. Here, the consortia consisted of several partners with different backgrounds, some from research institutions, others from companies. The role of the company was usually, among other things, to further develop and market the product after the research phase was completed. This approach provides a certain sustainability for users because, once a product is marketed, they will continue to have access to it on the open market. The second approach focused on publishing the results, so that external researchers and developers could benefit from the findings and take the work forward. For this, apart from published papers, the documentation of open-source approaches and the creation of long-term repositories can help to underpin sustainability. Public relations via social media and individual websites can also play a role here. We found that both tended to be used in tandem. Publishing is usually a requirement for research projects; bringing the product to the market is in the interest of the companies that are involved.

Table 2

Lessons Learned on Participatory Design.

Lessons Learned | Description | Possible Measures
Qualitative user research is suitable for application development. | Qualitative studies provide the appropriate level of detail to understand the complex interaction between humans and robots. By building on these, robot development can be undertaken in a sensitive and nuanced way. | Incorporate qualitative approaches throughout the research process.
Use Wizard-of-Oz approaches to understand a robot’s potential role. | Wizard-of-Oz approaches help researchers grasp a robot’s perspective and the role it might play. By controlling a robot, they can acquire a deeper understanding of how it interacts with people. | See Wizard-of-Oz approaches not only as a simulation method, but also as a resource for understanding Human-Robot Interaction.
Develop trust relationships with participants. | In participatory design, study participants need to be as open as possible with researchers, so that the feedback is frank and honest. This makes improvements based on the feedback more robust. As research regarding social robots often takes place in relatively sensitive domains, developing trust relationships can be especially important. | Fully communicate planned activities early on and verify how stakeholders will be involved, so that replacements can be found if any of them are likely to be absent.
Think about sustainability from the outset. | The sustainable integration of a developed product can be difficult for universities to implement. After a project has run its course, participants often no longer have access to developed products because they are no longer available on the market. | Before a project begins, clarify how its outcomes will be maintained and who holds rights to future development.

All projects evinced an interest in sustainability, especially once they had worked intensively with end-users. However, not all were equally able to maintain it, because it often depended on business decisions outside of their control. The problem for the research projects was that the developments need to become regular products in order to remain permanently available to end-users. Offering products is mostly not possible for research institutes; companies have to offer them. Companies were included in the research projects, but not all of them decided to create products out of the research.

2.5 Lessons Learned and Possible Measures

Taken together, our results suggest a few lessons that can be instructive for future activities in the field of participatory design for Human-Robot Interaction. The lessons are based upon the results but are interpreted more broadly to ensure their generalizability. They indicate that the development of social robots requires a holistic approach with a strong focus upon end-user participation. The lessons are presented in Table 2.

3 Ethical Aspects

Ethical concerns range from anticipating and preventing moral harm (for example violations of the autonomy of users) to ensuring that an envisioned robot will genuinely contribute to the lives of users. Different projects tackle these issues in different ways. Some involve consortium partners who are specialized in ethics/moral philosophy and who are able to review the ethical aspects of prospective developments independently before they are implemented.

In the projects we observed, most tended to have recurrent ethics workshops, but without there being a consortium partner with an ethics background who dealt exclusively with ethical issues in the project. When conducting such workshops, it is highly common to use the MEESTAR workshop model [11]. We also found that some projects drew upon external experts for this task, while others made use of internal resources. Apart from using MEESTAR as a framework for dealing with ethical questions, most projects also undertook the following activities as part of their ‘ethics toolbox’: expert interviews; the creation of user journeys; and structured exchange between project partners and potential users and stakeholders. The latter could involve (online) surveys, qualitative interviews, living labs and/or user testing.

3.1 Ethical Methods Used

When asked (in the exchange workshop as well as in the semi-structured interviews) about the extent to which the methods the projects used to tackle ethical aspects were helpful or profitable and what lessons could be drawn from the experience, the following picture emerged:

Successful ethical analysis was strongly associated with the degree to which potential users and stakeholders were involved early on. Most projects felt it was essential to involve as many partners as possible from the outset, so that they could cultivate feedback and, where necessary, iteratively adapt their envisaged solutions.

A vital factor influencing the successful handling of ethical aspects was seen to be the timing and scheduling of relevant activities. The projects felt that ethical aspects – as with all ethical, legal, and social implications (ELSI) – should be addressed as early as possible because technological or functional changes become harder and more expensive to tackle later on. It was even suggested that an ethical evaluation, together with an assessment of a project’s overall ethical orientation, should be undertaken before the project starts. This should involve answering questions like: “What is the overall goal of this technical solution or product we are about to develop?” or “How and why is this solution beneficial for the users and society in general?”.

The projects also highlighted the importance of assigning a dedicated ethics work package in the project plan, with all of the relevant procedures, timelines and responsibilities properly identified. It was therefore suggested that ethics should remain an essential part of the application process for the funding of technological development projects. It was also argued that incorporating ethics experts in a project team should be standard practice. There was a general agreement that more time, attention, and responsibility should be assigned to the handling of ethical matters. Apart from this, it was pointed out that ethics itself lacks established and agreed-upon methods that might be systematically applied and that there was an urgent need to develop standardized ethics procedures.

3.2 Lessons Learned and Possible Measures

Before going on to list the overall lessons learned relating to ethics, it is worth examining more closely two of the points that arose during our analysis. One concerns the understanding of ethics partly expressed by the projects, the other concerns the suggested need for standardized methods. In our opinion, both lead to the same conclusion.

First, some of the projects saw engaging with users as a way of addressing the need for ethical input. While it is prima facie problematic for a technical solution to be directed against the well-being of users, a mere survey of user satisfaction cannot replace ethical reflection. Ethics is based on general principles that require continual self-reflection, and decisions cannot be legitimized solely on the basis of user satisfaction. So, in addition to ensuring the well-being of users, we also advocate more general reflection upon the ethical values, moral principles and social acceptability attaching to specific activities and proposed solutions.

Secondly, when it comes to the perceived need for a standardized method, although this is understandable, as is the associated desire for ethical guidance, it is in tension with the notion of ethics itself. For a start, ethical challenges and opportunities vary greatly depending on the proposed technical developments, making it essential that they be evaluated individually. In addition, ethical evaluation is about making reasoned and justifiable decisions. Such decisions cannot be made by working through a checklist or a standardized procedure; they require argumentative judgment and qualitative analysis tailored to each specific case. The main issue concerns the application of general ethical principles and values to the specific situation and the making of reasoned trade-offs.

This underscores the need for cooperation with dedicated ethics partners, such as trained ethicists or philosophers. Such parties can deal with ethical aspects and can be relied upon to use their expertise and methodological knowledge to make effective ethical assessments in consultation with the project team. There are important ethical aspects that cannot be identified simply through user acceptance testing and basic assessments of ethical acceptability. Trained ethicists know best how to facilitate reflection on project objectives and the overall justifiability of envisioned products, thus adding another critical layer of evaluation.

Table 3

Lessons Learned on Ethical Aspects.

Lessons Learned | Description | Possible Measures
Engage in consideration of ethics early on. | Addressing ethical issues at an early stage (preferably when applying for funding) ensures that potential moral harm can be averted and that the project can make a meaningful contribution to people’s quality of life. | Undertake ethics workshops at the application stage, asking questions about the meaning and purpose of proposed developments, as well as their moral implications.
Treat ethics as a separate work package. | Dealing with ethical aspects needs time and resources. Not only should each project partner have work packages dedicated to this in their work plan, but one partner should also take lead responsibility for their realization. | Establish clear responsibilities and work packages for ethics.
Involve user groups and stakeholders as soon as possible. | Ethical evaluation also needs to take into account how users feel about a proposed product. As user consultation may necessitate changes of direction, their early involvement is to be strongly advised. | Apply participatory methods at an early stage.
Ethics is not just about well-being. | Ethics goes beyond focusing on the well-being of potential users. It must also involve making justifiable and reasoned decisions that are based on justified principles. | Use ethics experts to facilitate joint reflection and accept their guidance.

4 Legal Aspects

With the introduction of the GDPR, rules have been laid down on how, for example, data relating to test participants must be handled, so that they are adequately protected from third parties. This has resulted in considerable additional effort for researchers and represents a particular challenge for the design of participatory studies. When preparing participant documents, data protection is seen as a very complex issue that goes beyond the immediate research interests. External advice from data protection experts is therefore required, and this can involve significant extra expense.

4.1 Early Familiarization with Data Protection

For the design and use of technology to be compliant, research projects must adhere to GDPR regulations as soon as sensitive user data (e.g., health data) is processed.

Some of the projects had a legal expert, either in their consortium or in their associated companies or universities, whom they could contact with questions regarding the GDPR. They were able to contact these people on a regular basis and therefore had no problems with consent, complying with privacy-by-design requirements, the use of robot speech recognition, or creating the inventory of their processing activities. Projects that were able to use existing templates or draw upon experience from previous projects also had less trouble with consent, establishing records of processing activities, or establishing technical and organizational measures.

However, there were also projects that did not have a direct contact person or did not realize until too late that they had one. In these projects, the designated persons had to deal with the topic for the first time. If the person responsible for data privacy must first familiarize himself or herself with the subject area, he or she may not be able to correctly estimate the scope of the work to be performed in the context of data privacy. For instance, one project did not consider who else might be interested in the data when obtaining consent, which made it impossible for collaborating partners to use the data collected. A new data collection with newly formulated declarations of consent had to be carried out, which in turn led to a loss of time. Another project had problems with consent for a study: the consent only covered audio but not video recordings, which meant that the videos could not be used to evaluate the study, as it is not possible to obtain consent retrospectively. If a corresponding study must then be conducted again, significant time problems arise. Time problems also arise if, for example, processing times are not considered in the data protection concept (one project found that, amongst other things, the pixelation of videos could take an unexpectedly long time).

Incorrectly estimating the work, and failing to recognize potential data protection difficulties as such, can lead to problems. For instance, one project indicated that person recognition might have been implemented via RFID chips instead of biometric recognition if it had been known earlier that the GDPR-compliant identification of persons could be a problem. A second project indicated that a learning system or database for voice control could have been realized through real-time processing if it had been known early on that there could be GDPR complications with the voice control that was initially planned. In addition, people who are not familiar with the subject sometimes do not take important details into account or simply overlook them. Amongst the projects we were observing, not all were aware that a register of their processing activities had to be created in accordance with GDPR Article 30. It was also sometimes necessary to clarify when consent needed to be obtained and at what point certain kinds of data must be deleted. Another project was unclear about what personal data actually is, when the information required by Articles 13 and 14 of the GDPR must be provided, and what this information consists of.
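For projects encountering Article 30 for the first time, the following is a minimal sketch of the kind of information such a register entry collects, written here as a simple data structure purely for illustration. The field names and example values are assumptions for a hypothetical voice-recording study, not legal advice; a real register should be compiled together with a data protection officer.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ProcessingActivity:
    """Simplified entry for a record of processing activities (GDPR Art. 30).

    The fields mirror the main items Article 30(1) asks for, but the list is
    abbreviated and the names are illustrative only.
    """
    controller: str                     # name and contact details of the controller
    purpose: str                        # purpose of the processing
    data_subjects: List[str]            # categories of data subjects
    data_categories: List[str]          # categories of personal data
    recipients: List[str]               # categories of recipients, incl. partners
    third_country_transfers: List[str]  # transfers outside the EU, if any
    erasure_deadline: str               # envisaged time limits for erasure
    security_measures: List[str]        # technical and organizational measures


# Usage example: a hypothetical study in which a robot records voice commands.
voice_study = ProcessingActivity(
    controller="University X, Research Group Y (contact: dpo@example.org)",
    purpose="Evaluation of voice-based ordering in a care home study",
    data_subjects=["care home residents", "care staff"],
    data_categories=["audio recordings", "order transcripts"],
    recipients=["project consortium partners"],
    third_country_transfers=[],
    erasure_deadline="12 months after study completion",
    security_measures=["pseudonymization", "encrypted storage", "access control"],
)
print(voice_study.purpose)
```

Writing such entries at the moment data collection is planned also forces early answers to the consent-scope and deletion questions described above.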

If the person responsible still has to familiarize himself or herself with the topic of data protection, it is possible that problems are not recognized at all or are recognized too late. If they are identified too late, there may not be time to find or implement alternatives. Furthermore, legal questions within the consortium might not be answered competently and, more importantly, in a timely manner, so that delays occur more frequently.

Overall, our observations of the various projects suggested that GDPR concepts should be directly addressed at the start of a project and then regularly adapted throughout its course. All too often, it is still assumed that a simple declaration formulated at the end of a project is sufficient to meet the requirements of GDPR. A rude awakening then comes when technicians inform the consortium that the corresponding requirements (for example deletion regulations) can only be implemented with an enormous expenditure of time and that, had these requirements been considered earlier, they could have been implemented much more easily. Therefore, a responsible person who is familiar with GDPR requirements should be on board right from the start. If such a person must first become familiar with the subject area, this has a negative effect on the timeline involved in moving from concept to design.

4.2 Handling GDPR-Based Insecurities

The projects dealt with legal ambiguities in a variety of different ways. One project had an expert in the consortium who recommended avoiding anything critical. This is also the approach that most projects (six out of eight) took once they faced difficulties. In one project, the initial goal was to distinguish groups of people by using cameras. The idea was that, when the robot approached people, it would be able to differentiate whether they were children, handicapped people, elderly people, etc. This would then guide the distance the robot maintained from them. However, there was a concern that this kind of differentiation might not be GDPR-compliant (i.e., amounting to the collection of biometric data), so it was decided that this functionality should not be implemented. Another project decided that patient identification could not be implemented because of the principle of data minimization outlined in GDPR Article 5. The recognition of visitors was another robot function that one project did not implement due to GDPR requirements. Personalized voice output, such as “Hello Dave, how was your surgery yesterday?”, was also not implemented because of the sensitive data this would require. An initial plan to capture camera recordings in a patient’s room was mentioned as another robot function that was set aside because of the GDPR. Another project stated that a permanently active camera was not implemented, and yet another did not implement a learning system and an internet connection for the robot due to GDPR requirements. One project even subcontracted the legal considerations to a lawyer, who advised them to have the courage to try something new, as court proceedings would probably be unavoidable anyway when working in such an innovative domain.

Another way to deal with GDPR-based insecurities is to look for alternative robot functions. After one project decided against the implementation of biometric recognition of people due to GDPR requirements, a reassessment of the use case made it clear that it would be sufficient to recognize people by their clothing instead of biometric data. Another project stated that the robot, which was also supposed to go shopping for users, could not bring the ordered goods to the ordering person as initially planned due to GDPR requirements. As an alternative, the robot now brings the goods to the place where the order was placed. Also, due to GDPR requirements, the order is not saved as an audio file but only as a text file, made possible by the use of speech-to-text.
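The order-by-text workaround is an instance of data minimization: transcribe the spoken order in memory, persist only the text, and never write the raw audio to disk. The sketch below illustrates this flow, with a hypothetical transcribe() placeholder standing in for whichever speech-to-text engine a project actually uses; it is not the implementation of the project described above.

```python
from datetime import datetime


def transcribe(audio_samples: bytes) -> str:
    """Placeholder for an on-device speech-to-text engine.

    A real project would call its chosen recognizer here; the point of the
    sketch is only where the audio lives (in memory) and what gets stored.
    """
    return "one bottle of apple juice, please"


def take_order(audio_samples: bytes, order_log: str = "orders.txt") -> str:
    """Convert a spoken order to text and persist only the transcript."""
    text = transcribe(audio_samples)

    # Only the minimal, non-biometric representation of the order is stored.
    with open(order_log, "a", encoding="utf-8") as f:
        f.write(f"{datetime.now().isoformat()}\t{text}\n")

    # The raw audio buffer is never written to disk and is discarded once
    # this function returns, in the spirit of the data minimization
    # principle of GDPR Article 5.
    return text


# Usage example with a dummy audio buffer.
print(take_order(b"\x00\x01\x02"))
```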

With the benefit of having had an overview of all the projects, we would recommend proceeding in a way that will not cost as much as a lawsuit, while still avoiding having to discard ideas. Our suggestion is to seek help from public bodies such as supervisory authorities at an early stage. If the supervisory authority gives the green light, the project can go ahead without changes. If not, there is still a chance that the supervisory authority will give guidance regarding what adjustments are necessary for an implementation to proceed. If the response from the supervisory authority is wholly negative or not likely to be received soon enough, other approaches (such as finding alternative functions for the robot) can still be considered.

4.3 Lessons Learned and Possible Measures

In the following table, we present some overall lessons learned from across the eight projects that may be helpful for future projects when dealing with GDPR considerations in the social robotics domain. We also give some possible measures that might be undertaken to address the issues raised.

Table 4

Lessons Learned on Legal Aspects.

Lessons Learned | Description | Possible Measures
Think about GDPR from the start. | Early processing of potential GDPR requirements (preferably in the application phase) helps to ensure that decisions do not have to be reversed and that targeted development can be undertaken from the start. | Involve legal partners in the application phase and requirements analysis.
Allow time for data protection requirements to be properly met. | Concepts associated with data protection can be time-consuming and require expertise to be properly realized. If this is not taken into account at the outset, additional expenses may be incurred. | Allow dedicated person-months to cover the handling of data protection concepts.
Data protection concepts should clarify whether the data will be used for further research or by cooperation partners, so that appropriate precautions can be taken. | If a partner is interested in using collected data later on, obtaining further consent can be complex and is not always possible, necessitating a new round of consent and data collection. It is important to know from the beginning who might be interested and to build this into the process. | Plan for potential extended data use and canvass partners for any prospective interest early on.

5 Conclusion

This paper has presented an investigation of the participatory design, ethical and legal aspects of eight different social robot development projects. We have explicated the lessons learned for these categories regarding particularly significant issues that arose and have looked at what measures might be undertaken to handle them. These lessons can also serve as guidelines for future projects in the context of robot development. The paper has uncovered the following take home messages for robot development projects:

With regard to participatory design, a common feature of the projects was a desire for sustainability, such that the work undertaken could be continued for those who participated in the development. This also feeds into the relationships that need to be built with study subjects, so that a basis of trust can be established. In some cases, the same participants have worked for years to facilitate robotics research, so having a development stop before the product is ready is unsatisfactory for both users and researchers. At present, innovations may not be available to participants even when research goals have been achieved.

From an ethical perspective, consideration of the ethical implications of a development needs to start as early as possible. This is best encompassed within a clearly defined work package with assigned responsibilities. It also helps significantly if there is a competent ethics partner in the consortium. It also needs to be remembered that, while participatory approaches might be an essential part of ethical evaluation, they cannot be a substitute for broader ethical reflection.

In relation to legal matters, early consideration of GDPR requirements is essential, including the identification of a suitable contact person for questions regarding GDPR. GDPR concepts relevant to a project need to be considered from the outset. This includes assessing whether cooperating partners might have an additional interest in collected data, so that this can feature as a part of the consent process. If doubts remain regarding the interpretation of individual GDPR requirements, the first step should be to try to obtain feedback from supervisory authorities, rather than risking long legal disputes or giving up on potentially valuable planned robot functionality.

In summary, rigorous research in real-world environments is subject to difficulties. Participating parties can change (e.g., care homes), new hurdles can emerge (e.g., GDPR), and framing conditions can be subject to upheaval (e.g., COVID-19). This makes it important for research projects to retain some flexibility. However, potential disruptions cannot easily be anticipated in project proposals and incorporated into work packages. It may therefore be necessary to build in buffer periods that allow for setbacks or unexpected challenges.

One final consideration to raise is that many challenges are cross-project challenges, rather than issues that concern just one single project. Research in Human-Robot Interaction is subject to specific challenges: robots are less flexible than pure software, and users often have particular assumptions about the abilities of robots. In addition, because this is an emerging technology, ethical and legal aspects are not common knowledge; many problems have already been addressed, but they are not discussed openly enough within the research community. Adopting a silo mentality between competing project consortia is therefore not always wise. In this respect, a greater level of open science and sharing between projects is clearly desirable, so that the same problems do not have to be solved again and again by different parties.

About the authors

Felix Carros

Felix Carros is a researcher at the Institute for Information Systems and New Media at the University of Siegen. There, his research interest lies in the application of social-interactive robotics. He is investigating how social robots can be used in the contexts of banking, religion and care, and makes cross-cultural comparisons between Japan and Europe. Furthermore, he accompanies other research projects in the field of robotics and advises on user integration in the development of robotic systems.

Tobias Störzinger

Tobias Störzinger has been a researcher at the chair of Prof. Catrin Misselhorn (University of Göttingen) for the research project GINA since April 2019. He studied Philosophy, Politics and History at the University of Stuttgart. From 2014 to 2018 he was a graduate student at the Graduate School of Excellence Manufacturing Engineering (GSaME), where he started his Ph.D. project “Metacognition in Distributed Intelligent Systems”.

Anne Wierling

Anne Wierling studied at the University of Applied Sciences in Osnabrück and graduated with a Master of Law in 2018. In her final thesis she dealt with the implementation of the GDPR in SMEs. Since January 2019 she has been a researcher at the chair of Prof. Dr. Becker (since 2020 at the chair of Prof. Dr. Krebs) at the University of Siegen for the research project GINA. Her research interest lies in data protection combined with AI in the health area.

Adrian Preussner

Adrian Preussner graduated in HCI and worked as a research assistant at the Institute for Information Systems and New Media at the University of Siegen. In his final thesis, he examined the attitudes and challenges of care workers who integrated social robots into their facilities. Besides mixed reality applications in SMEs, his research interest lies in the interaction design of human-centered AI technologies in the form of social robots, voice assistants and chatbots.

Peter Tolmie

Peter Tolmie has worked as an ethnographer and ethnomethodologist with several well-known figures in CSCW. He has conducted ethnographic studies across numerous settings and has published widely in both journals and conferences in the domains of Computer-Supported Cooperative Work, Ubiquitous Computing and Human-Computer Interaction.

References

[1] Azenkot, S., Feng, C., & Cakmak, M. (2016). Enabling building service robots to guide blind people: A participatory design approach. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 3–10). IEEE. https://doi.org/10.1109/HRI.2016.7451727

[2] Carros, F., Eilers, H., Langendorf, J., Gözler, M., Wieching, R., & Lüssem, J. (2022a). Roboter als intelligente Assistenten in Betreuung und Pflege – Grenzen und Perspektiven im Praxiseinsatz. In M. A. Pfannstiel (Ed.), Künstliche Intelligenz im Gesundheitswesen. Springer Gabler, Wiesbaden. https://doi.org/10.1007/978-3-658-33597-7_38

[3] Carros, F., Schwaninger, I., Preussner, A., Randall, D., Wieching, R., Fitzpatrick, G., & Wulf, V. (2022b). Care workers making use of robots: Results of a three-month study on human-robot interaction within a care home. In CHI Conference on Human Factors in Computing Systems (CHI ’22), April 29–May 5, 2022, New Orleans, LA, USA. ACM. https://doi.org/10.1145/3491102.3517435

[4] Carros, F., Meurer, J., Löffler, D., Unbehaun, D., Matthies, S., Koch, I., ... & Wulf, V. (2020). Exploring human-robot interaction with the elderly: Results from a ten-week case study in a care home. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–12). https://doi.org/10.1145/3313831.3376402

[5] Carros, F. (2019). Roboter in der Pflege, ein Schreckgespenst? In Mensch und Computer 2019 – Workshopband. Bonn: Gesellschaft für Informatik e. V. https://doi.org/10.18420/muc2019-ws-588

[6] Choy, L. T. (2014). The strengths and weaknesses of research methodology: Comparison and complimentary between qualitative and quantitative approaches. IOSR Journal of Humanities and Social Science, 19(4), 99–104. https://doi.org/10.9790/0837-194399104

[7] Dahlbäck, N., Jönsson, A., & Ahrenberg, L. (1993). Wizard of Oz studies – why and how. Knowledge-Based Systems, 6(4), 258–266. https://doi.org/10.1016/0950-7051(93)90017-N

[8] Dow, S., MacIntyre, B., Lee, J., Oezbek, C., Bolter, J. D., & Gandy, M. (2005). Wizard of Oz support throughout an iterative design process. IEEE Pervasive Computing, 4(4), 18–26. https://doi.org/10.1109/MPRV.2005.93

[9] Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d

[10] Kuckartz, U. (2014). Mixed Methods: Methodologie, Forschungsdesigns und Analyseverfahren. Springer-Verlag. https://doi.org/10.1007/978-3-531-93267-5

[11] Manzeschke, A., Weber, K., Rother, E., & Fangerau, H. (2013). Ergebnisse der Studie „Ethische Fragen im Bereich Altersgerechter Assistenzsysteme“ (neue Ausgabe). VDI.

[12] Misselhorn, C. (2018). Grundfragen der Maschinenethik (3., durchgesehene Aufl. 2019). Ditzingen: Reclam.

[13] Šabanović, S., Chang, W. L., Bennett, C. C., Piatt, J. A., & Hakken, D. (2015). A robot of my own: Participatory design of socially assistive robots for independently living older adults diagnosed with depression. In International Conference on Human Aspects of IT for the Aged Population (pp. 104–114). Cham: Springer. https://doi.org/10.1007/978-3-319-20892-3_11

[14] Schuler, D., & Namioka, A. (Eds.). (1993). Participatory Design: Principles and Practices. CRC Press.

[15] Störzinger, T., Carros, F., Wierling, A., Misselhorn, C., & Wieching, R. (2020). Categorizing social robots with respect to dimensions relevant to ethical, social and legal implications. i-com, 19(1), 47–57. https://doi.org/10.1515/icom-2020-0005

[16] Tashakkori, A., & Creswell, J. W. (2007). The new era of mixed methods. Journal of Mixed Methods Research, 1(1), 3. https://doi.org/10.1177/2345678906293042

[17] Tuli, F. (2010). The basis of distinction between qualitative and quantitative research in social science: Reflection on ontological, epistemological and methodological perspectives. Ethiopian Journal of Education and Sciences, 6(1). https://doi.org/10.4314/ejesc.v6i1.65384

[18] Wagner, I. (2018). Critical reflections on participation in design. In Socio-Informatics. Oxford University Press. https://doi.org/10.1093/oso/9780198733249.003.0008

[19] Wallach, W. (2010). Moral Machines: Teaching Robots Right from Wrong. Oxford University Press.

Published Online: 2022-07-19
Published in Print: 2022-08-26

© 2022 Walter de Gruyter GmbH, Berlin/Boston
