1. What is this standard?
This recommended practice describes the methodology and application of Compliance by Design in Human-Robot Interaction (HRI) for social robots. Its objective is to enable robot developers to efficiently incorporate ethical behaviors into the design process, and to help stakeholders reduce regulatory conflicts between existing regulations and emerging technologies regarding the use of social robots in human spaces.
2. Why is it important?
A critical challenge for AI governance is the gap between rapid AI technological development and the slow pace of legislative change, or what we call the AI Pacing Problem. Different approaches to overcoming the AI Pacing Problem have been adopted around the world. For instance, the EU Artificial Intelligence Act incorporates a risk-based regulatory framework with harmonized standards to ensure its effectiveness in regulating emerging AI technologies. In Japan, soft law, including AI ethical guidelines and principles, is widely used as a basis for helping AI companies practice responsible self-governance. In this standardization project, I propose a third approach to overcoming the AI Pacing Problem, called Design-Centered Governance.
Design-Centered Governance provides a structured approach to embedding ethical, legal, and social considerations into the design and deployment of social robots. By aligning technological innovation with societal and regulatory expectations, this design-centered approach can potentially mitigate the underlying AI Pacing Problem. A new and promising key component of Design-Centered Governance is the integration of Legal Human-Robot Interaction (L-HRI), a concept that extends traditional HRI by incorporating legal and ethical considerations into robot design, deployment, and use. These considerations include issues of privacy, human dependence, and robot deception. The idea behind L-HRI is to create robots on a Compliance by Design basis to ensure they operate within legal and ethical boundaries from the outset and across various social environments and applications, resulting in harmonization via standardization.
3. What is a real-world example or case study of how this might help?
Relying solely on state regulation to enforce privacy standards becomes increasingly difficult as the robotics field evolves. By emphasizing privacy-sensitive design from the outset, we create robots that inherently protect personal data and meet privacy expectations more flexibly and efficiently. Consider a social robot assisting a family with everyday tasks at home. Our standard helps define key metrics (e.g., regarding consent management and data handling) that guide the robot's design and deployment, ensuring it remains privacy-compliant in real-world settings with minimal human intervention.
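For illustration, a minimal sketch of what a consent-gated data-handling check could look like inside such a home robot's software is shown below. The class and purpose names are hypothetical and are not defined by P7017; the sketch only assumes that each data-processing purpose requires an explicit, revocable consent record before sensor data is stored or forwarded, and that refusals are logged so that design-time compliance metrics (e.g., the share of data flows covered by valid consent) could later be computed.

```python
# Illustrative sketch only: these class and purpose names are hypothetical
# and are NOT taken from P7017. The point is that consent management and
# data handling can be checked by design, before any data leaves the robot.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, List


@dataclass
class ConsentRecord:
    purpose: str          # e.g. "voice_capture", "location_tracking"
    granted: bool
    timestamp: datetime


@dataclass
class ConsentRegistry:
    """Holds the household's current consent decision per data-processing purpose."""
    records: Dict[str, ConsentRecord] = field(default_factory=dict)

    def set_consent(self, purpose: str, granted: bool) -> None:
        self.records[purpose] = ConsentRecord(purpose, granted, datetime.now(timezone.utc))

    def is_allowed(self, purpose: str) -> bool:
        record = self.records.get(purpose)
        return bool(record and record.granted)


def handle_sensor_data(registry: ConsentRegistry, purpose: str, payload: bytes,
                       audit_log: List[str]) -> bool:
    """Store or forward sensor data only when consent for its purpose exists.

    Returns True if the data was processed, False if it was discarded.
    The audit entries are the kind of evidence a compliance metric could count.
    """
    stamp = datetime.now(timezone.utc).isoformat()
    if registry.is_allowed(purpose):
        audit_log.append(f"{stamp} PROCESSED purpose={purpose} bytes={len(payload)}")
        return True
    audit_log.append(f"{stamp} DISCARDED purpose={purpose} (no consent)")
    return False


if __name__ == "__main__":
    registry = ConsentRegistry()
    registry.set_consent("voice_capture", granted=True)
    log: List[str] = []
    handle_sensor_data(registry, "voice_capture", b"\x00" * 1024, log)      # consent given: processed
    handle_sensor_data(registry, "location_tracking", b"\x00" * 256, log)   # no consent: discarded
    print("\n".join(log))
```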
Anthropomorphic robots are also expected to become more prevalent in all aspects of society, such as workplaces and households. We must anticipate and try to prevent issues that could arise from their use, because such issues are hard to remedy once the robots are commercialized. For instance, communication robots are now widely used as pets or companions in households and care facilities. Our standard would ensure that these robots are designed in a way that does not infringe on people's privacy or foster unhealthy dependence on robots.
Finally, healthcare robots frequently assist vulnerable populations, including the elderly, disabled, and chronically ill, who may encounter difficulties in accessing or using technology. By prioritizing inclusive design and ethical caregiving, our standard ensures that these robots are safe and accessible, enabling all patients to interact with the robots in a meaningful manner, regardless of their disabilities. One case could be a robotic system deployed in a senior living facility to assist with mobility, medication reminders, and health monitoring. Our standard would ensure the robot is equipped with features like fail-safe mechanisms to prevent accidents and human-centric interaction protocols to foster trust and support dignified care.
4. What stage is it at?
We are currently compiling a glossary of relevant key terms, creating use cases, and organizing thoughts on the future direction of the three P7017 subgroups on Privacy-Sensitive Robots, Anthropomorphic Robots, and Healthcare Robots. We also want to focus on cultural differences in how people perceive robots, because these differences affect public acceptance of robots. In addition, the P7017 Living Lab and the P7017 Ethical Design Database have been established under the Institute for Advanced Study, Kyushu University, Japan, in support of robot ethics standardization.
5. What is the current geographical or disciplinary spread of your working members?
P7017 has voting members from Japan, USA, India, UK, Malaysia, and Brazil, and non-voting members from Japan, USA, UK, China, Norway, Finland, Poland, South Korea, Australia, Taiwan, and Indonesia (countries are listed from most to fewest members). These figures show the geographical diversity of our working members. At the same time, this diversity brings the challenge of choosing a meeting time that suits members across many different time zones.
6. What type of people might be interested or well-suited for this standards group?
Our P7017 working group welcomes anyone interested in ethical standardization for social robots via the Design-Centered Governance approach. We especially welcome people from the fields of law, public policy, philosophy, psychology, and robotics. To encourage international collaboration on this standardization project, every November we organize the IAS-FRIS Symposium on Social Robots and Ethical Design at Kyushu University and Tohoku University. Here is the website of our recent IAS-FRIS Symposium 2024.
7. What triggered your own interest in this area?
I have been a PI in the Japanese Moonshot R&D Program Goal 3: Realization of AI robots that autonomously learn, adapt to their environment, evolve in intelligence and act alongside human beings, by 2050. My project was an ELSI study of AI-enabled healthcare robots that can read patients' psychological states and provide corresponding support. With encouragement from the Moonshot PD Prof. Toshio Fukuda (Past President of IEEE) and PM Prof. Yasuhisa Hirata (AdCom Member of IEEE RAS), I began my journey in robot ethics design research as chair of the ETHICALBOTS study group (2021-2022) and then as chair of the P7017 working group, Recommended Practice for Design-Centered Human-Robot Interaction (HRI) and Governance (2022-Present).
My personal motivation for launching this project is to try a new design-centered approach to realizing human-robot co-existence through socio-technical standards. Aside from the P7017 WG, I also serve as co-chair of the Technical Committee on Robot Ethics under the IEEE Robotics and Automation Society. In past years, I have organized many workshops on robot ethics with my colleagues at IEEE ICRA and IROS. I believe robotics technology will hugely improve humans' daily lives in healthcare, transportation, entertainment, and education, but it is a double-edged sword. Hence, it is meaningful for me to promote the importance of robot ethics and standardization.
8. Call to Action
If you are interested in joining the P7017 working group as a member, please feel free to contact me via [email protected] or contact the IEEE-SA program manager Christy Bahn at [email protected] to indicate your interest in participating.