Tutorial at IROS 2021 - Ethical and Legal Assessments Related to Robots and Systems

Artificial Intelligence (AI) technologies, including robots, pose both challenges and opportunities for health and home care. Among the relevant aspects currently being discussed are privacy, cybersecurity, safety, diversity, and inclusion. It is also unclear which legal frameworks must be followed to ensure user safety in environments where robots are widely deployed. The tutorial will give an overview of the most pressing ethical and legal challenges surrounding the development and use of robots in human environments. It aims to raise awareness of these topics and engage the community in thinking about ways to reduce their unfavorable impact on society. We will draw on the findings of an earlier published review, supplemented with recent work and initiatives in this respect. The tutorial will illustrate the challenges related to privacy, security, safety, and diversity through several examples from the University of Oslo and Leiden University.

 

Link to 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021): https://www.iros2021.org/. 

Robots and artificial intelligence are proving to contribute effectively to an increasing number of domains; for example, robots are increasingly applied in close interaction with humans. At the same time, a growing number of people, both in the general public and in research, have started to consider the potential ethical challenges related to the development and use of such technology. There are also initiatives across countries, such as the High-Level Expert Group on Artificial Intelligence (AI HLEG) appointed by the European Commission, which has the general objective of supporting the implementation of the European Strategy on Artificial Intelligence.

This tutorial will give an overview of the most commonly expressed ethical challenges and the measures being undertaken to reduce their impact, drawing on the findings of an earlier published review supplemented with recent work and initiatives. The tutorial will also introduce how existing, and likely future, laws regulate the development and use of robots. Ethical challenges related to privacy, safety, and security also correspond to legal issues, as laws impose binding requirements [2]. For example, there are legal requirements for safety and security, and technical standards play a significant role in this context. Moreover, privacy compliance is a concern, for example in light of the European General Data Protection Regulation (GDPR), as supervisory authorities can impose substantial fines for non-compliance. In addition to privacy, cybersecurity is also a legal concern for robotics.

The tutorial will also address how the EU will probably regulate AI and robotics in the coming years. This is relevant for roboticists globally, as the EU is a sizeable market for robots. Moreover, as the proposed European legislation is the first of its kind globally, it may serve as a blueprint for new laws in other regions of the world. Currently, it is expected that a new European law, to be proposed during the spring of 2021, will directly address artificial intelligence and, at least indirectly, robotics. The two likely cornerstones of this new law are a focus on trustworthy AI (and robotics) and a risk-based approach. To achieve trustworthy technology, the European Parliament focuses on ethical principles for developing, deploying, and using artificial intelligence, robotics, and related technologies. Because the EU Parliament emphasizes ethical principles, law and ethics need to be considered in conjunction.

The risk-based approach to regulating AI and robotics seeks to legally regulate only high-risk AI (and robotics). Less risky technology development or use would thus not be covered by this new law. Initial ideas regarding obligations for high-risk AI include requirements for training data, record-keeping, the information to be provided, robustness and accuracy, and human oversight, all meant to ensure 'trustworthy AI.' In addition, safety rules and non-discrimination will probably also play a role. For developers of smart (AI-based) robotics, this means that new legal requirements will apply if the use of the technology is classified as high-risk. In that case, technology development should take these requirements into account from the outset.

The tutorial will further present the Universal Design (UD) principles [5], which so far have been used not only in physical environments to provide accessibility for people with disabilities but also in the design of web interfaces. Under the umbrella of Universal Design, the tutorial will include presentations and discussions around user inclusion and diversity. Along the same lines, the tutorial will include an introduction to the European Directive 2019/882 on the accessibility requirements for products and services [3]. Specifically, the tutorial will show why, how, and when these principles can be valuable and relevant for robot designers, engineers, researchers, and others working within the robotics domain.

Furthermore, these topics will be illustrated with concrete examples from current and previous research projects on how elderly and non-elderly users experience interaction with robots, as well as related privacy and safety issues. This part of the tutorial will connect the aspects of UD and inclusion to ethics, ethics of care, roboethics, and some of the previous work on robots and care robots carried out by leading researchers within the philosophy of technology.

Among the most important challenges are those related to privacy, safety, and security. Countermeasures can be taken at different points in time: while planning, designing, implementing, or using a robot, or a service that includes a robot. At use time, the system itself will need to perform some ethical reasoning if it operates in autonomous mode. Specific attention will be needed if multiple, fully autonomous systems are to interact and make decisions together. We are currently undertaking research in various projects where these challenges appear, including robots for the elderly at home and mental health care technology. Responsible personalization is one of the main goals of these projects. The tutorial will introduce some examples from our own and others' work and show how the challenges can be addressed from both a technical and a human perspective [1,6-8]. Ethical issues should not be seen only as challenges but also as new research opportunities contributing to more useful services and systems.
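To give a concrete flavour of what such run-time ethical reasoning can look like, the sketch below shows a minimal consequence-based action selector in the spirit of the internal-model approach of [7]. Everything in it (the candidate actions, the Outcome fields, and the hand-coded predict_outcome table) is an illustrative assumption, not part of any cited system; a real robot would obtain predicted outcomes from a simulation-based internal model.

```python
# Minimal sketch of consequence-based ethical action selection, loosely
# inspired by the internal-model idea in [7]. All names, actions, and the
# outcome table are illustrative assumptions, not an actual implementation.
from dataclasses import dataclass

@dataclass
class Outcome:
    human_harm: float     # predicted harm to nearby humans (0 = none)
    robot_harm: float     # predicted damage to the robot itself
    task_progress: float  # predicted progress toward the current task goal

def predict_outcome(action: str) -> Outcome:
    """Stand-in for an internal simulation that predicts an action's consequences."""
    table = {
        "proceed": Outcome(human_harm=0.8, robot_harm=0.0, task_progress=1.0),
        "stop":    Outcome(human_harm=0.0, robot_harm=0.0, task_progress=0.0),
        "detour":  Outcome(human_harm=0.1, robot_harm=0.2, task_progress=0.7),
    }
    return table[action]

def select_action(candidate_actions):
    """Pick the ethically preferable action: avoid human harm first,
    then robot harm, and only then maximize task progress."""
    def ethical_key(action):
        o = predict_outcome(action)
        return (o.human_harm, o.robot_harm, -o.task_progress)
    return min(candidate_actions, key=ethical_key)

print(select_action(["proceed", "stop", "detour"]))  # -> "stop" (zero predicted human harm)
```

The lexicographic ordering encodes a simple priority: avoiding human harm outranks avoiding damage to the robot, which in turn outranks task progress.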

One example of the latter is studying a robot to be located in the home of an older person living without a partner [4], with the goal of developing multi-sensor mobile robot solutions that can sense, learn, and notify caregivers about abnormal events. We have focused on three crucial aspects of such a robot: the sensing system, the motion control system, and the design and behavior of the robot from a user perspective. Key challenges for such a robot relate to privacy, security, and safety. The technological choices made with regard to both hardware and software affect how well these challenges are handled. Privacy requires a compromise between limiting sensor data collection and the ability of the system to correctly notify the caregiver when an abnormal or emergency situation has occurred. At the same time, light conditions in a home can vary considerably, from sunlight during the day to darkness at night. This restricts which sensors are most effective and means that multiple different sensors may be needed. However, rather than transmitting large amounts of personal sensor data to a caregiver facility, the local robot should train a model that represents the regular activities of the person it supports. Then, only in the case of abnormal events would the robot transmit data out of the home. For example, if the person does daily exercises, a sudden increase in heart rate would be expected, whereas it would not be for a person who does not usually do any physical activity. Only in the latter case should sensor data be forwarded out of the home, and only data that is regarded as essential for the follow-up.
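As a rough illustration of this "learn locally, transmit only on abnormal events" idea, the sketch below maintains a per-activity baseline of heart-rate readings on the robot and forwards only a minimal alert when a reading deviates strongly from the learned baseline. The activity labels, the z-score threshold, and the notify_caregiver stub are hypothetical choices made for this example and are not taken from the MECS project.

```python
# Sketch of privacy-preserving local anomaly detection: the robot learns a
# per-activity heart-rate baseline and transmits data only for abnormal events.
# Activity labels, threshold, and the notify_caregiver stub are illustrative.
from collections import defaultdict
from statistics import mean, pstdev

class LocalActivityModel:
    def __init__(self, z_threshold: float = 3.0, min_samples: int = 30):
        self.readings = defaultdict(list)   # activity label -> heart-rate samples
        self.z_threshold = z_threshold
        self.min_samples = min_samples

    def update(self, activity: str, heart_rate: float) -> bool:
        """Add a reading; return True if it should be reported as abnormal."""
        samples = self.readings[activity]
        abnormal = False
        if len(samples) >= self.min_samples:
            mu, sigma = mean(samples), pstdev(samples)
            if sigma > 0 and abs(heart_rate - mu) / sigma > self.z_threshold:
                abnormal = True
        if not abnormal:
            samples.append(heart_rate)  # only regular readings refine the baseline
        return abnormal

def notify_caregiver(activity: str, heart_rate: float) -> None:
    """Stub: forward only the essential information, not the raw sensor stream."""
    print(f"ALERT: unusual heart rate {heart_rate:.0f} bpm during '{activity}'")

model = LocalActivityModel()
for hr in [72, 75, 70, 74] * 10:       # learn the resting baseline locally
    model.update("resting", hr)
if model.update("resting", 130):       # sudden spike while resting
    notify_caregiver("resting", 130)   # only now does data leave the home
```

Because only the short alert message leaves the home, the raw sensor stream stays on the robot, which reflects the privacy trade-off described above.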

Acknowledgements

This work is partially supported by the Research Council of Norway as a part of the Multimodal Elderly Care systems (MECS) project under grant agreement 247697, the INTROducing Mental health through Adaptive Technology (INTROMAT) project under grant agreement 259293, the Collaboration on Intelligent Machines (COINMAC) project under grant agreement 261645, the Vulnerability in the Robot Society (VIROS) project under grant agreement 288285, and through its Centres of Excellence scheme, RITMO, project No. 262762.

References

  1. S. Bringsjord et al., "Hybrid Worlds: Societal and Ethical Challenges," CLAWAR Association Series on Robot Ethics and Standards, 2019. http://kryten.mm.rpi.edu/HybridWorlds.pdf
  2. E. Fosch-Villaronga and T. Mahler, "Cybersecurity, safety and robots: Strengthening the link between cybersecurity and safety in the context of care robots," Computer Law & Security Review, vol. 41, 2021.
  3. European Union, "Directive (EU) 2019/882 of the European Parliament and of the Council of 17 April 2019 on the accessibility requirements for products and services," Official Journal of the European Union, vol. 151, Jun. 2019. http://data.europa.eu/eli/dir/2019/882/oj/eng
  4. MECS, Multimodal Elderly Care systems (MECS) research project (2015–2021), funded by the Research Council of Norway. https://www.mn.uio.no/ifi/english/research/projects/mecs/
  5. M. F. Story, J. L. Mueller, and R. L. Mace, The Universal Design File: Designing for People of All Ages and Abilities, Revised Edition. Center for Universal Design, NC State University, Raleigh, NC, 1998.
  6. J. Torresen, "A Review of Future and Ethical Perspectives of Robotics and AI," Frontiers in Robotics and AI, vol. 4, article 75, 2018. https://doi.org/10.3389/frobt.2017.00075
  7. A. F. Winfield, C. Blum, and W. Liu, "Towards an ethical robot: internal models, consequences and ethical action selection," in Advances in Autonomous Robotics Systems, eds. M. Mistry, A. Leonardis, M. Witkowski, and C. Melhuish. Springer, 2014, pp. 85–96. https://doi.org/10.1007/978-3-319-10401-0_8
  8. A. F. Winfield, K. Michael, J. Pitt, and V. Evers, "Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems [Scanning the Issue]," Proceedings of the IEEE, vol. 107, no. 3, pp. 509–517, March 2019. https://ieeexplore.ieee.org/document/8662743

 
