
Respire - Responsible Explainable Machine Learning for Sleep-related Respiratory Disorders

This project is funded by the Research Council of Norway under the "Fellesløft IV" framework (2022–2027).

Devices such as smartwatches that can continuously collect health data from virtually everybody, combined with machine learning (ML) to analyze this data, will strongly shape future health solutions. They can enable low-cost, large-scale screening and long-term monitoring of individuals to automatically detect changes in health status, support early detection of undiagnosed diseases, and personalize the treatment of patients. Applied without reflection, however, they also raise substantial challenges: (1) protection and control over the use of collected data; (2) false alarms, health anxiety, overdiagnosis, subsequent overtreatment, and medicalization; (3) the reliability, relevance, and validity of data analysis results; and (4) the inability to explain results obtained with modern ML. These challenges undermine basic ethical principles and legal rights and may hamper the fruitful use of ML in the health sector.


These challenges and opportunities will be addressed by researchers from computer science, medicine, law, and ethics. The medical focus will be on sleep-related respiratory disorders, in particular infants with obstructions of the upper respiratory tract and patients who receive long-term nocturnal mechanical ventilation support via a mask.
The core of Respire will be a framework that defines what good explanations are for different users (e.g., health professionals, patients, and ML developers) and how their quality can be evaluated. As groundwork, we will investigate: (1) the use of monitoring data from mechanical ventilators and ML to improve and personalize the treatment of patients; (2) consumer electronics for sleep monitoring of infants at home and ML to analyze the monitoring data; (3) major legal and ethical concerns, focusing on the ethical principles of autonomy and privacy, EU data protection law, and health legislation; and (4) the relationship between detecting and defining entities such as indicators, indexes, diagnoses, and diseases, including potential challenges caused by false alarms.

Publications

  • Adams, Jonathan (2023). Defending explicability as a principle for the ethics of artificial intelligence in medicine. Medicine, Health Care and Philosophy. ISSN 1386-7423. doi: 10.1007/s11019-023-10175-7. Full text in Research Archive

View all works in Cristin

  • Adams, Jonathan (2024). Epistemic Injustice and AI-Enabled mHealth.
  • Plagemann, Thomas Peter (2023). Respire – planning and execution of international collaboration projects.
  • Plagemann, Thomas Peter (2023). Opportunities and Challenges of using Smart Devices and Machine Learning for Health Applications.
  • Plagemann, Thomas Peter (2023). Responsible Explainable Machine Learning for Sleep-related Respiratory Disorders.
  • Adams, Jonathan (2023). Explanation and transparency: bioethical resources for the future of AI in medicine.
  • Goebel, Vera Hermine & Plagemann, Thomas Peter (2022). Respire - Responsible Explainable Machine Learning for Sleep-related Respiratory Disorders.


Tags: Explainable AI, Respiratory disorders, Ethics, Law
Published Jan. 6, 2022 12:55 PM - Last modified Apr. 11, 2024 1:27 PM