Explainable Artificial Intelligence

In many areas, such as medicine, there is a need to understand how a decision was reached. Explainable Artificial Intelligence (XAI) is today mainly used to explain why a machine learning method made a specific decision. In these master's topics, we explore this concept further in order to improve the understanding, trust, and interpretability of an AI model's results.

Topic 1: Explainable Artificial Intelligence to Draw Conclusions About the Dataset Population
Popular Explainable Artificial Intelligence (XAI) methods for image classification are useful for identifying which parts of an image were important for making a decision. In this project we will develop methods to analyze trends in these explanations over many classifications.
The overall aim of these analyses is to draw conclusions about the population from which the data used to train the machine learning method was collected. A conclusion could, for example, be that “80% of the patients diagnosed with disease A separate from the healthy population by having …”.
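As a minimal sketch of how such a trend analysis could start, suppose one saliency map per classified image has already been computed (for example with Grad-CAM) and that the images are spatially aligned. Averaging the maps per class, and counting how often a region is highlighted, then supports population-level statements of the kind above. The names below (saliency_maps, labels, region_mask) are hypothetical placeholders, not part of the project description.

import numpy as np

def population_saliency(saliency_maps, labels):
    """Average per-image saliency maps per class to expose population trends.

    saliency_maps: array of shape (n_images, H, W), one XAI explanation per
                   image, assumed spatially aligned (e.g., registered scans).
    labels:        array of shape (n_images,) with the class of each image.
    Returns a dict mapping each class to its mean saliency map (H, W).
    """
    saliency_maps = np.asarray(saliency_maps, dtype=float)
    labels = np.asarray(labels)
    return {c: saliency_maps[labels == c].mean(axis=0) for c in np.unique(labels)}

def fraction_highlighting(saliency_maps, labels, region_mask, cls, thr=0.5):
    """Fraction of images of class `cls` whose saliency inside `region_mask`
    exceeds `thr`. Supports statements like "80% of patients diagnosed with
    disease A are explained by region R"."""
    maps = np.asarray(saliency_maps)[np.asarray(labels) == cls]
    scores = (maps * region_mask).max(axis=(1, 2))
    return float((scores > thr).mean())

Comparing the class means, e.g. the "sick" mean map minus the "healthy" mean map, would indicate which image regions separate the diseased group from the healthy population.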

Topic 2: Global Explainable Artificial Intelligence
Explainable Artificial Intelligence (XAI) is today mainly used to explain why a machine learning method made a specific decision, for example, “this patient was classified as sick because …”. In this project we will instead develop XAI methods that can analyze more global properties of the machine learning model, and thus draw conclusions about the population from which the training data was collected, such as “exercising reduces the risk of developing diabetes”. More specifically, we will analyze how the input variables change the output of the model and how this varies in different parts of the feature/input space.
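One established technique that fits this description, and that could serve as a starting point, is partial dependence together with individual conditional expectation (ICE) curves: vary one input variable over a grid while keeping the others fixed, and record how the model output changes. The sketch below is self-contained and assumes only a fitted model with a predict method; the names (risk_model, feature index, grid range) are illustrative assumptions.

import numpy as np

def ice_curves(model, X, feature_idx, grid):
    """Individual conditional expectation (ICE) curves.

    For each row of X, vary feature `feature_idx` over `grid` while keeping
    all other features fixed, and record the model output. The mean over the
    rows is the partial dependence curve (the global trend); the spread
    across rows shows how the effect differs in different parts of the
    feature/input space.
    """
    X = np.asarray(X, dtype=float)
    curves = np.empty((len(X), len(grid)))
    for j, value in enumerate(grid):
        X_mod = X.copy()
        X_mod[:, feature_idx] = value      # intervene on one input variable
        curves[:, j] = model.predict(X_mod)
    return curves

# Illustrative usage with a fitted diabetes-risk model and an
# "exercise hours per week" feature (both hypothetical):
# grid = np.linspace(0.0, 10.0, 25)
# curves = ice_curves(risk_model, X, feature_idx=3, grid=grid)
# pd_curve = curves.mean(axis=0)      # e.g., risk decreasing with exercise
# spread = curves.std(axis=0)         # where in feature space the effect varies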

Topic 3: Deep Learning Explainable Artificial Intelligence Methods Going Beyond Visualization 
Popular Explainable Artificial Intelligence (XAI) techniques for image classification, such as Grad-CAM, can show which parts of the input image were important for making a decision. However, these methods do not explain what characterizes those parts.

In this project, we will explore this issue. We hypothesize that it can be useful to establish a form of reference when generating explanations. For example, to explain why a deep learning method classified an image as sick, it is natural to use healthy controls as a reference.
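As a sketch of what explaining against a reference could look like, one could characterize the region highlighted by the XAI method with simple image statistics and compare them to the same statistics computed at the same location in a set of healthy controls. The choice of statistics and all names below are hypothetical illustrations, not the project's fixed method.

import numpy as np

def region_stats(image, mask):
    """Simple descriptors of the image region flagged by an XAI method
    (e.g., a thresholded Grad-CAM heatmap): mean intensity and contrast."""
    pixels = image[mask]
    return np.array([pixels.mean(), pixels.std()])

def compare_to_reference(patient_image, mask, healthy_images):
    """Z-scores of the highlighted region's statistics relative to the same
    region in spatially aligned healthy controls. A large |z| hints at what
    characterizes the region, e.g., "brighter and more heterogeneous than
    in healthy subjects"."""
    patient = region_stats(patient_image, mask)
    reference = np.array([region_stats(img, mask) for img in healthy_images])
    return (patient - reference.mean(axis=0)) / (reference.std(axis=0) + 1e-9)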

What you will learn:
Explainable AI
AI / Machine learning in general 
Statistics

Qualifications:
Hard-working and motivated
Interested in learning (the rest can be learned during the thesis work)

Contacts:
Hugo Lewi Hammer <hugo@simula.no>
Inga Strümke <inga@simula.no>
Michael Riegler <michael@simula.no>
Pål Halvorsen <paalh@simula.no> 

Keywords: AI, machine learning
Published 21 Sep. 2021 12:15 - Last modified 21 Sep. 2021 12:21

Scope (ECTS credits)

60