Semantic Understanding of Machine Learning Predictions
The main objective of this MSc project is to provide a better understanding of machine learning predictions using Semantic Web techniques.
A prominent research topic within the machine learning community is the explanation of machine learning predictions: why was input x labelled as y? There are two (non-disjoint) research categories under this topic:
- Opening the black box of machine learning algorithms [2,3].
- Providing explanations to support/understand the predictions.
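To illustrate the second category, a common way to explain "why was input x labelled as y" is to test which features of x actually drive the label. The sketch below uses a leave-one-out attribution over a toy rule-based classifier; the classifier, features, and weights are purely illustrative assumptions, not part of the project description.

```python
# Minimal sketch of prediction explanation via leave-one-out feature
# attribution. Classifier, feature names, and weights are toy values.

def classify(features):
    """Toy classifier: label an entity 'City' if its feature score is high enough."""
    weights = {"hasPopulation": 2.0, "hasMayor": 1.5, "hasAuthor": -1.0}
    score = sum(weights.get(f, 0.0) for f in features)
    return "City" if score > 1.0 else "Other"

def explain(features):
    """Attribute the prediction to each feature by removing it and
    checking whether the predicted label flips (a crude local explanation)."""
    label = classify(features)
    influential = [f for f in features
                   if classify([g for g in features if g != f]) != label]
    return label, influential

label, reasons = explain(["hasPopulation", "hasAuthor"])
# The entity is labelled 'Other'; removing 'hasAuthor' flips the label,
# so 'hasAuthor' is reported as the influential feature.
```

In the Semantic Web setting envisaged here, such influential features could in principle be lifted to ontology vocabulary (classes, properties), but how to do that is exactly the research question of the thesis.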
In this MSc thesis we aim to investigate the use of Semantic Web techniques to provide a better understanding of machine learning behavior in order to:
- Guide the generation of adequate positive and negative samples.
- Learn candidate Semantic Web rules.
- Explain the (approximate) reasons for a (positive/negative) prediction.
The target applications of the techniques implemented during the MSc project are: (1) link prediction within Knowledge Graphs, (2) Web table to Knowledge Graph matching, and (3) Ontology Alignment.
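As a concrete picture of the first target application, link prediction over a Knowledge Graph is often cast as scoring candidate triples (h, r, t) with entity and relation embeddings. The sketch below uses a TransE-style score (a triple is plausible when h + r is close to t); the entities, relation, and embedding values are hand-picked toy assumptions, not learned and not taken from the project text.

```python
import math

# Toy TransE-style link prediction: score(h, r, t) = -||h + r - t||.
# Higher (less negative) scores mean more plausible triples.
# All embeddings below are illustrative 2-d toy values.

entity = {
    "Oslo":   [1.0, 0.0],
    "Norway": [1.0, 1.0],
    "Bergen": [0.9, 0.1],
}
relation = {"capitalOf": [0.0, 1.0]}

def score(h, r, t):
    """Negative Euclidean distance between h + r and t."""
    return -math.sqrt(sum((hv + rv - tv) ** 2
                          for hv, rv, tv in zip(entity[h], relation[r], entity[t])))

# Rank candidate tails for the incomplete triple (Oslo, capitalOf, ?).
candidates = ["Norway", "Bergen"]
best = max(candidates, key=lambda t: score("Oslo", "capitalOf", t))
# best == "Norway": Oslo + capitalOf lands exactly on Norway's embedding.
```

Explaining *why* the model ranks one tail above another, ideally in terms of ontology axioms or learned Semantic Web rules, is where the project's research questions come in.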
The thesis will be jointly supervised by Dr. Ernesto Jimenez-Ruiz and Prof. Martin Giese (Analytical Solutions and Reasoning (ASR) group and Centre for Scalable Data Access (SIRIUS)), Erik Bryhn Myklebust (Norwegian Institute for Water Research) and Jiaoyan Chen (University of Oxford).
The MSc project is also placed within the scope of the objectives of the Artificial intelligence for data analytics project.
If you are interested in this project, please send an email to ernestoj [at] ifi.uio.no and we can arrange a chat. We can also discuss alternative projects combining Machine Learning and Semantic Web techniques, and/or proposals relevant to the AIDA project.
- Neural-Symbolic Learning and Reasoning: A Survey and Interpretation: https://arxiv.org/abs/1711.03902
- Semantic Explanations of Predictions: https://arxiv.org/abs/1805.10587