Stable representations in deep learning

Deep neural networks (DNNs) trained solely on data are known to suffer from instabilities. When a physical model of the observed system is available, it is therefore natural to try to increase robustness by incorporating this prior knowledge.

In contrast to physics-informed NNs, which incorporate the constraints into the loss function, Hybrid PDE-NN models embed the physical model directly in the architecture, so that the constraints are always satisfied. Despite this appealing property, the theoretical foundations of Hybrid PDE-NN models are currently underdeveloped.
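
Since the distinction between soft (loss-based) and hard (architectural) constraint enforcement is central here, a minimal sketch may help. The following PyTorch example contrasts the two on a 1D Poisson problem; the problem, the network, and the x(1 - x) boundary ansatz are illustrative assumptions, not the project's actual Hybrid PDE-NN construction.

```python
import torch

# Collocation points for the model problem -u'' = f on (0, 1),
# u(0) = u(1) = 0, with f = pi^2 sin(pi x) so that u(x) = sin(pi x).
x = torch.linspace(0.0, 1.0, 64, requires_grad=True).reshape(-1, 1)
f = torch.pi ** 2 * torch.sin(torch.pi * x)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

def pde_residual(u):
    # -u'' - f at the collocation points, via automatic differentiation
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return -d2u - f

# (a) PINN-style soft constraint: the boundary conditions enter the
#     loss as penalty terms and are only satisfied approximately.
u = net(x)
bc = net(torch.zeros(1, 1)).pow(2).sum() + net(torch.ones(1, 1)).pow(2).sum()
loss_soft = pde_residual(u).pow(2).mean() + bc

# (b) Hard (architectural) constraint: the ansatz x (1 - x) N(x)
#     vanishes at x = 0 and x = 1 for any network N, so the boundary
#     conditions hold exactly throughout training.
u = x * (1.0 - x) * net(x)
loss_hard = pde_residual(u).pow(2).mean()

# Either loss can be minimized with a standard optimizer, e.g.
# torch.optim.Adam(net.parameters()); the training loop is omitted.
```

Here the hard constraint is enforced by a simple multiplicative ansatz; Hybrid PDE-NN models instead build the (finite element) discretization of the physical model into the architecture itself.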

The goal of this project is to establish a theoretical framework for Hybrid PDE-NN models in order to improve their robustness and practicality. We aim to analyze the stability properties of Hybrid PDE-NN models, with particular emphasis on compatibility between the discrete (finite element) representation of the constraints and the architecture of the remaining part of the model. To accelerate the learning process, we will develop novel training methodologies for the hybrid models that exploit a multilevel representation of the constraints.

Requirements

  • MSc in mathematics, with an emphasis on PDEs, numerical analysis, or machine learning.
  • Candidates with knowledge of and experience with numerical methods for PDEs, in particular the finite element method, as well as machine learning and scientific programming, will be prioritized.

Supervisors

Dr. Miroslav Kuchta

Professor Kent-Andre Mardal

Dr. Mikkel Elle Lepperød

Call 2: Project start autumn 2022

This project is in call 2, starting autumn 2022. 
