Disputation: Vegard Antun
Doctoral candidate Vegard Antun at the Department of Mathematics, Faculty of Mathematics and Natural Sciences, is defending the thesis "Stability and accuracy in compressive sensing and deep learning" for the degree of Philosophiae Doctor.
The University of Oslo is closed. The PhD defence and trial lecture will therefore be fully digital and streamed directly using Zoom. The host of the session will moderate the technicalities while the chair of the defence will moderate the disputation.
Ex auditorio questions: the chair of the defence will invite the audience to ask ex auditorio questions, either in writing or orally. Oral questions can be requested by clicking 'Participants -> Raise hand'.
Join the disputation (deactivated)
The meeting opens for participation just before 13:15 and closes to new participants approximately 15 minutes after the defence has begun.
30th June, 10:15, Zoom
Algorithm Unrolling and Deep Learning
Join the trial lecture (deactivated)
The meeting opens for participation just before 10:15 and closes to new participants approximately 15 minutes after the trial lecture has begun.
Main research findings
Artificial intelligence (AI) is changing the world in front of our eyes, raising the question: How reliable is modern AI, and can it be trusted? This thesis establishes how AI techniques used in the imaging sciences, for example medical imaging, can produce highly untrustworthy outputs, potentially leading to incorrect medical diagnoses. This can be explained in a mathematically precise way, demonstrating fundamental limitations of modern AI approaches.
Image reconstruction is the process of converting raw measurements acquired by a sampling device, such as an MR or CT scanner, into an image. Since its breakthrough in computer vision in 2012, deep learning has established itself as a state-of-the-art tool in AI, with the potential to change both society and scientific computing. Traditionally, methods in scientific computing rest on two pillars: stability and accuracy. In this thesis, we demonstrate that the stability pillar is typically absent in current deep learning and AI-based algorithms for image reconstruction, and we find that these techniques may introduce false positives or false negatives into the reconstructed images. Moreover, we design a framework that explains mathematically why these instabilities occur for AI-based methods in imaging, and why standard algorithms typically are stable.
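The distinction between accuracy and stability can be illustrated with a toy example (this sketch is not from the thesis; the matrix and perturbation are invented for illustration). A reconstruction map can be essentially perfect on clean measurements yet amplify a tiny measurement perturbation enormously, which is the kind of instability described above:

```python
import numpy as np

# Hypothetical forward model y = A x with an ill-conditioned A.
A = np.array([[1.0, 0.0],
              [0.0, 1e-6]])
x_true = np.array([1.0, 1.0])
y = A @ x_true

def reconstruct(y):
    # "Reconstruction" by direct inversion: accurate but unstable.
    return np.linalg.solve(A, y)

x_clean = reconstruct(y)                  # recovers x_true almost exactly
e = np.array([0.0, 1e-4])                 # tiny measurement perturbation
x_noisy = reconstruct(y + e)              # output changes drastically

print(np.linalg.norm(x_clean - x_true))   # near 0: high accuracy
print(np.linalg.norm(x_noisy - x_clean))  # about 100: severe instability
```

Here a perturbation of size 1e-4 in the measurements produces a change of size roughly 100 in the reconstruction, mirroring how an accurate but unstable imaging method can turn imperceptible measurement noise into large artefacts in the image.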