Disputation: Vegard Antun

Doctoral candidate Vegard Antun at the Department of Mathematics, Faculty of Mathematics and Natural Sciences, is defending the thesis "Stability and accuracy in compressive sensing and deep learning" for the degree of Philosophiae Doctor.


Vegard Antun

The University of Oslo is closed. The PhD defence and trial lecture will therefore be fully digital and streamed live via Zoom. The host of the session will handle the technical aspects, while the chair of the defence will moderate the disputation.

Ex auditorio questions: the chair of the defence will invite the audience to ask ex auditorio questions, either in writing or orally. Request the floor by clicking 'Participants -> Raise hand'.

Trial lecture

30th June, 10:15, Zoom

Algorithm Unrolling and Deep Learning

  • Join the trial lecture (deactivated)
    The meeting opens for participation just before 10:15 (a.m.) and closes for new participants approximately 15 minutes after the trial lecture has begun.

Main research findings

Artificial intelligence (AI) is changing the world before our eyes, raising the question: how reliable is modern AI, and can it be trusted? This thesis establishes how AI techniques used in the imaging sciences, for example medical imaging, can produce highly untrustworthy outputs, potentially leading to incorrect medical diagnoses. This can be explained in a mathematically precise way, demonstrating fundamental limitations of modern AI approaches.

Image reconstruction is the process of converting raw measurements acquired by a sampling device, such as an MR or CT scanner, into an image. Since its breakthrough in computer vision in 2012, deep learning has established itself as a state-of-the-art tool in AI, with the potential to change both society and scientific computing. Traditionally, methods in scientific computing rest on two pillars: stability and accuracy. In this thesis, we demonstrate that the stability pillar is typically absent in current deep learning and AI-based algorithms for image reconstruction, and we find that these techniques may introduce false positives or false negatives in the reconstructed images. Moreover, we design a framework that explains mathematically why these instabilities occur for AI-based methods in imaging, and why standard algorithms typically are stable.
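The notion of stability can be sketched with a toy example (a minimal illustration only, not the thesis's algorithms or experiments): in an ill-conditioned linear inverse problem y = Ax, a tiny perturbation of the measurements can change a naive reconstruction drastically, while a regularized reconstruction barely moves.

```python
import numpy as np

# Toy illustration (not the thesis's experiments): recover x from
# measurements y = A x, where A is nearly singular.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])

x_true = np.array([1.0, 1.0])
y = A @ x_true
y_perturbed = y + np.array([0.0, 1e-3])  # tiny measurement perturbation

# Unstable reconstruction: exact inversion amplifies the perturbation.
x_naive = np.linalg.solve(A, y_perturbed)

# Stable reconstruction: Tikhonov-regularized least squares,
# x = (A^T A + lam * I)^{-1} A^T y.
lam = 1e-2
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ y_perturbed)

print("naive error:      ", np.linalg.norm(x_naive - x_true))
print("regularized error:", np.linalg.norm(x_reg - x_true))
```

Here the naive solve is off by an order of magnitude while the regularized solve stays close to the true image; the thesis studies this stability question for deep-learning-based reconstruction methods.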

The figure illustrates how an AI algorithm adds false positives (sudden dark areas, indicated with red arrows) or reconstructs a completely different image (centre right image). Top row: three images which to the human eye look identical are sampled synthetically by an MR scanner. Middle row: given the synthetically sampled MR data, an AI algorithm reconstructs three very different images. Bottom row: a standard algorithm reconstructs three almost identical images. Experiment from Antun et al., "On instabilities of deep learning in image reconstruction and the potential costs of AI", Proc. Natl. Acad. Sci. USA, 2020.

Published June 16, 2020 2:56 PM - Last modified June 30, 2020 1:42 PM