Development and Assessment of Adversarially Robust Systems for Classification
Automated systems for image classification have improved significantly in recent years and have consequently found application in diverse fields, from customer recommendations to face detection. Recent research has shown that these systems are vulnerable to maliciously manipulated images, so-called adversarial examples, which, while appearing perfectly legitimate to a human observer, may lead the system to make incorrect decisions. This vulnerability raises obvious concerns whenever such automated systems are deployed in critical settings, and it is now the topic of intense research.
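To make the vulnerability concrete, the following is a minimal sketch (not part of the project text) of the Fast Gradient Sign Method on a hypothetical toy linear classifier: a small input perturbation, bounded in each coordinate, is enough to flip the model's decision. All weights and inputs here are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy setup: a linear classifier score = w.x + b,
# predicting class +1 when the score is positive.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.6, 0.1, 0.2])   # clean input, classified as +1


def predict(x):
    return 1 if w @ x + b > 0 else -1


# FGSM: step of size eps in the direction of the sign of the loss
# gradient w.r.t. the input. For a linear model with true label y,
# the gradient of the logistic loss w.r.t. x has sign -y * sign(w).
def fgsm(x, y, eps):
    grad_sign = -y * np.sign(w)
    return x + eps * grad_sign


x_adv = fgsm(x, y=1, eps=0.25)

print(predict(x))                  # clean input: +1
print(predict(x_adv))              # adversarial input: -1 (decision flipped)
print(np.max(np.abs(x_adv - x)))  # perturbation bounded by eps = 0.25
```

The perturbation is imperceptible in the sense that no coordinate moves by more than eps, yet the classification changes; deep networks exhibit the same behavior with visually indistinguishable images.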
The aim of this project is to understand, implement, and compare possible solutions for mitigating this vulnerability in image classification systems. This requires developing a solid understanding of the architectures of standard image classification systems; exploring solutions proposed in the literature; and implementing systems on top of existing frameworks and evaluating the results.
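One mitigation the project could evaluate is adversarial training: at each step, training examples are replaced by adversarially perturbed versions generated against the current model. Below is a minimal sketch on hypothetical synthetic data with a linear model; the data, step sizes, and perturbation budget are all illustrative assumptions, not prescriptions from the project description.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two Gaussian blobs, labels in {-1, +1}
X = np.vstack([rng.normal(-1.5, 0.5, (50, 2)),
               rng.normal(1.5, 0.5, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

w = np.zeros(2)       # linear model, no bias for simplicity
eps, lr = 0.3, 0.1    # perturbation budget and learning rate


def grad_logistic(w, X, y):
    # gradient of the mean logistic loss log(1 + exp(-y * w.x)) w.r.t. w
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))
    return -(s[:, None] * y[:, None] * X).mean(axis=0)


for _ in range(200):
    # inner step: FGSM perturbation of each example against the current
    # model (for a linear model, d(loss)/dx has sign -y * sign(w))
    X_adv = X + eps * (-y[:, None] * np.sign(w)[None, :])
    # outer step: descend the loss on the perturbed batch
    w -= lr * grad_logistic(w, X_adv, y)

acc = np.mean(np.sign(X @ w) == y)
print(acc)  # clean accuracy of the adversarially trained model
```

Training on worst-case perturbed inputs rather than clean ones is the core idea; evaluating how much clean accuracy is traded for robustness, and against which attacks, is exactly the kind of comparison the project calls for.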