Pages tagged with «Representation Learning»

Published 10 Aug. 2023 21:24

In this project, building on our recent work on robust and secure deep learning [8], we plan to answer the following question: can strangely and maliciously constructed images, when fed to contrastive-based SSL methods whose training is otherwise carried out securely and faithfully, render the final features useless, i.e., no better than random features? A follow-up question is how much corrupted data such an attack needs. If a malicious user who uploads only 1-2% corrupted data can break the learning process (the features become like random features), that would be very alarming; if, on the other hand, a large amount of corrupt images is required, then the current systems are quite robust.
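To make the threat model concrete, here is a minimal sketch of how such an experiment could be set up in PyTorch, assuming a SimCLR-style contrastive objective. The InfoNCE loss below is the standard contrastive loss; `poison_batch` and its `poison_fraction` parameter are hypothetical illustrations, with uniform noise standing in for a real crafted poison.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    """Standard InfoNCE / NT-Xent loss between two batches of embeddings
    coming from two augmented views of the same images."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                   # (N, N) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)

def poison_batch(images, poison_fraction=0.02):
    """Replace a small fraction of a batch with 'poisoned' images.
    Uniform noise is a placeholder here; a real attack would craft
    these images so that contrastive training collapses to
    near-random features."""
    n_poison = max(1, int(poison_fraction * images.size(0)))
    idx = torch.randperm(images.size(0), device=images.device)[:n_poison]
    images = images.clone()
    images[idx] = torch.rand_like(images[idx])
    return images
```

The experiment would then sweep `poison_fraction` (e.g., from 0.01 upward) during training and compare the linear-probe accuracy of the learned features against that of a randomly initialized encoder.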

Published 7 Jan. 2022 13:59