Projects on Self-Supervised Representation Learning
Representation learning aims to obtain meaningful embedding spaces that capture important characteristics of the data. Annotations and labels, however, are expensive and hard to obtain, which makes learning such representations in a supervised fashion costly. Learning representations without these supervised signals is therefore of major interest for methods capable of parsing large amounts of data. When the supervisory signal comes from the data itself and its related priors, the methods are called self-supervised.
This project aims to develop and explore different techniques for self-supervised learning, mainly on image data, using deep learning models. The possible areas of work are:
- Contrastive learning
- Graph-based learning
- Probabilistic learning
where the learning framework is self-supervised. These areas are complementary and can be mixed depending on the problem or application.
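To give a flavor of the contrastive branch, below is a minimal NumPy sketch of the InfoNCE (NT-Xent) loss popularized by methods such as SimCLR: embeddings of two augmented views of the same image form a positive pair, and all other pairs in the batch act as negatives. This is an illustrative sketch, not a prescribed implementation; all names and the temperature value are assumptions.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE (NT-Xent) contrastive loss for two batches of embeddings.

    z1, z2: arrays of shape (N, D) holding embeddings of two augmented
    views of the same N images; z1[i] and z2[i] form a positive pair,
    and every other combination in the batch is a negative.
    """
    # L2-normalize so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)      # (2N, D)
    sim = z @ z.T / temperature               # (2N, 2N) similarity matrix
    np.fill_diagonal(sim, -np.inf)            # exclude self-similarity
    n = len(z1)
    # index of the positive partner for each row: i <-> i + N
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # row-wise cross-entropy with the positive pair as the target
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

When the two views are perfectly aligned (identical embeddings), the positive similarities are maximal and the loss is low; misaligned views drive it up, which is the signal the encoder is trained on.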
The application area and the task will be defined based on the interests, background, and capabilities of the candidate.
Desired skills:
- Excellent programming skills in Python and PyTorch
- Background in deep learning
- Background in image processing or analysis
- Knowledge of Git
Self-supervised learning is a fast-moving field. The article below summarizes its recent advances; the project, however, is not limited to the topics it covers, and its scope and application will be tailored to the applicant and the current state of the art.
- Ericsson et al. "Self-Supervised Representation Learning: Introduction, Advances, and Challenges." 2021