Rendering partially occluded virtual objects in real time
SIFT (Scale-Invariant Feature Transform) is a well-known algorithm that forms the basis for many methods that need to find objects in images, compare images, or determine camera positions in a space.
In this context, SIFT describes an image using a large number of keypoint descriptors, each consisting of 128 float values. The various comparison tasks are solved by computing distances between such descriptors.
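As a minimal sketch of what such a comparison looks like, the following C++ snippet matches one query descriptor against a set of target descriptors by Euclidean distance, using Lowe's ratio test to reject ambiguous matches. The types, function names, and the 0.8 ratio threshold are illustrative assumptions, not part of PopSift's API.

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// A SIFT descriptor is a 128-dimensional float vector (illustrative type).
using Descriptor = std::array<float, 128>;

// Squared Euclidean distance between two descriptors.
float distSq(const Descriptor& a, const Descriptor& b) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < a.size(); ++i) {
        float d = a[i] - b[i];
        sum += d * d;
    }
    return sum;
}

// Brute-force matching with Lowe's ratio test: accept the nearest
// target only if it is clearly closer than the second nearest
// (distance ratio below 0.8). Returns the index of the matched
// target, or -1 if no unambiguous match exists.
int matchOne(const Descriptor& query, const std::vector<Descriptor>& targets) {
    int best = -1;
    float best1 = std::numeric_limits<float>::max();  // nearest (squared)
    float best2 = std::numeric_limits<float>::max();  // second nearest (squared)
    for (std::size_t j = 0; j < targets.size(); ++j) {
        float d = distSq(query, targets[j]);
        if (d < best1) {
            best2 = best1;
            best1 = d;
            best = static_cast<int>(j);
        } else if (d < best2) {
            best2 = d;
        }
    }
    if (best2 > 0.0f && std::sqrt(best1) < 0.8f * std::sqrt(best2)) {
        return best;
    }
    return -1;
}
```

A real-time matcher would replace this O(n·m) loop with a GPU kernel or an approximate nearest-neighbour index, but the distance computation and ratio test remain the core of the comparison.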
A while ago, we wrote PopSift, a real-time SIFT implementation. With an appropriate matcher, we will be able to compute camera positions in real time, without any additional sensors or markers, and gain a coarse understanding of the objects in the real space. It will then be possible to place virtual objects into the real space, correctly occluded and rendered at the correct depth, in real time.
This set of theses aims to solve three partial challenges:
- speed of SIFT feature matching
- fast understanding of depth, using:
  - simple SfM (structure from motion)
  - SLAM (simultaneous localization and mapping)
  - motion-vector-based depth estimation
- a method for assessing the quality of rendering partially occluded virtual objects into the real world
Languages used: CUDA, C++
Mandatory courses: Operating systems (INF3151), Networking (IN3230), Performance in distributed systems (IN5060), Heterogeneous processor programming (IN5050)