Bandwidth reduction by depth merging in stereoscopic video

The Oculus Go reduces CPU/GPU load in 3D rendering by rendering distant objects only once and showing the same image to both eyes. This works because binocular depth perception is effective only at short distances; beyond a certain depth, the two eyes' views are practically identical.
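As a rough illustration (a minimal sketch, not the actual Oculus Go implementation), a renderer can split the scene at a cut-off depth, draw the far layer once from a centered camera, and composite it behind a per-eye near layer. The render_layer function and the cut-off value below are hypothetical stand-ins:

    import numpy as np

    W, H = 640, 480
    CUTOFF = 20.0  # assumed cut-off depth in metres; choosing it is the open question

    def render_layer(eye, z_near, z_far):
        # Stand-in for a real render pass; returns an H x W RGB image.
        return np.zeros((H, W, 3), dtype=np.uint8)

    far = render_layer("center", CUTOFF, float("inf"))  # far layer: rendered once

    frames = {}
    for eye in ("left", "right"):
        near = render_layer(eye, 0.0, CUTOFF)  # near layer: rendered per eye
        frame = far.copy()                     # both eyes share the far layer
        mask = near.any(axis=-1)               # near pixels cover the far layer
        frame[mask] = near[mask]
        frames[eye] = frame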

The technique could also be applied to stereoscopic videos of natural scenes, because depth can be estimated quite efficiently between the left- and right-eye videos. But can this be exploited to save bandwidth? Or does existing encoding, which uses motion vectors between the left and right eye, already do this perfectly?
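For illustration, disparity between the two views can be recovered with plain block matching along the horizontal epipolar lines, essentially the same machinery as motion estimation with a purely horizontal search. The sketch below uses OpenCV's StereoBM on a synthetic pair; all parameters are illustrative assumptions:

    import numpy as np
    import cv2

    # Synthetic stereo pair: the "right" view is the "left" view shifted
    # 8 px horizontally, i.e. a uniform disparity of 8 px.
    left = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
    left = cv2.GaussianBlur(left, (7, 7), 0)  # give the blocks matchable texture
    right = np.roll(left, -8, axis=1)

    # Block matching restricted to horizontal shifts between the views.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point to px

    valid = disp > 0
    print("median disparity:", np.median(disp[valid]))  # roughly 8 px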

What is a suitable cut-off depth? Does it depend on the motion intensity or contrast of the video?
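As a starting point, the standard pinhole stereo relation (disparity in pixels = f * B / Z) suggests one candidate: the depth at which on-screen disparity falls below one pixel. The numbers below (baseline, focal length, threshold) are illustrative assumptions, not values from this project:

    # Pinhole stereo relation: disparity_px = f_px * B / Z.
    B = 0.063      # baseline in metres, roughly human interpupillary distance (assumption)
    f_px = 1000.0  # focal length in pixels (assumed capture/display geometry)
    d_min = 1.0    # smallest disparity worth encoding separately, in pixels (assumption)

    z_cutoff = f_px * B / d_min
    print(f"candidate cut-off depth: {z_cutoff:.0f} m")  # 63 m for these values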

Topics: stereoscopic video, user experience, QoE

Mandatory courses: OS, networking, heterogeneous processor programming

Published Sep. 30, 2019 10:19 - Last modified Sep. 30, 2019 10:19


Scope (credits)

60