DVMS
Discrete Variational Multiple Sequence (DVMS) learning framework for the prediction of 360° head movements
This deep learning model predicts multiple plausible head-movement trajectories of users watching 360° videos with a VR headset. The necessary libraries are bundled in a Docker image but can also be installed separately; a typical Docker workflow is sketched below.
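The following commands illustrate the usual Docker pattern; the image tag `dvms` and the GPU flag are assumptions on our part, since the exact image name and entry point depend on the repository's Dockerfile:

```bash
# Build the image from the repository root (the tag name is an assumption).
docker build -t dvms .
# Run it interactively with GPU access (requires the NVIDIA container toolkit).
docker run --gpus all -it dvms
```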
Prediction of head movements in immersive media is key to designing efficient streaming systems able to focus the bandwidth budget on the visible areas of the content. Numerous proposals have therefore been made in recent years to predict head motion in 360° images and videos. However, the performance of these models is limited by a central characteristic of head motion data: its intrinsic uncertainty. Here, we propose an approach to generate multiple plausible futures of head motion in 360° videos, given a common past trajectory. To the best of our knowledge, this is the first work that considers the problem of multiple head motion prediction for 360° video streaming. This asset introduces our Discrete Variational Multiple Sequence (DVMS) learning framework, which builds on deep latent variable models. We design a training procedure to obtain a flexible and lightweight stochastic prediction model compatible with sequence-to-sequence recurrent neural architectures. Experimental results on 3 different datasets show that DVMS outperforms competitors adapted from the self-driving domain by up to 37% on prediction horizons of up to 5 seconds, at lower computational and memory costs.
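To make the idea concrete, here is a minimal, illustrative PyTorch sketch of a best-of-K sequence-to-sequence predictor in the spirit of DVMS: a recurrent encoder summarizes the past trajectory, a discrete latent variable (realized here as K learned embeddings) conditions the decoder, and each latent value yields one candidate future. Every name, layer size, and the loss choice below are our assumptions, not the authors' reference implementation, which trains the discrete latent variationally.

```python
# Illustrative sketch of a best-of-K seq2seq head-motion predictor.
# Names, sizes, and the winner-takes-all loss are assumptions, not DVMS's code.
import torch
import torch.nn as nn


class MultipleTrajectoryPredictor(nn.Module):
    """Encode a past head-motion trajectory, then decode K candidate futures,
    one per value of a discrete latent variable."""

    def __init__(self, in_dim=3, hidden=128, k=5, horizon=25):
        super().__init__()
        self.k, self.horizon = k, horizon
        self.encoder = nn.GRU(in_dim, hidden, batch_first=True)
        # One learned embedding per value of the discrete latent variable.
        self.latent = nn.Embedding(k, hidden)
        self.decoder = nn.GRUCell(in_dim, hidden)
        self.head = nn.Linear(hidden, in_dim)

    def forward(self, past):                # past: (B, T_past, in_dim)
        _, h = self.encoder(past)            # h: (1, B, hidden)
        h = h.squeeze(0)
        last = past[:, -1]                   # last observed position
        futures = []
        for j in range(self.k):
            # Condition the decoder state on the j-th latent embedding.
            state = h + self.latent.weight[j]
            pos, steps = last, []
            for _ in range(self.horizon):
                state = self.decoder(pos, state)
                pos = pos + self.head(state)  # predict a displacement
                steps.append(pos)
            futures.append(torch.stack(steps, dim=1))
        return torch.stack(futures, dim=1)   # (B, K, horizon, in_dim)


def best_of_k_loss(pred, target):
    """Winner-takes-all loss: only the candidate closest to the ground truth
    receives gradient, which pushes the K hypotheses to diversify."""
    # pred: (B, K, horizon, D), target: (B, horizon, D)
    err = ((pred - target.unsqueeze(1)) ** 2).mean(dim=(2, 3))  # (B, K)
    return err.min(dim=1).values.mean()
```

The winner-takes-all objective shown here is a common surrogate for training multi-hypothesis predictors; DVMS itself derives its objective from a discrete variational formulation, so treat this block as an approximation of the technique, not the paper's method.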
While this model may be used to predict head movements from personal data, DVMS neither saves nor sends the data it processes to generate a prediction: the data is processed locally, with no interaction with the network.