| IEEE Open Journal of the Communications Society | Volume: 1 |
| Motion Prediction and Pre-Rendering at the Edge to Enable Ultra-Low Latency Mobile 6DoF Experiences | |
| Xueshi Hou [1], Sujit Dey [1] | |
| [1] Department of Electrical and Computer Engineering, Mobile Systems Design Lab, University of California at San Diego, La Jolla, CA, USA; | |
| Keywords: Virtual reality; video streaming; six degrees of freedom (6DoF); edge computing; edge intelligence; motion prediction | |
| DOI: 10.1109/OJCOMS.2020.3032608 | |
| Source: DOAJ | |
【 Abstract 】
As virtual reality (VR) applications become popular, the desire for high-quality, lightweight, and mobile VR can potentially be met by performing the VR rendering and encoding computations at the edge and streaming the rendered video to the VR glasses. However, if rendering can begin only after the edge learns the user's new head and body position, the resulting round-trip delay will violate the ultra-low latency requirements of VR. In this article, we introduce edge intelligence, wherein the edge predicts, pre-renders, and caches the VR video in advance so that it can be streamed to the user's VR glasses as soon as it is needed. This edge-based predictive pre-rendering approach can address the challenging case of six degrees of freedom (6DoF) VR content. Compared to 360-degree videos and 3DoF (head motion only) VR, 6DoF VR supports both head and body motion, so not only the viewing direction but also the viewing position can change. Hence, our proposed VR edge intelligence comprises accurately predicting both the head and body motions of a user from past head and body motion traces. In this article, we develop a multi-task long short-term memory (LSTM) model for body motion prediction and a multi-layer perceptron (MLP) model for head motion prediction. We implement the deep learning-based motion prediction models and validate their accuracy and effectiveness using a dataset of over 840,000 head and body motion samples.
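The abstract only names the two model families, so the following is a minimal PyTorch sketch of how such predictors could be structured: a multi-task LSTM whose shared recurrent encoder feeds one regression head per body-motion coordinate, and an MLP that regresses the next head orientation from a flattened window of past samples. The window length, hidden sizes, and input/output features (3D body position, yaw/pitch/roll head orientation) are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative sketch only; architecture details are assumptions, not the paper's exact models.
import torch
import torch.nn as nn


class BodyMotionLSTM(nn.Module):
    """Multi-task LSTM: a shared recurrent encoder with one linear head per body-motion coordinate."""

    def __init__(self, input_dim=3, hidden_dim=64, num_tasks=3):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        # One small regression head per task (e.g., future x, y, z position).
        self.heads = nn.ModuleList([nn.Linear(hidden_dim, 1) for _ in range(num_tasks)])

    def forward(self, x):            # x: (batch, window, input_dim) past body positions
        _, (h, _) = self.lstm(x)     # h: (1, batch, hidden_dim) final hidden state
        h = h.squeeze(0)
        return torch.cat([head(h) for head in self.heads], dim=-1)  # (batch, num_tasks)


class HeadMotionMLP(nn.Module):
    """MLP: flattens a window of past head orientations and regresses the next orientation."""

    def __init__(self, window=10, input_dim=3, hidden_dim=128, output_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(window * input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, output_dim),
        )

    def forward(self, x):            # x: (batch, window, input_dim) past head orientations
        return self.net(x)


if __name__ == "__main__":
    body_model = BodyMotionLSTM()
    head_model = HeadMotionMLP()
    past_body = torch.randn(8, 10, 3)    # 8 users, 10 past body-position samples
    past_head = torch.randn(8, 10, 3)    # 8 users, 10 past head-orientation samples
    print(body_model(past_body).shape)   # torch.Size([8, 3]) predicted body position
    print(head_model(past_head).shape)   # torch.Size([8, 3]) predicted head orientation
```

In this kind of setup, the edge would feed the most recent motion window into both models each frame and pre-render the view for the predicted pose before the client's request arrives.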
【 License 】
Unknown