Perceptually Guided 3DGS Streaming and Rendering for Virtual Reality
IEEE/CVF WACV 2026
The proposed perceptually guided, adaptive level-of-detail 3DGS streaming and rendering framework for virtual reality.
Abstract
Recent breakthroughs in radiance fields, particularly 3D Gaussian Splatting (3DGS), have unlocked real-time, high-fidelity rendering of complex environments, enabling a broad range of applications. However, the stringent demands of virtual reality (VR), namely high refresh rates and high-resolution stereo rendering under tight compute budgets, remain beyond the reach of current 3DGS methods. Meanwhile, the wide field-of-view design of VR displays, which mimics natural human vision, offers a unique opportunity to exploit the limitations of the human visual system to reduce computational overhead without compromising perceived rendering quality. To this end, we propose a perception-guided, continuous level-of-detail (LOD) framework for 3DGS that maximizes perceived quality under a given computational budget. We distill a visual quality metric, which encodes the spatial, temporal, and peripheral characteristics of human visual perception, into a lightweight, gaze-contingent model that predicts and adaptively modulates the LOD across the user's visual field based on each region's contribution to perceptual quality. This resource-optimized modulation, guided by both scene content and user gaze behavior, enables significant runtime acceleration with minimal loss in perceived visual quality. To support low-power, untethered VR setups, we introduce an edge-cloud rendering framework that partially offloads computation to the cloud. The framework self-adapts to an edge device's network bandwidth and compute capability without requiring tedious retraining. Objective metrics and a VR user study show that, compared with vanilla and foveated LOD baselines, our method achieves a superior trade-off between computational efficiency and perceptual quality.
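To make the gaze-contingent LOD idea concrete, the Python sketch below illustrates one simple way a splat renderer could modulate detail with retinal eccentricity: screen tiles far from the gaze point keep only a fraction of their highest-importance Gaussians. This is not the paper's distilled perceptual model or edge-cloud pipeline; the function names (`lod_fraction`, `select_gaussians`), the acuity-falloff constant E2, and the tile size are illustrative assumptions only.

```python
# Hedged sketch: a generic gaze-contingent LOD schedule for a splat-based renderer.
# It is NOT the authors' distilled perceptual model; it only illustrates modulating
# detail with eccentricity from the gaze point. Constants are placeholders.

import numpy as np


def eccentricity_deg(point_xy, gaze_xy, pixels_per_degree):
    """Angular distance (degrees) of a screen point from the gaze point."""
    d = np.linalg.norm(np.asarray(point_xy, float) - np.asarray(gaze_xy, float), axis=-1)
    return d / pixels_per_degree


def lod_fraction(ecc_deg, e2=2.3, min_fraction=0.05):
    """Map eccentricity to the fraction of Gaussians retained in a region.

    Uses a cortical-magnification-style falloff 1 / (1 + ecc / E2), clamped so the
    far periphery still keeps a small budget. E2 = 2.3 deg is a placeholder value.
    """
    return np.clip(1.0 / (1.0 + ecc_deg / e2), min_fraction, 1.0)


def select_gaussians(screen_xy, importance, gaze_xy, pixels_per_degree, tile=64):
    """Per screen tile, keep only the top-importance Gaussians allowed by its LOD budget."""
    keep = np.zeros(len(screen_xy), dtype=bool)
    tiles = (screen_xy // tile).astype(int)
    for t in np.unique(tiles, axis=0):
        idx = np.where((tiles == t).all(axis=1))[0]
        center = (t + 0.5) * tile  # tile center in pixels
        frac = lod_fraction(eccentricity_deg(center, gaze_xy, pixels_per_degree))
        budget = int(np.ceil(frac * len(idx)))
        keep[idx[np.argsort(importance[idx])[::-1][:budget]]] = True
    return keep


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 2048, size=(10000, 2))   # projected Gaussian centers (pixels)
    w = rng.random(10000)                         # per-Gaussian importance (e.g., opacity * area)
    mask = select_gaussians(xy, w, gaze_xy=(1024, 1024), pixels_per_degree=20.0)
    print(f"kept {mask.sum()} / {len(mask)} Gaussians")
```

In the paper's setting, the hand-crafted falloff above would be replaced by the distilled, content- and gaze-aware quality model, and the retained budget would be chosen to fit the device's compute and bandwidth constraints.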