Abstract

Virtual Reality (VR) applications aim to create an immersive virtual world, which demands a high level of visual realism. The analytical material models commonly used in VR often fall short of reproducing complex real-world appearances. Recently, neural materials have emerged as a promising alternative, offering a compact yet effective representation of real-world materials. Deploying neural materials on low-power mobile VR devices poses significant challenges due to the computational complexity of neural networks and the high display resolution and frame rate requirements of VR devices (commonly 72+ frames per second). We address these challenges by leveraging texture-space shading with spatiotemporal computation amortization, driven by a compact, coarse-to-fine neural material model of extremely low capacity. Thanks to our distillation training scheme, our compact neural materials achieve visual quality comparable to NeuMIP [KMX*21] at a much lower cost. Our method reaches over 90 FPS on a mobile VR device (Meta Quest 3) even under multiple light sources.
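The distillation idea mentioned above can be illustrated in miniature: a large pretrained "teacher" material is queried to produce supervision targets, and a much smaller "student" network is regressed onto those targets. The sketch below is a hypothetical, stdlib-only illustration of that training loop, not the paper's actual architecture; the teacher here is a stand-in analytic function rather than a NeuMIP-style neural material, and all names, sizes, and hyperparameters are assumptions chosen for clarity.

```python
import math
import random

# Stand-in "teacher": an expensive material evaluation we want to distill.
# (In the paper's setting this would be a large pretrained neural material;
# here it is just a smooth analytic function of a 2D query, an assumption
# made so the example is self-contained.)
def teacher(u, v):
    return 0.5 + 0.5 * math.sin(3.0 * u) * math.cos(2.0 * v)

# Tiny "student": a one-hidden-layer tanh MLP with a handful of units,
# standing in for an extremely low-capacity neural material.
random.seed(0)
H = 8
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def student(u, v):
    h = [math.tanh(W1[i][0] * u + W1[i][1] * v + b1[i]) for i in range(H)]
    return sum(W2[i] * h[i] for i in range(H)) + b2

# Distillation loop: sample random queries, evaluate the teacher, and
# take an SGD step on the squared error of the student's prediction.
lr = 0.05
for step in range(20000):
    u, v = random.uniform(0, 1), random.uniform(0, 1)
    h = [math.tanh(W1[i][0] * u + W1[i][1] * v + b1[i]) for i in range(H)]
    pred = sum(W2[i] * h[i] for i in range(H)) + b2
    err = pred - teacher(u, v)          # gradient of 0.5 * err**2 w.r.t. pred
    for i in range(H):
        g_pre = err * W2[i] * (1.0 - h[i] ** 2)  # backprop through tanh
        W2[i] -= lr * err * h[i]
        W1[i][0] -= lr * g_pre * u
        W1[i][1] -= lr * g_pre * v
        b1[i] -= lr * g_pre
    b2 -= lr * err

# Mean squared error of the distilled student on a grid of queries.
mse = sum((student(x / 10, y / 10) - teacher(x / 10, y / 10)) ** 2
          for x in range(10) for y in range(10)) / 100.0
```

The design point this mirrors is that the expensive teacher is only evaluated offline, during training; at runtime the VR renderer evaluates just the small student, which is what makes mobile frame rates attainable.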