Abstract

Image processing techniques can modulate the pixel intensities of an image to reduce the power consumption of the display device. A simple example is uniformly dimming the entire image. Such algorithms should strive to minimize the impact on image quality while maximizing power savings. Techniques based on heuristics or human perception have been proposed, both for traditional flat panel displays and for modern display modalities such as virtual and augmented reality (VR/AR). In this paper, we focus on developing and evaluating display power-saving techniques that use machine learning (ML) in VR displays. We developed a U-Net-based technique, paired with perceptual and power-optimization loss functions, that generates spatially varying dimming maps. These dimming maps modulate input images per-pixel to produce power-efficient images. Our pipeline was validated via quantitative analysis using image quality metrics and through a subjective study. Our subjective validation provides results scaled in perceptual just-objectionable-difference (JOD) units. This data, when rescaled, allows our technique to be compared with recent studies on VR display power optimization. Our results show that participants prefer our technique over a uniform dimming baseline at high target power-saving conditions. This model and study serve as a template and baseline for future applications of deep learning to display power optimization. Model training code and data can be found at kenchen10.github.io/projects/mlpea/index.html.
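To make the per-pixel modulation step concrete, the sketch below shows how a spatially varying dimming map in [0, 1] would be applied to an image, compared against a uniform dimming baseline. This is an illustrative assumption, not the paper's implementation: the U-Net that predicts the map is omitted (a hand-made map stands in), and the power model is a toy one in which emissive-display power is proportional to mean pixel intensity.

```python
import numpy as np

def apply_dimming(image, dimming_map):
    """Modulate an image per-pixel with a dimming map in [0, 1].

    image:       H x W x C array of pixel intensities in [0, 1].
    dimming_map: H x W array of per-pixel attenuation factors.
    """
    assert image.shape[:2] == dimming_map.shape
    return image * dimming_map[..., None]

def relative_power(image):
    """Toy power model: power proportional to mean pixel intensity
    (a rough stand-in for emissive displays such as OLED)."""
    return float(image.mean())

# A fully lit 4x4 RGB test image.
img = np.ones((4, 4, 3))

# Uniform baseline: dim every pixel by 20%.
uniform = apply_dimming(img, np.full((4, 4), 0.8))

# Spatially varying map: dim the periphery more, preserve the center
# (hypothetically the perceptually salient region).
varying = np.full((4, 4), 0.7)
varying[1:3, 1:3] = 0.95
out = apply_dimming(img, varying)

print(relative_power(uniform))  # 0.8
print(relative_power(out))      # 0.7625 -- more savings than the baseline
```

In the paper's pipeline, the hand-made map would instead be predicted by the U-Net, trained so that a perceptual loss penalizes visible quality degradation while a power loss rewards lower predicted display power.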