Summary
Color constancy allows us to perceive stable object colors under different lighting conditions by discounting the illuminant. Information about the illuminant's color can be derived from a white surface or from a specular highlight. The "brightest is white" heuristic has frequently been incorporated into illuminant estimation models. Here, we tested an alternative hypothesis: that we use structured changes in the proximal image to identify highlight regions, even when they are not the brightest elements in the scene. In computer-rendered scenes, we varied the reliability of the "brightest element" and "highlight geometry" cues and tested their effect on a color constancy task. Each scene contained a single spherical surface lit by several point lights with identical spectral properties. The surface had a uniform spectral reflectance, but a noise texture attenuated the reflectance by a variable scale factor. We tested three levels of specularity: zero (matte), low, and mid. Observers watched a 1.5-second animation and reported whether the color changes were due to a change in the illuminant or in the material. As predicted, discrimination performance for matte surfaces was near chance. However, performance improved significantly as specularity increased. Observers outperformed an ideal-observer model that relied solely on the brightest element. Notably, when the specular region appeared on a dark part of the texture, observer performance improved even further, even though the brightest-element heuristic would predict a decrease. When specular geometries were made difficult to identify by phase scrambling, observer performance dropped significantly. These results suggest that we do not simply rely on the brightest element, but rather exploit regularities of the diffuse and specular components of the proximal image to resolve ambiguities between surface and illuminant changes.
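As an illustration of the "brightest is white" heuristic discussed above, the following is a minimal sketch of a white-patch illuminant estimator: it reads the illuminant color off the highest-luminance pixel. The function name, the sum-of-channels luminance proxy, and the image layout are assumptions for illustration, not the ideal-observer model used in the study.

```python
import numpy as np

def brightest_element_illuminant(image):
    """Estimate the illuminant color via the 'brightest is white'
    (white-patch) heuristic: take the color of the pixel with the
    highest luminance. `image` is an HxWx3 array of linear RGB values.
    """
    # Simple luminance proxy: channel sum (a sketch; Rec. 709 luma
    # weights would be an alternative choice).
    luminance = image.sum(axis=-1)
    # Locate the brightest pixel and read out its RGB value.
    idx = np.unravel_index(np.argmax(luminance), luminance.shape)
    illuminant = image[idx]
    # Normalize so the estimate is a chromaticity-like direction.
    return illuminant / illuminant.sum()
```

Such an estimator fails exactly in the condition highlighted by the results: when the specular highlight falls on a dark texture region, the globally brightest pixel need not carry the illuminant's color.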