Abstract

Minimally verbal autistic individuals (mvASD) are often presumed to have severe cognitive and language impairments based on their poor performance on standardized assessments requiring voluntary motor responses, such as pointing. However, emerging evidence suggests that these individuals may possess latent cognitive abilities. Here, we introduce the Cued Looking Paradigm (CLP), a novel eye-tracking method that bypasses motor requirements by capturing automatic gaze responses to language-based stimuli. In our study, 35 minimally verbal autistic adolescents and adults were presented with spoken or written words, followed by a pair of images (target and foil), while their eye movements were recorded. Among mvASD participants with usable eye-tracking data (n = 30), the majority (80%) demonstrated hidden receptive language and reading abilities, as evidenced by eye-gaze measures, including temporal dynamics and spatial displacement, that were comparable to those observed in neurotypical controls. In contrast, the same mvASD individuals averaged only 57% accuracy when asked to read and point to the target picture, revealing a significant gap between responses reported via pointing and actual lexical-semantic knowledge. Furthermore, pupil dilation analysis during the tasks indicated reduced arousal recruitment in mvASD participants, potentially implicating dysregulation of the locus coeruleus-norepinephrine (LC-NE) system in the performance gap between pointing and eye gaze. These findings challenge assumptions of global intellectual limitation while confirming specific lexical-semantic competence among mvASD individuals. The results highlight the need for, and provide, alternative assessments that bypass manual motor responses. The CLP shows promise for revealing cognitive and language abilities, with important implications for both research and education.

Significance Statement

Standard language tests implicitly assume that a person can point or speak.
Although minimally verbal individuals can point, this response may not reliably reflect comprehension. Using a simple eye-tracking task that replaces pointing with automatic gaze shifts, we show that most mvASD participants accurately match spoken or written words to pictures, even though they fail the same task when pointing is required. This finding challenges the assumption that absence of speech implies absence of understanding and reveals bias in common assessments. Tools that bypass manual motor demands by using eye movements, such as the Cued Looking Paradigm, together with corresponding changes in assessment and intervention, could transform diagnosis, guide education, and open new research avenues on covert language processing.