Augmented Reality (AR) systems that overlay digital instructions onto the physical world can enhance user performance in complex industrial tasks. Procedural instructions can often be derived from existing approved technical documentation, since reuse reduces authoring effort and ensures compliance with established standards. This paper examines the potential of incorporating a Large Language Model (LLM) to add an intelligent, conversational assistant to a static AR manual. We conducted an A/B user study (N = 36) evaluating the effects of adding a conversational assistance layer for a hands-on task on a juice-mixer laboratory installation using a HoloLens 2. The baseline condition provided a PDF manual requiring manual step navigation, complemented by situated visual anchors. The second variant supplemented this interface with a conversational assistant that could interpret user queries, provide direct verbal guidance, and automatically jump to the relevant page of the manual. We evaluated both interfaces across three distinct scenarios: a linear task, a non-linear troubleshooting task requiring users to jump between pages, and a task handoff in which users had to identify the current state of a partially completed procedure. Our findings indicate that, despite longer total task completion times, participants using the AI assistant spent significantly less time actively working in the linear and non-linear scenarios, indicating improved task efficiency once system latency is excluded. Eye-tracking analysis supports this efficiency gain: the conversational interface allowed users to focus their visual attention on actively understanding and learning the procedural task. This study highlights the potential of LLM-powered agents for AR guidance and suggests that overcoming system latency is the critical barrier to their practical deployment in industrial settings.