This study investigates prompt programming for K-12 learners in anticipation of language-driven robot operation. We built an “AI programmer” platform that splits roles: learners plan and test in natural language (requirements, test criteria, defect reports) while a large language model (GPT-4o) designs and revises the executable code. In a two-hour camera-app workshop with 24 Japanese students aged 9-13, none with prior LLM-based programming experience, participants iteratively specified, generated, tested, and refined features. About 80% completed a self-defined challenge task; self-reports on process understanding, creativity, transfer, and post-workshop interest averaged roughly 3 on a 4-point scale, with the achiever group scoring at least 1 point higher on completeness, transfer, post-workshop interest, and ease of adding or modifying features. Rapidly arriving at a working artifact appeared to boost self-efficacy and creative expression. Frictions included response variability, fragile handling of coordinates and dimensions, and naming inconsistencies, motivating safety nets that combine automated tests, lightweight static checks, and self-repair prompts. Limitations include a single-country, short-term sample and a single model; future work will broaden contexts and integrate automated testing pipelines.
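The safety nets mentioned above could take the shape of a test-and-repair loop: run the model's output against the learners' test criteria and, on failure, feed a defect report back as a self-repair prompt. Below is a minimal sketch of that idea, assuming Python as the generated language; the `ask_llm` callable, the test format, and `self_repair_loop` itself are illustrative names, not the platform's actual API.

```python
from typing import Callable, List, Tuple

def self_repair_loop(
    ask_llm: Callable[[str], str],      # prompt -> Python source (LLM call, stubbed below)
    spec: str,                          # the learner's natural-language requirement
    tests: List[Tuple[tuple, object]],  # (args, expected) pairs for the target function
    func_name: str,
    max_rounds: int = 3,
) -> str:
    """Generate code, run it against the tests, and re-prompt on failure."""
    prompt = f"Write a Python function `{func_name}` that satisfies: {spec}"
    for _ in range(max_rounds):
        source = ask_llm(prompt)
        namespace: dict = {}
        try:
            exec(source, namespace)  # a lightweight static check could precede this
            fn = namespace[func_name]
            failures = [
                f"{func_name}{args!r} returned {fn(*args)!r}, expected {expected!r}"
                for args, expected in tests
                if fn(*args) != expected
            ]
        except Exception as exc:  # code failed to run at all: report the error instead
            failures = [f"raised {type(exc).__name__}: {exc}"]
        if not failures:
            return source  # all tests pass: accept the artifact
        # Self-repair prompt: the original spec plus a defect report,
        # phrased the way a learner-tester would write it.
        prompt = (
            f"The following code was meant to satisfy: {spec}\n\n{source}\n\n"
            "It fails these checks:\n" + "\n".join(failures) + "\nPlease fix it."
        )
    raise RuntimeError("no passing version within the round limit")

# Usage with a stub generator standing in for the model:
if __name__ == "__main__":
    def stub(_prompt: str) -> str:
        return "def double(x):\n    return x * 2\n"
    print(self_repair_loop(stub, "double a number", [((3,), 6)], "double"))
```

Keeping the loop bounded by `max_rounds` matters in a classroom setting: a generation that never converges should surface as a visible failure the learner can rewrite, rather than stalling the session.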