The rapid proliferation of accessible generative artificial intelligence (GenAI) presents a fundamental threat to the integrity of highway construction quality assurance (QA). Legacy systems are vulnerable to manipulation and fraud enabled by GenAI, especially through intuitive human–AI collaboration (“co-intelligence”). This paper introduces the Expert-AI Co-Research (EACR) framework—a structured way for a domain expert to work with GenAI across the research process—and applies it to conduct AI-augmented adversarial simulations that probe representative QA workflows. Four case studies provide, to the best of our knowledge, the first empirical demonstrations in the highway QA context that accessible GenAI can automate manipulation of testing data, including standardized binary file formats; help “game” specifications and procedures; generate or edit in place photographic and technical QA imagery; and bypass built-in model safety features under adversarial prompting, with session-level “state persistence.” Harmful outputs were not operationalized, and operational details are withheld under responsible disclosure. Collectively, these findings show that GenAI has democratized specialist-level manipulation, weakened the digital chain of custody, and elevated risk from isolated incidents to program-level decision risk. The paper contributes the EACR framework and AI-augmented adversarial simulations as practical, repeatable tools for agencies to assess their systems, and provides the empirical basis for a necessary governance shift from “trust but verify” to “verify then trust”—emphasizing verifiable data origin, continuous monitoring, secure system design, and cross-disciplinary oversight. The approach offers agencies a concrete path to reassess and harden QA in the GenAI era.
Published in: Transportation Research Record: Journal of the Transportation Research Board