Abstract

Material selection is fundamental to the design process, as it significantly affects the cost, performance, appearance, manufacturability, and sustainability of a product. It is also a complex, open-ended challenge that forces designers to continuously adapt to new information, balance diverse stakeholder demands, weigh trade-offs, and navigate uncertainties to achieve the optimal outcome. Previous studies have explored the potential of large language models (LLMs) to assist in the material selection process, with findings suggesting that LLMs can provide valuable support. However, discrepancies between LLM outputs and expert recommendations indicate the need for further investigation. Recently, agentic AI methods have been developed to address the limitations of standalone LLMs. AI agents integrate LLMs with external tools, allowing them to retrieve and analyze domain-specific information and iteratively refine their responses. Other efforts to enhance LLM performance include the development of reasoning models that implicitly incorporate multi-step thinking processes when approaching complex tasks. This study compares standalone LLMs and agentic AI frameworks, both enhanced with a reasoning process, to evaluate how each approach contributes to more effective emulation of expert decision-making in material selection. Our findings show that adding reasoning to LLMs increases token usage but may reduce performance differences across model sizes and prompting methods. We observed improved alignment with expert results under parallel prompting compared to previous non-thinking models, while the agentic framework showed reduced performance. These insights contribute to a broader understanding of AI integration in design workflows.