Reinforcement learning models usually assume a stationary internal model structure for agents, consisting of fixed learning rules and environment representations. However, this assumption cannot account for real problem solving by individuals, who may exhibit irrational behaviors or hold inaccurate beliefs about their environment. In this work, we present a novel framework called dynamic structure learning, which allows agents to adapt their learning rules and internal representations dynamically. This structural flexibility enables a deeper understanding of how individuals learn and adapt in real-world scenarios. The dynamic structure learning framework reconstructs the most likely sequence of agent structures, drawn from a pool of learning rules and environment models, based on observed behaviors. The method provides insights into how an agent's internal model structure evolves as the agent transitions between different structures throughout the learning process. We applied our framework to study rat behavior in a maze task. Our results demonstrate that rats progressively refine their representation of the maze, evolving from a suboptimal, error-prone representation when learning the task to an optimal, higher-performance representation. Concurrently, the learning rules of slow learners transition from heuristic-based to more rational approaches. These findings underscore the importance of studying combinations of alternative learning rules and environment representations in complex behaviors. By going beyond simple reward-to-action associations, our research offers valuable insights into the cognitive mechanisms underlying decision making in natural intelligence. The dynamic structure learning framework allows a better understanding and modeling of how individuals in real-world scenarios exhibit a level of adaptability that current artificial intelligence systems have yet to achieve. (PsycInfo Database Record (c) 2026 APA, all rights reserved).
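To make the core idea concrete, reconstructing "the most likely sequence of agent structures" from a pool, given observed behavior, can be sketched as a Viterbi-style decoding over candidate structures with a penalty for switching. This is a minimal illustrative sketch under our own assumptions (the function name `most_likely_structure_sequence`, the fixed switching penalty, and the precomputed per-step log-likelihoods are all hypothetical), not the authors' actual inference procedure:

```python
def most_likely_structure_sequence(loglik, switch_penalty=2.0):
    """Viterbi-style decoding of the most likely sequence of agent
    structures, given per-timestep log-likelihoods of the observed
    behavior under each candidate structure.

    loglik: list of T dicts, each mapping structure name -> log P(obs_t | structure)
    switch_penalty: cost (in log units) for changing structure between steps
    Returns a list of T structure names (one per timestep).
    """
    structures = list(loglik[0].keys())
    # best[s] = score of the best path ending in structure s at the current step
    best = {s: loglik[0][s] for s in structures}
    back = []  # back[t][s] = best predecessor of structure s at step t

    for t in range(1, len(loglik)):
        new_best, ptr = {}, {}
        for s in structures:
            # either stay in the same structure, or pay a penalty to switch
            prev, score = max(
                ((p, best[p] - (switch_penalty if p != s else 0.0))
                 for p in structures),
                key=lambda x: x[1],
            )
            new_best[s] = score + loglik[t][s]
            ptr[s] = prev
        best = new_best
        back.append(ptr)

    # trace back the highest-scoring path from the best final structure
    last = max(best, key=best.get)
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```

On a toy sequence where a heuristic structure explains early behavior and a rational one explains later behavior, the decoder recovers a single switch point: `most_likely_structure_sequence([{"heuristic": -0.1, "rational": -2.0}] * 3 + [{"heuristic": -2.0, "rational": -0.1}] * 3, switch_penalty=0.5)` yields three "heuristic" steps followed by three "rational" steps, mirroring the heuristic-to-rational transition the abstract describes for slow learners.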