The Rubik’s Cube is a canonical combinatorial search problem with an enormous discrete state space, making it a compelling testbed for evaluating algorithmic performance in artificial intelligence. Efficient cube-solving techniques have broader implications for robotics, automated planning, and optimization in resource-constrained environments. However, prior research has focused primarily on machine learning or phase-based solvers, with limited comparative analysis of classical search strategies. In particular, no scalable benchmark exists that contrasts uninformed methods such as Depth-First Search (DFS) and Breadth-First Search (BFS) with an informed method such as A* across varying cube complexities. To address this gap, we developed a modular Python simulator that handles cubes from 2×2×2 to 6×6×6 and generates randomized scramble sequences. We evaluated DFS, BFS, and A* under uniform depth and heuristic constraints, measuring solve time, memory usage, and solution length over 100 randomized trials per configuration. Results show that A* consistently achieves the best trade-off between speed and memory efficiency as cube complexity increases. BFS guarantees optimal solutions but exhibits prohibitive memory usage, while DFS maintains low memory consumption at the cost of longer, suboptimal paths. This work introduces a reproducible benchmarking framework for Rubik’s Cube solvers and provides baseline performance metrics for future research on classical and heuristic-guided search methods.
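The BFS/A* comparison described above can be illustrated with a minimal sketch. This is not the paper's actual simulator: the move set below is a hypothetical toy permutation puzzle standing in for cube turns, and the heuristic (ceil of misplaced slots divided by the maximum slots a move can fix) is an assumed admissible stand-in for a real cube heuristic.

```python
from collections import deque
import heapq

# Hypothetical toy puzzle: a state is a permutation of 6 slots, and each
# "move" is a fixed permutation of positions (not real cube turns).
MOVES = {
    "a": (1, 2, 0, 3, 4, 5),  # cycle slots 0-1-2
    "b": (0, 1, 2, 4, 5, 3),  # cycle slots 3-4-5
    "c": (3, 1, 2, 0, 4, 5),  # swap slots 0 and 3
}
GOAL = (0, 1, 2, 3, 4, 5)

def apply_move(state, perm):
    return tuple(state[i] for i in perm)

def bfs(start):
    """Breadth-first search: optimal in move count but memory-hungry,
    since the visited set grows with the explored layer."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == GOAL:
            return path
        for name, perm in MOVES.items():
            nxt = apply_move(state, perm)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

def heuristic(state):
    """Admissible: each move above touches at most 3 slots, so
    ceil(misplaced / 3) never overestimates the remaining moves."""
    wrong = sum(1 for i, v in enumerate(state) if v != i)
    return -(-wrong // 3)  # ceiling division

def astar(start):
    """A*: optimal with an admissible heuristic, expands fewer nodes
    than BFS as depth grows."""
    frontier = [(heuristic(start), 0, start, [])]
    best = {start: 0}
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if state == GOAL:
            return path
        for name, perm in MOVES.items():
            nxt = apply_move(state, perm)
            if nxt not in best or g + 1 < best[nxt]:
                best[nxt] = g + 1
                f = g + 1 + heuristic(nxt)
                heapq.heappush(frontier, (f, g + 1, nxt, path + [name]))
    return None
```

Because the heuristic is admissible, both solvers return minimum-length solutions on this toy puzzle, so their solution lengths agree while their frontier sizes differ; the paper's benchmark measures exactly that trade-off at cube scale.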