Current few-shot learning research often assumes predefined task configurations and evaluates under fixed N-way K-shot settings. In realistic deployments, the target configuration is unknown at training time, and both way and shot can differ from the training setup. We characterize this mismatch as a structural shift between training and test episodes and refer to the resulting evaluation as the open-task setting. Models must extrapolate to unseen structural combinations rather than interpolate within a fixed grid. We formalize three regimes that expose this structural generalization: cross-way and cross-shot, which we term single-dimensional changes because they vary way or shot alone, and cross-way-cross-shot, a two-dimensional change that varies both; we also consider a cross-domain extension. We propose Open-MAML, a lightweight enhancement of gradient-based meta-learning that integrates (i) dynamic classifier construction to expand or contract the final layer on the fly without retraining, (ii) an inner-loop learning rate adaptation rule that scales with task size to keep fast adaptation stable, and (iii) AdaDropBlock, a structured regularizer that improves robustness without architecture-specific tuning. Across extensive within-domain and cross-domain evaluations, Open-MAML consistently outperforms train-from-scratch, fine-tuning, and a strong metric baseline, achieving typical absolute accuracy gains of 1-7% under single-dimensional changes and 3-6% under two-dimensional changes. These results position open-task evaluation as a necessary complement to fixed-design benchmarks and provide a reproducible basis for studying structural generalization in few-shot learning.
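The first two components can be sketched concretely. The snippet below is a minimal illustration, not the paper's implementation: the names `build_head` and `scaled_inner_lr` are hypothetical, and the square-root scaling rule is one plausible choice for a learning rate that shrinks as the episode grows (AdaDropBlock is omitted here).

```python
import torch
import torch.nn as nn

def build_head(feature_dim: int, n_way: int) -> nn.Linear:
    """Dynamic classifier construction (sketch): create a fresh N-way
    output layer on the fly, so the backbone needs no retraining when
    the test-time way differs from the training configuration."""
    return nn.Linear(feature_dim, n_way)

def scaled_inner_lr(base_lr: float, n_way: int, k_shot: int,
                    ref_way: int = 5, ref_shot: int = 1) -> float:
    """Inner-loop learning-rate adaptation (sketch): scale the step size
    with episode size relative to a reference configuration, to keep
    fast adaptation stable.  The sqrt rule here is an assumption."""
    scale = ((ref_way * ref_shot) / (n_way * k_shot)) ** 0.5
    return base_lr * scale

# Usage: adapt a new head on a 7-way 5-shot episode with a fixed backbone.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(32, 16), nn.ReLU())
head = build_head(16, n_way=7)
lr = scaled_inner_lr(0.01, n_way=7, k_shot=5)

support_x = torch.randn(35, 32)                    # 7 classes x 5 shots
support_y = torch.arange(7).repeat_interleave(5)   # class labels
opt = torch.optim.SGD(head.parameters(), lr=lr)
for _ in range(5):                                 # a few inner-loop steps
    opt.zero_grad()
    loss = nn.functional.cross_entropy(head(backbone(support_x)), support_y)
    loss.backward()
    opt.step()
```

Under this rule the step size equals `base_lr` at the reference 5-way 1-shot configuration and shrinks smoothly for larger episodes, where the support loss is computed over more examples.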