Abstract

Overfitting remains a significant challenge in deep learning, often arising from data outliers, noise, and limited training data. To address this, the Divide2Conquer (D2C) method was previously proposed, utilizing data partitioning as a structural regularizer. By training identical models independently on isolated data shards, this strategy enables learning more consistent patterns while minimizing the influence of global noise. However, D2C's standard aggregation treats all subset models equally, failing to filter out edge models that have overfitted to local noise. Building upon this foundation, we introduce Dynamic Uncertainty-Aware Divide2Conquer (DUA-D2C), an advanced technique that refines the aggregation process. DUA-D2C dynamically weights the contributions of subset models based on their performance on a shared validation set, employing a novel composite score of accuracy and normalized prediction entropy. This intelligent aggregation allows the central model to preferentially learn from subsets yielding more generalizable and confident edge models. In this work, we provide a rigorous theoretical justification for this approach, analytically demonstrating how dynamic parameter fusion reduces model variance. Empirical evaluations on benchmark datasets spanning image, audio, and text domains demonstrate that DUA-D2C significantly improves generalization. Our analysis includes evaluations of decision boundaries, loss curves, and ablation studies, highlighting that DUA-D2C provides additive performance gains even when applied on top of standard regularizers like Dropout. This study establishes DUA-D2C as a theoretically grounded and effective approach to combating overfitting in modern deep learning. The source code for this study is available on GitHub at https://github.com/Saiful185/DUAD2C.
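The aggregation described above can be illustrated with a minimal NumPy sketch. The exact composite score and fusion rule follow the paper's definitions; the function names, the `alpha` blending parameter, and the linear form of the score here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def normalized_entropy(probs):
    # Mean prediction entropy over validation samples,
    # normalized by log(num_classes) so it lies in [0, 1].
    eps = 1e-12
    ent = -np.sum(probs * np.log(probs + eps), axis=1)
    return float(np.mean(ent) / np.log(probs.shape[1]))

def composite_score(accuracy, probs, alpha=0.5):
    # Hypothetical composite: reward validation accuracy,
    # penalize uncertainty (high normalized entropy).
    return alpha * accuracy + (1 - alpha) * (1 - normalized_entropy(probs))

def aggregate(weights_list, scores):
    # Score-weighted average of per-subset model parameters.
    # weights_list: one list of parameter arrays per subset model.
    s = np.asarray(scores, dtype=float)
    s = s / s.sum()
    return [sum(w * layer for w, layer in zip(s, layers))
            for layers in zip(*weights_list)]
```

With equal scores this reduces to D2C's uniform averaging; a subset model that is inaccurate or highly uncertain on the shared validation set receives a proportionally smaller weight in the fused central model.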