Network regularization is a valuable approach for improving network generalization. Unlike previous regularization methods that discard or mix regions at the image level, this paper proposes an attention-based regularization method called Feature Selection and Mixing (FSelectMix). First, FSelectMix uses a dual-attention mechanism to select the informative features crucial for task reconstruction and generates adaptive confidence labels for re-recognizing these features, enhancing the neural network's learning potential. By operating at the feature level, FSelectMix significantly improves the robustness and overall performance of convolutional neural networks. Second, the proposed method introduces a multi-objective prediction task with a knowledge distillation network and an adaptive confidence dynamic adjustment strategy to exploit the reconstructed feature samples. This dual strategy not only refines the learning process but also lets the network adapt dynamically to varying levels of feature confidence, yielding more reliable and accurate predictions. Finally, FSelectMix can be seamlessly integrated with existing data augmentation techniques, further boosting model performance at different levels. We evaluated FSelectMix on the CIFAR10, CIFAR100, and Tiny-ImageNet datasets and obtained significant performance improvements on all three. The experimental results validate the effectiveness of FSelectMix and demonstrate its potential for broad application. Code is available at https://github.com/ZhugeKongan/FSelectMix.
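To make the feature-level selection-and-mixing idea concrete, here is a minimal NumPy sketch. It is an illustration under stated assumptions, not the paper's implementation: channel scoring via global average pooling stands in for the dual-attention mechanism, and the function name, `keep_ratio` parameter, and label-mixing rule are all hypothetical.

```python
import numpy as np

def feature_select_mix(feat_a, feat_b, label_a, label_b, num_classes, keep_ratio=0.5):
    """Mix two feature maps (C, H, W) by selecting salient channels.

    Hypothetical sketch: rank channels of sample A by a simple saliency
    proxy (global average activation), keep the top-k, and fill the
    remaining channels from sample B. The paper's dual-attention scoring
    would replace the global-average proxy used here.
    """
    scores = feat_a.mean(axis=(1, 2))           # (C,) channel saliency proxy
    k = max(1, int(keep_ratio * feat_a.shape[0]))
    keep = np.argsort(scores)[-k:]              # indices of the top-k channels

    mixed = feat_b.copy()
    mixed[keep] = feat_a[keep]                  # graft A's salient channels onto B

    # Adaptive confidence label: soft label weighted by each sample's
    # share of channels in the mixed feature map.
    lam = k / feat_a.shape[0]
    y = np.zeros(num_classes)
    y[label_a] += lam
    y[label_b] += 1.0 - lam
    return mixed, y
```

In this sketch the soft label plays the role of the adaptive confidence label: the more channels a sample contributes, the more confidence its class receives during the re-recognition task.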