This study investigates an inpainting-based data generation approach for urban change detection and analyzes its impact on model generalization. The proposed method constructs temporally consistent before-and-after image pairs by applying inpainting to existing images, enabling seamless integration with existing change detection frameworks. Rather than replacing real-world datasets, the generated data are designed to increase training diversity and mitigate dataset-specific biases. Comprehensive experiments were conducted with multiple representative change detection architectures across different datasets. The results show that incorporating the generated data alongside real training samples improves cross-dataset generalization, particularly on unseen target domains, although the degree of improvement varies with model characteristics. By contrast, training on the generated data alone yields limited generalization when evaluated on real test sets. These findings indicate that inpainting-based synthetic data can serve as a complementary training resource for enhancing the robustness of urban change detection. Future work will focus on reducing the domain gap between synthetic and real data, generating more diverse change scenarios, and improving the reliability of data quality assessment.
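To make the core pair-generation idea concrete, the sketch below illustrates how inpainting a masked region of a real "before" image can yield a temporally consistent "after" image, with the mask doubling as the pixel-level change label. This is a minimal illustration, not the paper's pipeline: the abstract does not specify the inpainting backend (likely a learned generative model), so OpenCV's classical `cv2.inpaint` stands in as a placeholder, and all names, paths, and shapes are hypothetical.

```python
import numpy as np
import cv2

def make_change_pair(before_bgr: np.ndarray, change_mask: np.ndarray):
    """Create a temporally consistent (before, after, label) triple.

    before_bgr  : H x W x 3 uint8 image (the real "before" scene).
    change_mask : H x W uint8 mask, 255 where a synthetic change
                  (e.g. a removed structure) should appear.
    """
    # Inpaint only the masked region; pixels outside the mask are left
    # untouched, which is what keeps the pair temporally consistent.
    # A generative inpainting model would replace this call in practice.
    after_bgr = cv2.inpaint(before_bgr, change_mask, inpaintRadius=5,
                            flags=cv2.INPAINT_TELEA)
    # The inpainting mask doubles as the binary change-detection label.
    return before_bgr, after_bgr, (change_mask > 0).astype(np.uint8)

# Usage (illustrative): erase a hypothetical building footprint.
before = cv2.imread("scene.png")                     # placeholder path
mask = np.zeros(before.shape[:2], dtype=np.uint8)
cv2.rectangle(mask, (120, 80), (200, 160), 255, -1)  # synthetic footprint
b, a, label = make_change_pair(before, mask)
```

Because the "after" image differs from the "before" image only inside the mask, such triples can be fed to standard change detection frameworks without modification, which is the seamless integration the abstract refers to.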